I received an email recently about an event happening later this month in London, UK. It's the world's first official Testathon (testathon.co). The site describes the event as "like a hackathon but for testers. You’ll be testing apps in teams with some of the best testers in the world." I know some of the judges and think this will be a fantastic opportunity for those who can attend.
When I received this notice my first thought was: this is a really cool thing and I should tell people about it. My second thought was: I don't normally write about conferences so do I blog about this or not? Well, yes, I decided to blog about it.
In the "Context-Driven" Software Testing community, actions speak louder than words. That's one of the reasons that certifications (like those from the ISTQB and QAI) are treated with low regard and even disdain from some people in the testing community. The main issue here is that these paper transactions (certifications) emphasize memory-work over hands-on practice. Here's a Quick Acceptance Test: does the [particular] certification reflect (1) a level of demonstrable competence and ability in the desired field, or (2) the ability to spend money and regurgitate specific knowledge without context?
Test Management is Wrong
Test Management is wrong. There. I said it.
I can't believe it took me this long to notice the obvious. If you are doing Software Development in any fashion, and are worried about how to manage your testing to develop a "quality" product, stop it.
Let's be clear about what I mean here. If you consider any software development life cycle (SDLC) process, you will find activities like the following arranged in some fashion or other:
- Requirements gathering, specification
- Software design
- Implementation and Integration
- Testing (or Validation)
- Deployment
- Lather, Rinse, Repeat (i.e. Maintain, Enhance, Fix, Mangle, Spin, and so on)
These activities aren't Waterfall or Agile or anything else, they are just activities. HOW you choose to do them will reflect your SDLC. I don't care about that right now. The part I'm picking on is the Testing bit near the middle, regardless of whether you do them in an agile, Waterfall, or some other way.
In particular, I am picking on the fallacy or myth that a good Test Management plan/process is what you need to develop and release a high Quality product.
Testing is a Medium
In a few days I will be giving a presentation to the local Agile/Lean Peer 2 Peer group here in town. The group has a web site - Waterloo Agile Lean, and the announcement is also on the Communitech events page.
I noticed the posted talk descriptions are shorter than what I wrote. The Waterloo Agile Lean page has this description:
"This session will introduce the basic foundation of Exploratory Testing and run through a live, interactive demo to demonstrate some of how it’s done. Bring your open minds and questions and maybe even an app to test. If ET is new to you, prepare to get blown away!"The Communitech page has this description:
"Exploratory Testing is the explosive sound check that helps us see things from many directions all at once. It takes skill and practice to do well. The reward is a higher-quality, lower-risk solution that brings teams a richer understanding of the development project.
This session will introduce the basic foundation of Exploratory Testing and run through a live, interactive demo to demonstrate some of how it's done. Bring your open minds and questions and maybe even an app to test. If ET is new to you, prepare to get blown away!"
Testers, Learn about Agile (and Lean)
Let me tell you about something called Dramatic Irony. You see it in movies, television shows, plays and in many other places. It happens when you (as the audience or observer) see or understand something that the main characters don't. Oftentimes this is funny, sometimes it's not. Personally, I am one of those who like to laugh when I see this happen.
On my learning/education quest over a decade ago, I took many different positions and roles within various IT organisations so that I could learn different aspects of Quality. I went through various phases, and the one I am least proud of was the "Quality champion." This wasn't a job title so much as a belief that (mis-)guided my actions. The role/part/perspective came mainly from believing what my employer(s) told me at the time - namely that "the QA/Test team was responsible for quality."
If you have worked in Software Development for a while, and perhaps for a larger organisation, you have likely seen someone who believes they are a Quality Champion. They don't want to see any (known) bugs go out; they check up on everyone in the team to see that they have done their reviews or had someone else inspect their work before passing it onto the next person/team; they join committees to create, document, maintain or present processes that will increase the quality of the delivered products/solutions; and so on.
Ah, the poor misguided fools. Bless their hearts.
Quality Center Must Die
It is not a matter of "if" -- it is a matter of "when" HP's Quality Center software will die. And you, my dear readers, will help make that happen.
"How?" you may ask? Simple. There are two things you should do: (1) think, and (2) don't put up with crap that gets in the way of delivering value to the customer and interacting intelligently with other human beings.
But I am getting ahead of myself. Let's rewind the story a bit...
Several months ago I was hired by my client to help train one of the test teams on agile and exploratory testing methods. The department has followed a mostly Waterfall development model until now and wants to move in the Agile direction. (A smart choice for them, if you ask me.) Why am I still there after all this time? That's a good question.
After attending the Problem Solving Leadership course last year, and after attending a few AYE conferences, I changed my instructional style to be more like the kind of consultant who empowers the client with whatever they need to help themselves learn and grow. It's a bit of a slower pace, but the results are more positive and long-lasting.
I am a part of a "pilot" agile/scrum team and am working closely with one of the testers (I will call him "Patient Zero") to coach him on good testing practices to complement the agile development processes. I have done this several times now at different clients, so this is nothing new to me. One of the unexpected surprises that cropped up this time was that this development team is not an end-to-end delivery team, so when they are "done" their work, the code moves into a Waterfall Release process and it all kind of falls apart. There are still some kinks to be solved here and I am happy to see some really bright, caring people trying to solve these problems. So that's okay.
"How?" you may ask? Simple. There are two things you should do: (1) think, and (2) don't put up with crap that gets in the way of delivering value to the customer and interacting intelligently with other human beings.
But I am getting ahead of myself. Let's rewind the story a bit...
Several months ago I was hired by my client to help train one of the test teams on agile and exploratory testing methods. The department has followed a mostly Waterfall development model until now and wants to move in the Agile direction. (A smart choice for them, if you ask me.) Why am I still there after all this time? That's a good question.
After attending the Problem Solving Leadership course last year, and after attending a few AYE conferences, I changed my instructional style to be more the kind of consultant that empowers the client with whatever they need to help themselves learn and grow. It's a bit of a slower pace, but the results are more positive and long-lasting.
I am a part of a "pilot" agile/scrum team and am working closely with one of the testers (I will call him "Patient Zero") to coach him on good testing practices to complement the agile development processes. I have done this several times now at different clients, so this is nothing new to me. One of the unexpected surprises that cropped up this time was that this development team is not an end-to-end delivery team, so when they are "done" their work, the code moves into a Waterfall Release process and it all kind of falls apart. There are still some kinks to be solved here and I am happy to see some really bright, caring people trying to solve these problems. So that's okay.
Thoughts on the StarEast 2011 conference
I first attended StarEast in 1999. I remember the day-long tutorial I attended (by Rick Craig), and two track sessions - one by Cem Kaner on hiring testers, and one by James Whittaker on "Exploiting a Broken Design Process." I know I attended other sessions but I don't have active memories of them any more. I do remember the experience of attending the conference - one of surprise and excitement. Surprise at seeing so many other people in the testing community with questions and problems similar to my own, and excitement at the speakers with lots of great information and advice to give.
Fast forward to 2011 - I returned to StarEast, this time as a speaker. I suppose I didn't need to wait 12 years to return as a speaker. I didn't intentionally ignore the conference. I think I've been busy with other things and it just didn't come up - until last Fall when I received an invitation in my inbox to submit a proposal. I'm really glad I went.
Some things were familiar - the beautiful hotel, the Florida sunshine, the amazingly fresh orange juice, and the basic conference format. One thing that was different for me this time around was the number of people/speakers that I knew who were also speaking at the conference. After having attended and spoken at several other conferences over the years, I guess I have gotten to know many of the popular speakers.
I was happy to see many more speakers whom I had never heard of before. That tells me that the community is still growing after all this time and that there are still many more people sharing their knowledge to help enlighten future generations of testing leaders. That's awesome!
Testing & Programming = Oil & Water
I was watching a science program just now and it occurred to me that Testing is very much science. And then I wondered about Programming.
I started in IT over 22 years ago doing programming. For me, the process of programming broke down to three parts: figuring out the algorithm to solve the problem, implementing/coding the solution, and cleaning up the code (for whatever reason - e.g. maintainability, usability of UI, etc.). It gets more complicated than that of course, but I think that about sums up the major activities as I saw them. (SIDE NOTE: I didn't write those to mirror TDD's Red-Green-Refactor, but it does align nicely that way.)
When I think back on my experiences in programming, I don't see a lot of overlap with my experiences in Science (~ 8 years studying, researching and doing Physics & Environmental Science + teaching Science on top of that). Science is about answering questions. The Scientific Method provides a framework for asking and answering questions. Programming isn't about that. Building software isn't about that. I'm having difficulty at the moment trying to see how testing and programming go together.
It occurs to me that schools and universities don't have any courses that teach students how to build software. It also occurs to me that schools and universities do provide students with opportunities to learn and develop the skills required to build software well; the schools just don't know they're doing it, and consequently the students don't get that opportunity intentionally.
I'm not talking about learning to program. That's trivial. Building software isn't about programming.
Software Testing "Popcorn" button
I made myself some microwave popcorn for a snack just now. I placed the popcorn bag in the microwave, pressed the 'popcorn' button and then 'start'. Someone next to me said: "There's a popcorn button?" Um, yes, there is. In fact, there has been a 'popcorn' button on every microwave oven I've ever seen.
I explained to my colleague that the recommended time on the bag (in this case it was 2 min 30 sec) doesn't work on every oven. Different ovens have different power output and so the actual cook time may vary. If I go with the default time, it might burn or be under-done and leave too many unpopped kernels in the bag. You could figure out the correct time in a few ways.
Using MS Outlook to support SBTM
Okay, to recap, Session-Based Test Management (SBTM) is a test framework to help you manage and measure your Exploratory Testing (ET) effort. There are 4 basic elements that make this work: (1) Charter or mission (the purpose that drives the current testing effort), (2) Time-boxed periods (the 'sessions'), (3) Reviewable result, and (4) Debrief. There are many different ways that you might implement or apply these elements in your team or testing projects.
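To make those four elements a little more concrete, here is one possible shape for a reviewable session result, sketched in Ruby. The field names and values are entirely made up for illustration; SBTM doesn't prescribe this structure, so adapt it to whatever your team actually reviews in the debrief.

```ruby
# A hypothetical, minimal "session sheet" capturing the four SBTM elements.
# None of these field names or values come from a prescribed format.
session = {
  charter: "Explore the checkout flow with malformed postal codes " \
           "to discover input-validation gaps",    # (1) the mission
  timebox: "90 minutes",                            # (2) the time-boxed period
  notes: [                                          # (3) the reviewable result
    "Tried CA, US and UK postal formats in billing and shipping forms",
    "BUG: a blank postal code is accepted on the shipping form"
  ],
  debrief: "Walked through notes with the test lead; spun off a follow-up " \
           "charter on shipping-rule validation"    # (4) the debrief outcome
}

puts session[:charter]
```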
Let's take a look at tracking the testing effort from strictly a Project Management perspective. Years ago, when I first became a test manager, I was introduced to the idea of the 60% 'productive' work day as a factor to consider when estimating effort applied to project schedules. That is, in a typical 8-hour workday you don't really get 8 complete, full hours of work from someone. I don't believe it's mentally possible to get that. The brain needs a break, as does the body, and there are many natural distractions in the workplace (meetings, email, breaks, support calls, stability of the code or environments, and so on), so the reality is that the number of productive working hours for each employee is actually something less than the total number of hours they're physically present in the workplace.
That 'productivity' factor changes with each person, their role and responsibilities, the number of projects in the queue, and so on. Applying some statistical averaging to my past experiences, I find that 60% seems about right for a tester dedicated to a single project. I have worked with some teams that have been more productive and some much less.
So what does this look like? If we consider an 8-hour day, 60% is 4.8 hours. I'm going to toss in an extra 15 minute break or distraction and say that it works out to about 4.5 hours of productive work from a focussed employee in a typical 8-hour day. Again, it depends on the person and the tasks that they're performing, so this is just an averaging factor.
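If you like seeing the arithmetic written out, here is a small Ruby sketch of that calculation. The 60% factor and the extra 15-minute allowance are the numbers from above; the 90-minute session length is my own assumption (a common SBTM convention), so swap in whatever length your team uses.

```ruby
# Back-of-the-envelope math for productive hours and SBTM sessions per day.
# 60% productivity and the extra 15 minutes come from the discussion above;
# the 90-minute session length is an assumption, not a rule.
hours_per_day     = 8.0
productivity      = 0.60
extra_break_hours = 0.25    # the extra 15-minute break/distraction

productive_hours = hours_per_day * productivity - extra_break_hours
# => 4.8 - 0.25 = 4.55, which I round to roughly 4.5 hours

session_length_hours = 1.5  # assumed "normal" 90-minute session
sessions_per_day     = (productive_hours / session_length_hours).floor

puts "Productive hours per day: #{productive_hours.round(2)}"  # ~4.55
puts "Full sessions per day:    #{sessions_per_day}"           # 3
```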
Test-Driven Development isn't new
I used TDD as an analogy with a tester today to explain how logging bugs in a bug tracking system drives the development. A bug report represents a failing test (once you verify that it's really a bug, that is) according to some stakeholder need/want.
In Test-Driven Development, the programmer first writes/automates a test that represents the user story the customer/user wants. The test fails. The programmer then writes just enough code to pass the test and moves on (refactoring code along the way, etc.).
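As a rough illustration (in Ruby with minitest, not taken from any particular project), here is what that red-green rhythm can look like. The Cart class and the 13% tax rule are hypothetical; the point is only that the test exists, and fails, before the code that satisfies it.

```ruby
require "minitest/autorun"

# Step 1 ("red"): this test is written first. Without the Cart class defined
# below it fails, which is exactly what we want at this point.
class CartTest < Minitest::Test
  def test_total_includes_tax
    cart = Cart.new
    cart.add(price: 10.00)
    assert_in_delta 11.30, cart.total, 0.001  # hypothetical story: totals include 13% tax
  end
end

# Step 2 ("green"): just enough code to make the test pass, no more.
# Step 3: refactor if needed, then move on to the next story.
class Cart
  def initialize
    @prices = []
  end

  def add(price:)
    @prices << price
  end

  def total
    @prices.sum * 1.13
  end
end
```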
It's much the same with regular system testing (i.e. in the absence of agile/TDD practices) where a tester identifies and logs a bug in the bug tracking system. One difference is that these bug reports/tests aren't always automated. (Okay, I've never seen anyone automate these bug reports/tests before, but I like to believe that some companies/dev teams out there actually do this.) That doesn't change the fact that a bug report is the failing test. Even if it's a manual test, it drives the development change, and then the bug report is checked/retested to see that the fix works as expected.
Bug regression testing, then, is a requirement for good testing and system/software development, not an option.
So, while the agile practices of TDD and others may seem new, I see this one as a retelling of a common tester-programmer practice. If anything, I see TDD as an opportunity to tighten/shorten/quicken the loop between testing feedback and development. With practice, TDD helps programmers develop the skills and habits they need to create code and systems with confidence -- to know that as the system grows, the specific needs of the customers are being met every step along the way. No one gets left behind.
How can we, as testers, help? If your programmers don't practice TDD or automate tests, start investigating ways that you can do this. Investigate Open Source scripting languages. Engage your programmers in discussions of testability of the interfaces. There are many articles and presentations on the internet on the topics of test/check automation, frameworks and Domain Specific Languages (DSL).
Start reading. Participate in discussions (in real life and online). Start developing scripting skills (I recommend Ruby, of course, especially to the tester newbie). If you don't feel confident with your programming skills, help hire someone onto your test team that can help all the testers advance their skills, knowledge, and productivity in that area.
Be the Quality Advocate by putting your words into practice. You want your programmers to start practicing TDD? Show them how you can do it. You are already doing it - scripting/automating the checks that demonstrate a bug failure is just the next step.
Start by automating a single bug failure. Take it from there.
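If you want a picture of what that first step might look like, here is a hedged sketch in Ruby. The bug, the application, and the parse_date helper are all made up for illustration; the idea is simply that one bug report becomes one small scripted check that keeps the fix fixed.

```ruby
require "minitest/autorun"
require "date"

# Imagine a (hypothetical) bug report: "An impossible date like 31-02-2023
# crashes the order form instead of being rejected with a message."
# The agreed fix: invalid dates should return nil rather than raise an error.
def parse_date(text)
  Date.strptime(text, "%d-%m-%Y")
rescue ArgumentError
  nil
end

# The bug report turned into a regression check: it failed (crashed) before
# the fix and should keep passing afterwards.
class OrderDateBugRegressionTest < Minitest::Test
  def test_impossible_date_is_rejected_not_crashed
    assert_nil parse_date("31-02-2023")
  end

  def test_valid_date_still_parses
    assert_equal Date.new(2023, 2, 28), parse_date("28-02-2023")
  end
end
```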