Let me tell you about something called Dramatic Irony. You see it in movies, television shows, plays and many other places. It happens when you (as the audience or observer) see or understand something that the main characters don't. Oftentimes this is funny; sometimes it's not. Personally, I am one of those who like to laugh when I see it happen.
On my learning/education quest over a decade ago, I took many different positions and roles within various IT organisations so that I could learn different aspects of Quality. I went through various phases, and the one I am least proud of was the "Quality champion." This wasn't a job title so much as a belief that (mis-)guided my actions. The role/part/perspective came mainly from believing what my employer(s) told me at the time - namely that "the QA/Test team was responsible for quality."
If you have worked in Software Development for a while, and perhaps for a larger organisation, you have likely seen someone who believes they are a Quality Champion. They don't want to see any (known) bugs go out; they check up on everyone in the team to see that they have done their reviews or had someone else inspect their work before passing it on to the next person/team; they join committees to create, document, maintain or present processes that will increase the quality of the delivered products/solutions; and so on.
Ah, the poor misguided fools. Bless their hearts.
The Future(s) of Software Testing
This topic keeps coming up in various discussions so here are some thoughts on what I think a future may hold for us one day. What does it mean to predict the future? Does it mean it is inevitable? If it is something appealing, maybe it is something we can work towards.
What does it mean to talk about the Future of Software Testing without talking about "Quality" (whatever that is)? I believe that Testing is a means of helping you attain the desired quality, but that on its own it is not an indicator of what the delivered quality will be. I think it is fair to speculate on the practice of Testing in isolation from the idea of Quality, just as it is fair to speculate on the kind of vehicle you use for travel without claiming it is a clear indicator of the quality of the journey.
When it comes to predicting the future, how far ahead do we look? It used to bother me that H.G. Wells chose to go 800,000 years into the future in his famous work, The Time Machine. Some authors go 100 years, or even just 5 or 10, to tell their tale. I will not be writing about what will happen in 5 years' time, and I really hope it doesn't take 100 years. I don't know how long some of these events will take to manifest, but I do have some idea of 'markers' that we may see along the way.
When it comes to Software Testing, two thoughts jump immediately to mind: 1) Software Testing is an inseparable part of the creative Software/Solution Development process; and 2) there will be many different possible futures depending on how you do testing today. Put another way, there is no one right way to solve a problem, and creating software is a complex problem-solving activity performed in shifting organisations of human interaction so there are many ways to do it. In my opinion, the technical 'fiddly bits' like code, tests and user documentation pale in comparison to the difficulty of those two primary challenges: (1) solving the real problem, and (2) working with others to get the job done.
When we ask what is the future of software testing, we are really asking what is the future of software development. So what will Testing look like in the future? Well, what does Testing look like today? What does Development look like today?
Hobbies and Interests
Several years ago I wrote an article summarising some of the key points I keep in mind while interviewing candidates for a test team. The article is called "Hiring Software Testers in an Information Age" and is available as a PDF on my main site. The article was originally targeted at recruiters who kept asking me for advice on hiring software testers, and they were always surprised at the level of detail I went into when describing what it takes to hire a good person for a testing position.
Conversations with recruiters over coffee would always start the same. I would say something like: if you are just trying to find a warm body to fill a position, then you don't need to hear what I have to say. If you want to hire someone who thinks and has a good chance of fitting in with the culture of the team and organisation to provide value, then it is a complex problem that requires insights into what the position actually involves.
There are about a dozen different checkpoints that I go through when considering and interviewing candidates, and the paper I wrote touched on some of the major points but not all of them. Actually, I even removed some of them from the article as early drafts had too much information. My intention was to get some of the important points across without writing a book.
Recently, a colleague and friend, Michael Mahlberg tweeted the following:
RT @NolanBushnell: At Atari we hired based on hobbies and not grades in school. We ended up with the best engineering group in the world.

I liked that comment and followed up with a supporting tweet:
On Hiring: if a résumé or cover letter doesn't describe Hobbies or other Interests, I usually skip it.

This sparked some conversation on Twitter and I want to elaborate on my comment here.
Quality Center Must Die
It is not a matter of "if" -- it is a matter of "when" HP's Quality Center software will die. And you, my dear readers, will help make that happen.
"How?" you may ask? Simple. There are two things you should do: (1) think, and (2) don't put up with crap that gets in the way of delivering value to the customer and interacting intelligently with other human beings.
But I am getting ahead of myself. Let's rewind the story a bit...
Several months ago I was hired by my client to help train one of the test teams on agile and exploratory testing methods. The department has followed a mostly Waterfall development model until now and wants to move in the Agile direction. (A smart choice for them, if you ask me.) Why am I still there after all this time? That's a good question.
After attending the Problem Solving Leadership course last year, and after attending a few AYE conferences, I changed my instructional style to be more the kind of consultant that empowers the client with whatever they need to help themselves learn and grow. It's a bit of a slower pace, but the results are more positive and long-lasting.
I am a part of a "pilot" agile/scrum team and am working closely with one of the testers (I will call him "Patient Zero") to coach him on good testing practices to complement the agile development processes. I have done this several times now at different clients, so this is nothing new to me. One surprise that cropped up this time was that this development team is not an end-to-end delivery team, so when they are "done" their work, the code moves into a Waterfall Release process and it all kind of falls apart. There are still some kinks to be worked out here, and I am happy to see some really bright, caring people trying to solve these problems. So that's okay.
"How?" you may ask? Simple. There are two things you should do: (1) think, and (2) don't put up with crap that gets in the way of delivering value to the customer and interacting intelligently with other human beings.
But I am getting ahead of myself. Let's rewind the story a bit...
Several months ago I was hired by my client to help train one of the test teams on agile and exploratory testing methods. The department has followed a mostly Waterfall development model until now and wants to move in the Agile direction. (A smart choice for them, if you ask me.) Why am I still there after all this time? That's a good question.
After attending the Problem Solving Leadership course last year, and after attending a few AYE conferences, I changed my instructional style to be more the kind of consultant that empowers the client with whatever they need to help themselves learn and grow. It's a bit of a slower pace, but the results are more positive and long-lasting.
I am a part of a "pilot" agile/scrum team and am working closely with one of the testers (I will call him "Patient Zero") to coach him on good testing practices to complement the agile development processes. I have done this several times now at different clients, so this is nothing new to me. One of the unexpected surprises that cropped up this time was that this development team is not an end-to-end delivery team, so when they are "done" their work, the code moves into a Waterfall Release process and it all kind of falls apart. There are still some kinks to be solved here and I am happy to see some really bright, caring people trying to solve these problems. So that's okay.
Thoughts on the StarEast 2011 conference
I first attended StarEast in 1999. I remember the day-long tutorial I attended (by Rick Craig), and two track sessions - one by Cem Kaner on hiring testers, and one by James Whittaker on "Exploiting a Broken Design Process." I know I attended other sessions but I don't have active memories of them any more. I do remember the experience of attending the conference - one of surprise and excitement. Surprise at seeing so many other people in the testing community with questions and problems similar to my own, and excitement at hearing speakers with lots of great information and advice to give.
Fast forward to 2011 - I returned to StarEast, this time as a speaker. I suppose I didn't need to wait 12 years to return as a speaker. I didn't intentionally ignore the conference. I think I've been busy with other things and it just didn't come up - until last Fall when I received an invitation in my inbox to submit a proposal. I'm really glad I went.
Some things were familiar - the beautiful hotel, the Florida sunshine, the amazingly fresh orange juice, and the basic conference format. One thing that was different for me this time around was the number of people I knew who were also speaking at the conference. After having attended and spoken at several other conferences over the years, I guess I have gotten to know many of the popular speakers.
I was also happy to see many speakers I had never heard of before. That tells me the community is still growing after all this time, and that there are still many more people sharing their knowledge to help enlighten future generations of testing leaders. That's awesome!
Reflection on my Testing workshop at the KWSQA Targeting Quality Conference
At this year's KWSQA Targeting Quality conference I gave a half-day workshop titled "Exploratory Testing Basics". I originally proposed that title since I thought it followed nicely from the shorter workshop I gave at the QAI TesTrek conference in Toronto last Fall. I thought to myself - I'd like to rerun the exercises, change up a few things, and it should be a piece of cake.
As the Winter months progressed into Spring, I began to worry about my workshop idea more and more. You see, the exercise I gave at the QAI conference, while fun and appropriate, only really covered one aspect of Exploratory Testing - a broader framework. Perhaps that isn't enough? What is enough, then? What makes up the "basics" of ET?
You see, when I teach ET, it's usually one-on-one and I spend 2-3 days just to cover the basics. It takes me a few more days of pair testing and debriefing/coaching to help the new tester put everything into practice. It really is quite complex and a lot of ideas and models may seem abstract until you try them out and adjust with good feedback.
Radiating Testing Information - Part 1
This topic is one that I have been asked about many times over the years and I am long overdue for a detailed discussion of it. Back in 2006 I presented an Experience Report at the STiFS workshop in New York titled "Low-Tech Testing Dashboard Revisited." The content of that presentation will be in Part 2. To quote "The Do-Re-Mi Song" from the movie The Sound of Music, "Let's start at the very beginning, a very good place to start."
I attended the StarEast conference in 1999 and there was a talk by James Bach titled "A Low Tech Testing Dashboard." This presentation clicked with me as I was managing several test teams at the time and it addressed a problem that I felt was important. I have used this communication tool many times ever since. If you are not familiar with it, I suggest you read through the PDF slides on the Satisfice web site before you continue. Go ahead. I'll wait.
In this review I will cover some of the who, what, where, when, how and why of the Low Tech Testing Dashboard (LTTD) through examples from past projects I have worked on. I expect your context is different, so my hope is that these examples may help you think about how you might apply this communication tool on your project.
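To give you a feel before Part 2, here is a rough sketch of what such a dashboard might look like rendered as plain text. The column headings roughly follow Bach's slides; the product areas, ratings and comments below are invented for illustration:

```python
# A hypothetical, minimal rendering of a Low Tech Testing Dashboard.
# In practice this lives on a whiteboard, not in code; the rows here
# are made-up examples, not from a real project.
rows = [
    # (product area, test effort, coverage 0-3, quality, comments)
    ("Login",   "pause", 2, ":)", "regression pass complete"),
    ("Reports", "high",  1, ":(", "blocked: test data missing"),
    ("Billing", "start", 0, "?",  "new build arrives Friday"),
]

fmt = "{:<14}{:<8}{:<10}{:<9}{}"
print(fmt.format("Product Area", "Effort", "Coverage", "Quality", "Comments"))
for area, effort, coverage, quality, comment in rows:
    print(fmt.format(area, effort, coverage, quality, comment))
```

The point of the tool is the conversation it provokes, not the rendering, so treat the columns and scales as starting points to adapt to your own context.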
Testing & Programming = Oil & Water
I was watching a science program just now and it occurred to me that Testing is very much a science. And then I wondered about Programming.
I started in IT over 22 years ago doing programming. For me, the process of programming broke down into three parts: figuring out the algorithm to solve the problem, implementing/coding the solution, and cleaning up the code (for whatever reason - e.g. maintainability, usability of the UI, etc.). It gets more complicated than that of course, but I think that about sums up the major activities as I saw them. (SIDE NOTE: I didn't write those to mirror TDD's Red-Green-Refactor, but it does align nicely that way.)
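To make that Red-Green-Refactor parallel concrete, here is a minimal sketch in Python. The problem and the names (slugify, the test case) are an invented example of mine, not from any particular project:

```python
import unittest

# Red: start with a failing test that captures the problem to solve
# (the "figuring out the algorithm" step happens while writing it).
class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Green: implement just enough to make the test pass
# (the "implementing/coding the solution" step).
def slugify(text):
    return text.lower().replace(" ", "-")

# Refactor: clean up the code for maintainability without changing
# behaviour; the passing test guards against regressions.

if __name__ == "__main__":
    unittest.main()
```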
When I think back on my experiences in programming, I don't see a lot of overlap with my experiences in Science (~ 8 years studying, researching and doing Physics & Environmental Science + teaching Science on top of that). Science is about answering questions. The Scientific Method provides a framework for asking and answering questions. Programming isn't about that. Building software isn't about that. I'm having difficulty at the moment trying to see how testing and programming go together.
It occurs to me that schools and universities don't have any courses that teach students how to build software. It also occurs to me that schools and universities do provide students with opportunities to learn and develop the skills required to build software well - the schools just don't know they're doing it, and consequently the students don't get that opportunity intentionally.
I'm not talking about learning to program. That's trivial. Building software isn't about programming.
Software Testing "Popcorn" button
I made myself some microwave popcorn for a snack just now. Placed the popcorn bag in the microwave, pressed the 'popcorn' button and then 'start'. Someone next to me said: "There's a popcorn button?" Um, yes, there is. In fact, there has been a 'popcorn' button on every microwave oven I've ever seen.
I explained to my colleague that the recommended time on the bag (in this case it was 2 min 30 sec) doesn't work on every oven. Different ovens have different power output and so the actual cook time may vary. If I go with the default time, it might burn or be under-done and leave too many unpopped kernels in the bag. You could figure out the correct time in a few ways.
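If you're curious, one rough way to adjust the time is to assume the cook time scales inversely with the oven's power output. Here is a quick back-of-the-envelope calculation in Python, with hypothetical wattages (they are not from any particular bag or oven):

```python
# Back-of-the-envelope cook-time adjustment, assuming total energy
# delivered (power x time) is what matters. Wattages are invented.
recommended_seconds = 150   # the 2 min 30 sec from the bag
assumed_bag_watts = 1100    # power the bag's instructions assume
my_oven_watts = 900         # my (weaker) oven's rated output

adjusted = recommended_seconds * assumed_bag_watts / my_oven_watts
print(f"Try about {adjusted:.0f} seconds (~{adjusted / 60:.1f} min)")
# -> Try about 183 seconds (~3.1 min)
```

Of course, the simpler low-tech approach is to just listen: when the pops slow to a couple of seconds apart, stop the oven.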