Many years ago when I was in university, a friend approached me one day to ask if I'd be interested in going skydiving with her. She said she wanted to go but she wanted the company. I went. It was on my list of things "to try at least once before I die" so why not. =)
There was a full day of training for newbies - which included hands on (wearing the jumpsuits, learning the equipment, how the chutes are packed, jumping off picnic tables, etc.), videos, in-class instruction and discussion, and ended with an exam. The written exam was the last thing before everyone suited up for the plane ride up.
I was a bit surprised when 2 of the instructors pulled me aside after the exam to go over my test results. They wanted to discuss my answer to the final question. I think it was something like: "If something goes wrong, whose fault is it?"
I had run out of room trying to fill in a suitable answer. I had wondered why they didn't give much space to write. My answer was something along the lines of: "Well, if the wind blows me off course and I float into power lines and die it's nobody's fault. Or if I land in some marsh and get eaten by alligators I don't blame the alligators." and so on .. until I ran out of room at the end of the page. (There are no alligators in this part of Canada, by the way.. unless I land in a zoo.)
Looking back at my response now, I was doing what a good tester might do and thought of how many different ways something might "go wrong". But I missed the point. My instructors patiently kept rephrasing the question to see if they could get a different answer from me.
One of them blurted out: "Paul, is someone holding a gun to your head and asking you to jump out of the plane?" To which I replied "no." "Okay, so who is making you do this?" "No one," I replied. Wrong answer again. I still didn't get what they were trying to say.
Finally they explained to me that *I* was the only one making myself do anything here. If something goes wrong, it's simply my fault. Then they told me to cross out what I had written and write in big letters: "IT'S MY FAULT."
They explained that the whole exam didn't matter - none of the answers mattered except for this one question. If I didn't answer this question correctly then I wouldn't be allowed to jump. It was like signing the waiver.
I wrote what I needed to. I went up in a perfectly good plane and then I jumped out. It was an amazing experience and we all had the same silly grins on our faces when we met up with each other on the ground again.
I learned an important lesson that day. I hadn't realised that I was deflecting responsibility for my own actions and decisions. It wasn't intentional; it was just how abstractly I thought about things. When there are billions of possible events that could follow from any given moment, why would I even consider taking responsibility for one of those outcomes if things go bad?
Well, it's easy. You make the choice yourself. Assuming no one is holding a gun to your head or threatening to harm your loved ones, then the choice is yours. When you make a choice you own the responsibility for the outcome. It's your fault if something goes wrong. It's your fault if something goes right.
Working in the software industry all these years, I have witnessed many times when people don't take responsibility for the decisions they make and how they choose to act - at all levels within the organisation. I have seen employees who whine about not getting the raise or praise that they think they deserve but they don't put in the care/attention/effort required. I have seen managers look for scapegoats when projects/things don't go as planned. I have witnessed the irrational, childish backtracking of senior management who refuse to admit that they ever did anything wrong or made a wrong decision.
Why is it so hard for people to take responsibility for the choices they make?
We make choices every day. Everyone does. Sometimes we have help making them, sometimes we don't. Some are big choices, others not so much.
What about the choices people make when they are at work? How they act? Or rather, how they choose to act towards others?
No one's putting a gun to your head and telling you to "test!" (Well, there was that one scene in the movie Swordfish that was quite entertaining, but that's the movies for you.)
Testing is not easy. Software Development is not easy. If you think it is, it's likely because you don't understand the problem.
As testers, we should aim to provide as much required information about the product/system/service as possible so that the stakeholders can make informed, timely choices. That is, we help others make big/important choices - hopefully good choices. Good information too late doesn't help either. (Note: what the stakeholders choose to do with that information is a different matter. Sometimes all we can do is just provide the information. How they use it is up to them. Good choices are not automatic, even when you do have the information you need in time.)
If the choice you make turns out not to be the right choice (i.e. you get an undesired outcome), then admit it, learn from it, and try not to make the same mistake the next time.
That's the rub - that process right there. If there is no admission of responsibility for the choice you make, do you learn from it? I don't think so. When I see someone deflect responsibility or look for a scapegoat, they aren't learning anything from the situation and will likely make the same mistakes the next time.
I have seen good leaders admit mistakes and poor choices to their departments, admit that they've learned from them, and say they'll try something new or different to get a better outcome. That's the kind of leader I like working with/for. Someone who learns. Someone who grows. Someone willing to admit shortcomings and who knows how to leverage the strengths of others to help make better choices in the future.
No one is omniscient, so why hide your mistakes? There's some risk in every choice you make. That's life. Don't whine about it or blame others if things don't go your way. Take responsibility, learn from it and try not to make the same mistake again.
When was the last time you said "it's my fault"?
Reflections on AYE
I had the privilege to attend the Amplifying Your Effectiveness (AYE) conference this year. Finally! I've mentioned that conference in several of my presentations and talks over the years, so I was pleased to at last make it out to Phoenix, AZ, for the event.
There isn't much I'm going to say about the conference at this time. Browse the conference web site to get an idea of the kinds of sessions and discussions that happen there. Reading about it doesn't do it justice.
Everyone I know who has attended an AYE conference in the past has told me how wonderful it was and how much I would enjoy it. They were right. Even though I was told to expect it, and I hoped it would live up to those expectations, I felt a kind of relief and happiness in knowing that I wasn't disappointed.
In my experience, I've noticed that testers tend to start out only interested in developing their technical skills (e.g. programming/scripting language, automation tools, databases, etc.) - if they show any interest at all in professional development related to their jobs. If you take your career and profession seriously, there will come a time when you realise that the technical skills aren't as important as communication and people skills.
Why does learning happen in that order? Does it make sense? Build People skills upon/after your Technical skills? Shouldn't we start with a good base in communication, understanding and relationship-building, and then work to develop technical skills and expertise afterwards?
Should we focus more effort on teaching teenagers in High School how to understand and communicate effectively with each other to prepare them for developing good working relationships in adulthood? Why is it that the High School/teenage experience tends to do the opposite?
I've seen my children play nicely in the playground with other kids, even kids they've never met before. So when do adults forget how to be nice to each other? To play nicely or fairly with others? When do they forget how to show respect and trust, and to act with integrity and honesty towards others?
At AYE, those values were apparent. I saw kindness, respect, trust and honesty in abundance. It was overwhelming at times. I wasn't expecting that. I felt a sense of instant community at the conference.
Learning happened. Sharing happened. Discussions and conferring happened. It was fun.
It was everything I had hoped a group of adults could be. I wish that were a more common occurrence. I wonder what we could accomplish if more people acted that way.
The Price of Clarity
I went to a "Startup Drinks" event tonight and met some interesting entrepreneurs with lots of great ideas. One thing I discovered (about myself) is that I'm going to need to get on Twitter in the near future. ;-)
One person mentioned a tweet that he posted yesterday from the "Monday Morning Memo" about advertising. The tweet was about the topic "Why Most Ads Don't Work" (by Roy Williams, author of the book "The Wizard of Ads") and is summed up in the nine secret words: "The Risk of Insult is the Price of Clarity."
This was an interesting phrase. I instantly thought about the feedback that we, as testers, give about the products we test. You know what I'm talking about.. the Ugly Baby Syndrome. Sometimes we have to be the bearer of bad news. Hopefully, we can find a nice way to say it, but ultimately, I believe that the risk of insult is the price of clarity.
Thinking about my motto "ubi dubium, ibi opportunitas" (where there is doubt, there is opportunity), we often exploit the vague, unclear areas of products and applications.. because there will usually be lots of bugs there. The bugs we report are about more than just little typos and minor UI issues, though. If we do our jobs well, bug reports can be the catalyst to help bring clarity to features/requirements/implementations and consensus among the stakeholders about the app/system under test.
When I test, I report every bug. It's all information. I can't judge when something will be worthwhile (to report) and when something won't. You won't recognise that your baby is ugly because it has thousands of little things wrong with it if you don't report the thousands of little things. It's not often that you'll test a system where you find a "nose" in the completely wrong place, or an "ear" where an "eye" should be (to continue the 'baby' analogy). Those moments are easy - you *have* to risk the insult or risk completely failing to meet the customers' (and other stakeholders') needs.
If you test, and report bugs, in a way so as to "not offend" (i.e. by watering down the message, not testing certain features too hard, or choosing not to report certain bugs) are you really providing a helpful service?
What are you willing to risk to ensure clarity on your projects? What do you think?
Grey is made up of Black and White
There's an idea that has been on the back of my mind for a while now.
In certain testing (and development) circles, one hears about "good enough" software. I get it. Nothing's perfect and no software can ever be completely bug free, so the best you can hope for is to have caught and fixed the most important bugs (to your stakeholders) before you ship. Or something like that. I'm paraphrasing.
The thing that irks some people is that "good enough" is not carved in stone. It has a human element to the decision-making process that some bean counters might prefer to do without.
I think I've come to appreciate that definition over time because I've had the benefit of seeing many projects of different types for different companies go out over the last decade. With experience, you develop a sense of when things are in your comfort zone (i.e. acceptable risk) compared with when they are not (i.e. this risk gives me the willies - ship it if you dare but I'm going to duck and cover).
That part I'm okay with. That's not what's troubling me. The thing I'm wondering about is where do you start to teach someone this?
When I first started to test, I worked on some automated tests and then moved on to developing written test cases and test plans. Over time, I came to appreciate that the brain tends to move quicker than paper and that there are testing practices out there that can help you test more and provide more good information and feedback in the time given on a project. You can choose to spend that precious quality time on paperwork or do more testing. The choice is yours.
Fast forward a few years. I've chosen to teach Exploratory Testing to people who are new to the Testing field. It's an interesting and challenging approach and I love how it gets you to think bigger than you've thought before.
So, you decide to test something. Did the test "Pass"? Did it "Fail"? How do you know? Are you sure? Can you check? ... and so on...
This leads to discussion of Oracles - i.e. a principle or mechanism by which you can tell something is a bug. Funny thing about Oracles - they're like siblings. Each one is unique, they're similar/related in some ways but different in others, and they all think they're the most important and want to be in front.
As a tester, you need to decide *which* is the most relevant Oracle to help you decide if something is a bug or not.
For example, I often explain to new testers that the app might match the specification when you compare them, so is that a Pass? The simple answer is "yes"... assuming the only thing you care about is comparing the application to a particular claim. So what's the problem?
Well, the problem I have is the assumption that the spec, the claim, is correct in the first place. What if they both match each other but I don't care because it goes against what I *expect* as a user? That is, when you use computers for a while you develop a general understanding of how many apps work in similar ways for similar functions. And then along comes some hot-shot spec-writer who thinks they're going to be awesome and develop something completely new and different for ... oh, I dunno, the 'Print' function.
"Okay," you say to the spec-writer, "you might do that.. but then again, reinventing something that everyone has a certain idea of how it should work might lead to mistakes, frustration and user/operator errors that are not so good."
So, back to the tester, the app matches the spec, but they're both wrong I say because neither matches my expectations. Where's the Pass? Where's the Fail? It's not so easy anymore. In fact, you might say there are shades of grey when it comes to determining Pass/Fail - especially if you find yourself in the situation where there are 3 or more applicable Oracles. (eek!)
That's where I think my line of thinking went wrong a while back. It's not about shades of grey; it's about determining which sibling, which Oracle, is the most applicable here. You *have* to pick one. There has to be a binary, logical Pass/Fail answer to the question "Is there a problem here?" One Oracle needs to edge out the others. You may not be able to make that decision on your own, and that's okay. But someone, or some team of people, needs to reach a consensus about whether or not something is a bug according to the oracle they care about most in the given situation.
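To make that "pick one" idea concrete, here is a minimal Ruby sketch of how competing oracle verdicts might be weighed by an agreed-upon priority. It's my own illustration, not a formal framework; the oracle names, priorities and observation fields are all invented for the example.

```ruby
# Hypothetical sketch: each oracle renders a verdict about an observation,
# and the team's agreed-upon priority decides which verdict wins.
Oracle = Struct.new(:name, :priority, :check)  # lower priority number = more important

oracles = [
  Oracle.new("Matches the spec",                    3, ->(obs) { obs[:matches_spec] }),
  Oracle.new("Consistent with user expectations",   1, ->(obs) { obs[:meets_user_expectations] }),
  Oracle.new("Consistent with comparable products", 2, ->(obs) { obs[:consistent_with_comparables] })
]

def problem?(oracles, observation)
  # Consult every oracle, then let the most important one decide.
  verdicts = oracles.map { |o| [o, o.check.call(observation)] }
  deciding_oracle, passed = verdicts.min_by { |o, _| o.priority }
  { deciding_oracle: deciding_oracle.name, problem: !passed }
end

# Example: the app matches the spec but violates what users expect.
observation = { matches_spec: true, meets_user_expectations: false,
                consistent_with_comparables: false }
p problem?(oracles, observation)
# => {:deciding_oracle=>"Consistent with user expectations", :problem=>true}
```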
At the end of the day, it doesn't matter if your brain is firing on all cylinders or none, a "black and white" decision needs to be made about whether or not something is a bug. Computers are logical things. I think it makes sense that we need to apply logic in this situation too.
But hold on here, where does this fit in with the whole "good enough" software thing I started talking about a few minutes ago? That doesn't sound very logical. In fact, it sounds kind of like the opposite, doesn't it?
Why, yes, it does sound illogical, and yet there is logic in it too. The idea came to me when standing in a Wendy's looking at a poster while waiting to order lunch. The poster had a picture made up of smaller photos.. a photo mosaic it's called. (Aside: Google search turns up lots of cool info on these.)
And then it hit me. Aha!
Determining "good enough" is like finally recognising the picture of the quality of the release when you spend every day looking at all of the smaller 'quality' photos. These dozens, hundreds, or thousands of 'black and white' decisions that are made over the course of a project (i.e. the bugs you find) paint a picture. Some people pay attention to these, while others do not (to their detriment).
Good development teams come up with Release criteria that make sense and often contain human elements in the final go/no-go release decision. Different project team members may have different pieces of the picture. As a Test Lead/Manager, you bring information about the tests that have been run, the risks that were covered, and the kinds of bugs that were found, reported and fixed (and the ones left outstanding). Sometimes you might even have other metrics that you feel are important in helping you recognise "good enough" when you see it.
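As a rough illustration of how those many small black-and-white decisions can roll up into a go/no-go picture, here is a tiny Ruby sketch. The criteria, thresholds and bug data are invented for the example; real release criteria would come from your own stakeholders.

```ruby
# Hypothetical go/no-go sketch: many individual pass/fail bug decisions,
# summarised against release criteria the team has agreed on.
bugs = [
  { id: 101, severity: :critical, status: :open },
  { id: 102, severity: :minor,    status: :fixed },
  { id: 103, severity: :major,    status: :fixed },
  { id: 104, severity: :minor,    status: :open }
]

release_criteria = {
  max_open_critical: 0,   # example thresholds only
  max_open_major:    2,
  max_open_minor:    10
}

open_by_severity = Hash.new(0)
bugs.each { |b| open_by_severity[b[:severity]] += 1 if b[:status] == :open }

go = open_by_severity[:critical] <= release_criteria[:max_open_critical] &&
     open_by_severity[:major]    <= release_criteria[:max_open_major] &&
     open_by_severity[:minor]    <= release_criteria[:max_open_minor]

puts "Open bugs by severity: #{open_by_severity}"
puts(go ? "Criteria met - candidate for release" : "Criteria not met - keep working")
# One open critical bug here, so this prints "Criteria not met - keep working".
```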
I like the idea of a photo mosaic as an analogy for good enough software. You may never get to see the complete picture, but you might not need to either. When the picture is too fuzzy, then you should recognise that you might not be collecting all the information you need to help make a good timely decision.
As a tester, you need to be concerned with all the logical decisions in the Pass/Fail criteria world that helps you identify bugs on a daily basis. But don't get attached to any one particular bug, because in the grand scheme it may not be as important as you think it is. (It *might* be, but be aware that it might *not* be too.)
As a Test Lead or Manager, you need to step back from all the separate little details and ask yourself what patterns you recognise in the complete body of [testing] work that has been performed. One might even use the cliché "can you see the forest for the trees?" but I like my photo mosaic analogy better. ;)
I like how the colour grey is made by mixing black and white together. Similarly, good test management decisions, which may seem 'fuzzy' to some, are based on an understanding of all the little logical decisions made by the testers on the team.
These are different skills (between tester and test lead). I haven't been specifically teaching testers to see the big picture and patterns in work beyond their own. Where does that fit in? When? I don't know that I'm comfortable doing that just yet.
I think I got the teaching testing part down, so I guess the next logical step is looking into the skills required to manage good test teams.
Any thoughts or recommendations? Is it just something that you learn by trial and error, by osmosis, working on different projects for different companies? (Nahh, that's how I learned.. it takes too long.)
This, I think, is going to be the next big need for training in the testing field. Let's say for a moment that you've trained testers in good/different testing practices. So how do you train the managers or leads to manage in new, intelligent ways to complement the productive, agile testing effort? Part of it will be in developing new test management tools, but part of it, clearly, will be in skills development and education too. I just don't see that available right now.
I'll have to keep my eyes open the next time I'm out for lunch. ;)
Think *inside* the Box
Last weekend I presented at the Toronto Workshop on Software Testing (TWST). The topic for the workshop this year was "Coaching, Mentoring and Training Software Testers" and I decided to present a process that I had developed with my team to help manage a peer review process. For us, that's a coaching opportunity, so I thought it was relevant to the theme. There was some excitement and hoopla over the presentation that had me baffled for a bit though.
My presentation had some specific content and charts and things which I know were "new" because, well, we developed them. That was the point of sharing it with colleagues. The thing that had me stumped/confused was at a higher level than that though.
You see, I didn't think there was anything particularly new about the whole idea. Our team follows a process that, as someone reminded me, was initially developed 10 years ago for some specific company's needs. That initial presentation is on the web for all to see and I came across it a long time ago.
The high-level process description fits in with our current company's needs and has been working well for us for several years now. In all the articles and descriptions that I've found online, there was just one piece that wasn't described too well.. okay, at all. To clarify the specific example here, I'm talking about Session-Based Test Management (SBTM) - a test management framework to help manage and measure your Exploratory Testing (ET) effort.
Overall, the process has worked for us in our present company/context, but the "Debrief" step/phase is a bit 'vague' or 'light' in description compared to the other important elements. As it happens, I have a teaching (B.Ed) degree, coupled with my 10+ years of experience, so I feel I am able to improvise this step fairly well.
But one thing always bugged me about that step - it didn't fit in with the overall flow of the rest of the framework. That is, according to the framework's 'activity hierarchy' you're either testing or you're not. Well... I don't think it's that clear cut. What about the Debrief step? Is that Testing? Or is it clearly "not"? I don't think it's either, but it's closer to the 'testing' side if I had to pick one.
Okay, so if I look at it as closer to the testing side, then does it fit in with the overall framework? Actually, no. It's an oddball. The catch here is that the basic SBTM framework helps you manage and measure the ET effort, but says nothing about managing or measuring the Debrief step. Wait a minute! Hold the phone. Why not?!
So we, as a team, came up with a process for managing and measuring the Debrief step that follows the same basic format as the testing part. When I look at it, it makes sense. It looks more like a complete picture to me now.
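Our actual debrief process isn't detailed in this post, but as a rough sketch of the general idea, imagine capturing a small debrief record alongside each session sheet so the debrief can be tracked and measured too. Every field name and rating scale below is hypothetical.

```ruby
# Hypothetical sketch only: a simple debrief record kept alongside each
# session sheet, so the debrief step can be managed and measured as well.
require 'date'

DebriefRecord = Struct.new(:session_id, :tester, :date, :duration_minutes,
                           :charter_followed, :notes_quality, :follow_up)

debrief = DebriefRecord.new("ep-016", "J. Doe", Date.today, 15, true, 4,
                            "Re-test the import feature with larger data files")

puts "#{debrief.session_id}: #{debrief.tester} debriefed for " \
     "#{debrief.duration_minutes} min, notes rated #{debrief.notes_quality}/5"
```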
For a number of people at the workshop last week, our process seemed like a novel/fresh way of looking at it. I can't see that actually. As far as I can tell, our team is still following the same basic outline and process described over a decade ago. We just filled in some of the blanks in a way that we thought was consistent with the rest of the framework.
When people ask you to "think outside the box" they mean to think in an unconventional way to solve some problem. To me, I looked inside the box and noticed something was incomplete/missing. My solution, as far as I'm concerned, was "thinking inside the box."
Having said that, when I step back from the process to see what the big picture looks like, I feel a bit like Alfred Nobel after inventing dynamite. The reaction from my colleagues at the workshop was kind of like that too - explosive! (It was quite cool actually. I don't recall the last time I've seen such a flurry of excitement and questions in such a short time!)
I don't know what to do with this process right now. I'm changing it to put some "safeties" in place because I recognise the dangers of allowing people to easily generate "metrics" that represent the quality of a tester's work and learning.
Sigh. There is one thing I've gotten from all of this. I have more learning to do. This time, I know it is clearly in the field of Psychology, although I'm not yet sure where to start. I need to understand this Debrief dynamite that we've invented - what it means and how it can be used for good and not for evil in the hands of fools.
I think the new process is working (for me) though, because it allows me to ask new questions that I haven't thought to ask before. For instance, if ET is the simultaneous learning, design and test execution (by one definition), then managing the debrief step helps me to track a tester's learning. Am I really tracking learning though? Am I observing someone's level of interest and attention to detail? How much they care? What are the implications of poor quality work that doesn't improve over time? Should that tester move onto something else? Should I leave them alone if training and reinforcement doesn't help and they do generally "okay" work? ... ?
Aside from the details and all of these interesting new questions, it was just surprising to me that no one has come up with something similar already. I looked inside the box. I filled in the missing piece using a similar-looking piece. I don't see that as being unconventional.
Sometimes you don't have to go out of your way to come up with something fresh. Maybe no one has gotten around to looking at all the corners of the box yet. Maybe someone meant to but got distracted and didn't return. Maybe there's an opportunity waiting. Maybe you are the one to see it.
Have you looked in your box yet? Sometimes a good tester, like an inventor or explorer, is just thorough.
New Ruby SBTM scripts on my web site
Hi there, for anyone who is interested, I have updated the sbtm-ruby-tools zip file on my web site at: http://www.staqs.com/sbtm/
The current version says it's 1.2, but it's a bit of a mix. I made some updates to some of the scripts last Fall but didn't get around to pushing them onto my site. Just yesterday I ran into an annoying bug when I ran the scripts on a laptop with a newer version of Ruby. It was a one-line change to fix, but this change is worth posting because the bug may cause the 2 most important scripts to *not* run on some systems.
The reason I'm posting this on my blog is to solicit feedback on these scripts. It literally took me 4 hours to create this archive tonight. The reason it took me so long was because the gap between these v1.x files and the v2.x (with the 2.x folder structure that I use on a daily basis) is getting quite large.
You see, last year I completely changed the SBTM 'Sessions' folder structure to help us manage multiple projects simultaneously. To do that, I created new batch files and modified Ruby scripts to help us work with the different project folders. It's pretty sweet actually. I'm currently managing 3-5 projects simultaneously with the SBTM 2.0 framework, and it's only a few clicks to switch between any of them.
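To give a flavour of the kind of plumbing involved, here is a hypothetical Ruby sketch of switching between per-project Sessions folders. The folder layout, the marker file and the script name are all invented for illustration; they are not the actual scripts from my toolkit.

```ruby
# Hypothetical sketch: record which per-project Sessions folder the other
# SBTM scripts should read from. Invented layout, e.g.:
#   sessions/project-a/   sessions/project-b/   sessions/project-c/
SESSIONS_ROOT = "sessions"
MARKER_FILE   = "current_project.txt"  # hypothetical marker read by the other scripts

def list_projects
  Dir.children(SESSIONS_ROOT).select { |d| File.directory?(File.join(SESSIONS_ROOT, d)) }
end

def switch_project(name)
  abort "No such project: #{name}" unless list_projects.include?(name)
  File.write(MARKER_FILE, name)
  puts "Active project is now '#{name}'"
end

# Usage: ruby switch_project.rb project-b
if __FILE__ == $PROGRAM_NAME
  abort "Usage: ruby switch_project.rb <project-name>" if ARGV.empty?
  switch_project(ARGV[0])
end
```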
Is this new framework worth sharing? I don't think so. I'm bothered by all the text files and batch scripts (it's so 20th century)... although I have come to really like all the ruby scripts that I have for all my test management needs. From one perspective, it's like a file-based database. On the other hand, it's really a bunch of disjoint text files and command-line scripts (even when you do integrate them with the Windows Explorer).
So, new stuff aside, the help I'm looking for is from someone who is actually using the v1.x SBTM Ruby scripts. Since the gap is so large between my free ones and the ones I use on a daily basis, I'd really like to get some feedback from someone on a completely different system to let me know that the scripts work as advertised.
They should work. They're pretty simple. I'd just like to confirm that.
Any volunteers?
My illumination chamber
When I was a teenager, I used to bike to work downtown. Sometimes a particular coworker/friend would bike most of the way home with me, as we both lived in the same general part of the city.
I distinctly remember one night having a philosophical discussion as we pedalled quietly through the dark calm streets. I don't remember the particulars of the discussion anymore, but I recall that that was the first time when I was introduced to the term "Tao." He tried to describe it to me, and said that we were in a state of Tao while we cycled and talked, but that once we began discussing Tao we were no longer in a state of it.
Huh? So, we can be in a state of something until we realise that we're in a state of something and then we're no longer in that state? Is this a Schrödinger's cat kind of problem? Is it like The Game?
It took me a long time, a lot of reading, and personal experiences to begin to understand what my friend tried to explain to me that night all those years ago.
Oddly enough, I had a similar awareness moment just this morning.
First, let's rewind about a year or two.. to a presentation I attended on Critical Thinking. Again, I don't remember the particulars of the talk, but I recall that the speaker described the Four Steps of Creativity at one point:

- Preparation - Research: Collect information or data
- Incubation - Percolation: Mulling over the collected information
- Illumination - Light bulb idea: Aha moment
- Implementation - Actual making/creating: Verification
When we "prepare" to test a new feature, we research and discuss that feature. We explore our understanding and ideas and challenge every assumption we have. We *design* tests meant to explore our understanding and observe the results.
There are subtle and simple "aha" moments as we test that help solidify the information we began with. Things change from assumptions to facts, observations, trends and patterns that lead to recommendations.
And yet, things are not always that simple. Sometimes we get stumped when thinking through the problems we face. The answers do not come to us right away. Forcing more information into our heads is not usually the best way to solve a problem, I find. Taking time away from the problem is often what's required... i.e. enter the "Incubation" period.
If you are looking really hard for something and can't find it, sometimes you need to stop looking for it and move onto something else. Often you will find what you are looking for when you don't expect it.
Over the years, I've observed that even though I stop *actively* looking for solutions to a problem, as long as a problem is unsolved, my brain doesn't stop thinking about it. Sometimes, answers come to me in dreams, but I'm not a great sleeper so I don't often remember dreams. More often than not, answers come to me in those moments when I deprive myself of all sensory inputs and let my brain completely relax.
You see, I have a sensory deprivation chamber (of sorts) in my house that I amiably refer to as the Special Hydrogen hydrOxide Wave-particle Emission Room (or SHOWER, for short). I find that this SHOWER helps rejuvenate my energy, and provides a healthy glow to what hair remains on the top of my head.
While I'm in the SHOWER I generally try not to think about any particular problems. It's my only real time to myself all day, so I let the hydrogen hydroxide particles just bounce off my body and lose all sense of everything else.
And then it happens. Many times, under these conditions, ideas just pop into my head! Illumination! Aha moments! I see answers to questions that I had stopped thinking about.
I don't believe that your brain really stops working when you sleep; however, your consciousness needs a break, and that's what sleep gives us. Perhaps a good night's sleep is a sufficient incubation period for our minds to mull over the collected information of the previous day(s).
I had noticed previously that I seem to get a lot of interesting ideas when I'm in the SHOWER. However, for some reason, colleagues are not always as happy and eager as I am when I tell them that I was thinking about them in the SHOWER. ;-) I don't know why. It's my illumination chamber.
This morning was slightly different. While in the shower I cheated. I tried to (actively/intentionally) *think* about the answer to a problem I've been working on ... and no new ideas came to me. I think I broke the rules of Tao on this one.
Illumination happens when you let it happen, not when you want it to happen. The other steps in Creativity (and Problem Solving) are a bit more straightforward - they're done intentionally. This step has a tricky catch to it. Unlike the others, you can't force it to happen on command.
Darn.
I look forward to my SHOWER session tomorrow. I'll give in to the particles and just let them wash all my worries away.. if only for a short time. If illumination happens, that's cool. If not, then that's cool too. I love my showers. Sometimes I'm even moved to sing. =)
Singing is good. At least then I know that I'm using the creative part of my brain and not the analytical side. I've got all day to use my analytical skills. It's good to have time allotted daily to something creative too. There's something very Tao in that balance.
I distinctly remember one night having a philosophical discussion as we pedalled quietly through the dark calm streets. I don't remember the particulars of the discussion anymore, but I recall that that was the first time when I was introduced to the term "Tao." He tried to describe it to me, and said that we were in a state of Tao while we cycled and talked, but that once we began discussing Tao we were no longer in a state of it.
Huh? So, we can be in a state of something until we realise that we're in a state of something and then we're no longer in that state? Is this a Schrödinger's cat kind of problem? Is it like The Game?
It took me a long time, a lot of reading, and personal experiences to begin to understand what my friend tried to explain to me that night all those years ago.
Oddly enough, I had a similar awareness moment just this morning.
First let's rewind about a year or two.. to a presentation I attended on Critical Thinking. Again, I don't remember the particulars of the talk, but I recall that the speaker described the Four Steps of Creativity at one point:
- Preparation - Research: Collect information or data
- Incubation - Percolation: Milling over collected information
- Illumination - Light bulb idea: Aha moment
- Implementation - Actual making/creating: Verification
When we "prepare" to test a new feature, we research and discuss that feature. We explore our understanding and ideas and challenge every assumption we have. We *design* tests meant to explore our understanding and observe the results.
There are subtle and simple "aha" moments as we test to help concrete the information we began with. Things change from assumptions to facts, observations, trends and patterns that lead to recommendations.
And yet, things are not always that simple. Sometimes we get stumped when thinking through the problems we face. The answers do not come to us right away. Forcing more information into our heads is not usually the best way to solve a problem, I find. Taking time away from the problem is often what's required... i.e. enter the "Incubation" period.
If you are looking really hard for something and can't find it, sometimes you need to stop looking for it and move onto something else. Often you will find what you are looking for when you don't expect it.
Over the years, I've observed that even though I stop *actively* looking for solutions to a problem, as long as a problem is unsolved, my brain doesn't stop thinking about it. Sometimes, answers come to me in dreams, but I'm not a great sleeper so I don't often remember dreams. More often than not, answers come to me in those moments when I deprive myself of all sensory inputs and let my brain completely relax.
You see, I have a sensory deprivation chamber (of sorts) in my house that I amiably refer to as the Special Hydrogen hydrOxide Wave-particle Emission Room (or SHOWER, for short). I find that this SHOWER helps rejuvenate my energy, and provides a healthy glow to what hair remains on the top of my head.
While I'm in the SHOWER I generally try not to think about any particular problems. It's my only real time to myself all day, so I let the hydrogen hydroxide particles just bounce off my body and lose all sense of everything else.
And then it happens. Many times, under these conditions, ideas just pop into my head! Illumination! Aha moments! I see answers to questions that I had stopped thinking about.
I don't believe that your brain really stops working when you sleep, however, your consciousness needs a break and that's what we get when we sleep. Perhaps a good night's sleep is sufficient incubation period for our minds to mull over the collected information of the previous day(s).
I had noticed previously that I seem to get a lot of interesting ideas when I'm in the SHOWER. However, for some reason, colleagues are not always as happy and eager as I am when I tell them that I was thinking about them in the SHOWER. ;-) I don't know why. It's my illumination chamber.
This morning was slightly different. While in the shower I cheated. I tried to (actively/intentionally) *think* about the answer to a problem I've been working on ... and no new ideas came to me. I think I broke the rules of Tao on this one.
Illumination happens when you let it happen, not when you want it to happen. The other steps in Creativity (and Problem Solving) are a bit more straightforward; they're done intentionally. This step has a tricky catch to it. Unlike the others, you can't force this one to happen on command.
Darn.
I look forward to my SHOWER session tomorrow. I'll give in to the particles and just let them wash all my worries away.. if only for a short time. If illumination happens, that's cool. If not, then that's cool too. I love my showers. Sometimes I'm even moved to sing. =)
Singing is good. At least then I know that I'm using the creative part of my brain and not the analytical side. I've got all day to use my analytical skills. It's good to have time allotted daily to something creative too. There's something very Tao in that balance.
I Found This in My Underwear...
I purchased a new pair of underwear and a small piece of paper fell out of the package. It was a note and it read:
STANFIELD'S
"I have personally inspected this garment to be sure it meets the high quality standards that have made Stanfield's Limited famous. Exclusive fabrics and fine workmanship assure long-wearing comfort and style."
- Maureen
Hm. That's interesting. Why is it we don't see notes like this in Software packages?
*Who* would have the courage to put their name as being responsible for assuring the product "meets high quality standards," has "fine workmanship" and that you will have long-term comfort in use?
That's pretty cool. I think the Software Industry still has a long way to go in achieving true customer satisfaction.
The Testing Paradox
I've seen much debate over the last few years regarding Scripted Testing vs. Exploratory Testing. I think I know the answer as to which approach is the one, correct, true way of doing Testing - it's "Yes."
Before I became a software tester I was a scientist and a High School Science Teacher. I recall a few lessons learned that helped shape how I do testing today.
When I taught (Inorganic) Chemistry, there was an experiment that I performed at the beginning of the year/term. This experiment served a few purposes. The first was to show the students an example of a chemical reaction. You know - the whizz, bang, cool aspect. The second purpose was to stress the importance of reading through the experiment carefully to understand what the important parts were - the warnings, dangers, and critical factors. You see, the experiment is designed to fail. That's right, you follow the steps and nothing happens -- the big let-down, and students return to their desks. Denied. *Then* you add some water to dilute the chemicals before discarding the solution and .... POP! WHIZZ! SMOKE! SPARKS! Oooooo, ahhhhhh!
You see, water is the catalyst required to make the reaction occur. The demonstration is designed to challenge the belief that water is fine for diluting any failed experiment. Students need to understand that water is another chemical and that it is not always the best way to deal with a failed experiment. There is no always.
Firefighters understand this when dealing with different kinds of fires. You don't throw water on every type of fire. There are big differences between wood fires, electrical fires and chemical fires. They need to understand the situation before they can effectively deal with it.
When doing experiments in Science, there are times when you can improvise certain variables/steps and times when you clearly can't. So how can you tell the difference? You need to read everything carefully first. You need to understand what you're doing. Only then can you tell the critical steps and components from those that you have some freedom with.
So, what's the tie to testing?
When I first started testing software, many years ago, it was mostly scripted. In fact, I was responsible for an automation suite that tested different kinds of fax modems. The scripts ran through a series of functions in the software apps to test the compatibility of different hardware. Because I knew the goal was hardware compatibility, I was able to make variations to the software scripts as long as the hardware baseline information was still maintained. That is, there were critical functions that we needed to know about, and other, somewhat interesting things that were fine if we knew about them too (and fine if we didn't). I understood the purpose of the tests, so I was able to improvise as long as I didn't negatively affect the bottom line.
Over the last 6 years, I have been doing Exploratory Testing almost exclusively. Does that mean that we do it 100% of the time? No. Why not? Because I can think and tell the difference between when it's good to explore and when it's time to follow a script.
For example, when testing something new, we explore. We don't know anything about it and we don't know how well it meets the customer's needs. Scripting is not the way to go here. When we find problems we log bug reports.
Bug reports are interesting creatures. They are scripts. They indicate the conditions, data, exact steps and expected outcome(s) required to reproduce a very specific problem (or class of problems). Often, if you don't follow these steps as outlined, you will NOT see the unexpected behaviour. It's important that (1) a tester captures this script as exactly as required and (2) a programmer follows the steps as exactly as possible so that they don't miss the problem and say "it works on my machine."
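To make that a bit more concrete, here's a rough sketch of a bug report treated as a script. This is my own illustration in Python, not a schema from any real bug tracker; the field names and the example bug are made up:

    # Rough sketch only: a bug report treated as a script.
    # The field names here are my own illustration, not any real tool's schema.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class BugReport:
        title: str
        preconditions: List[str]   # environment, configuration, state required
        test_data: List[str]       # the exact inputs/data used
        steps: List[str]           # follow these in order, exactly as written
        expected: str              # what should have happened
        actual: str                # what actually happened

        def as_script(self) -> str:
            """Render the report as a step-by-step script someone else can follow."""
            lines = [f"Bug: {self.title}", "Preconditions:"]
            lines += [f"  - {p}" for p in self.preconditions]
            lines.append("Data:")
            lines += [f"  - {d}" for d in self.test_data]
            lines.append("Steps (follow exactly):")
            lines += [f"  {i}. {s}" for i, s in enumerate(self.steps, start=1)]
            lines += [f"Expected: {self.expected}", f"Actual:   {self.actual}"]
            return "\n".join(lines)

    # Made-up example, just to show the shape of the record.
    print(BugReport(
        title="Save fails for read-only project",
        preconditions=["Logged in as a user with read-only rights"],
        test_data=["Project 'Demo' opened from the samples folder"],
        steps=["Open the project", "Edit the title field", "Click Save"],
        expected="A 'read-only' warning is shown and no data is lost",
        actual="The app crashes and unsaved edits are lost",
    ).as_script())

The point isn't the code, it's the shape: every field exists so that someone else can reproduce exactly the same behaviour, step by step.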
When a bug is fixed and returned to our test team for testing, we do a few things. The first is to follow the script and see if the original/exact problem reported is indeed fixed. The second is to use the bug report as a starting point and explore the system looking for similar problems. Sometimes we have the time to do that when we first report a bug, sometimes we don't. It depends on what we were thinking/doing/exploring when we first encountered the problem. When a bug comes back to you, though, that's the centre of your world and there's nothing to keep you from using it to give you additional ideas for finding similar or related problems.
When doing Performance Testing, it is important to understand that it is a controlled experiment.. a scripted test, if you will. You may have done some exploration of the system or risks to identify the particular aspect of the system that you want to observe, but now that you know what you're looking for, you need to come up with a specific plan to control the environment, inputs and steps as best as possible in order to observe and record the desired metrics. This is just good science. Understand your controls and variables. If you don't know what I'm talking about, DON'T DO PERFORMANCE TESTING. Leave it to the professionals.
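Here's a rough sketch of what I mean by a controlled experiment, again in Python and again my own illustration rather than an actual test plan. The workload and input are placeholders; the idea is to hold everything constant except the one thing you're observing, repeat the measurement, and record the metrics:

    # Rough sketch only: a performance measurement as a controlled experiment.
    # The operation and input below are placeholders, not a real test plan.
    import statistics
    import time

    def run_controlled_timing(operation, fixed_input, runs=30):
        """Time operation(fixed_input) repeatedly while everything else is held constant."""
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            operation(fixed_input)                     # the one variable under observation
            samples.append(time.perf_counter() - start)
        samples.sort()
        return {
            "runs": runs,
            "median_s": statistics.median(samples),
            "p95_s": samples[int(0.95 * (runs - 1))],  # simple percentile, good enough for a sketch
            "stdev_s": statistics.stdev(samples),
        }

    if __name__ == "__main__":
        # Placeholder workload: sorting the same fixed list every run.
        print(run_controlled_timing(lambda data: sorted(data), list(range(100_000))))

If any of the controls drift between runs - the data, the environment, the build - the numbers stop being comparable, and that's exactly when the results become meaningless.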
I have a few stories about incompetent testers looking for glory who took my Performance Test Plans and improvised them in unintended ways or didn't even read the whole thing because they were lazy or thought they knew better .. only to end up with meaningless results that couldn't be used to infer anything about the system under test. My plans weren't the problem; the testers were.
So how do you do good testing? It starts with your brain. You have to think. You have to read. You have to understand the purpose of the activity you are being asked to perform and the kind of information your stakeholders need to make a good, timely decision.
Sometimes Exploratory Testing is the way to go, sometimes it's not. Note: I recognise that at this point there are still many, many testers out there who don't know how to do ET properly or well. Sigh. That's unfortunate. Those of us who do understand ET have a long way to go to help educate the rest so that we can see a real improvement in the state of the craft of testing.
Ditto for Scripted Testing. If you're going to follow the exact steps (because it is important to do so), then follow the steps and instructions exactly. Can't follow the steps exactly because they are incomplete or no longer relevant? Well, what do you think you should do then?
The point of this note is just to say that no one side is correct. There is no one true, correct, testing approach/method. They both are and they both aren't. It's a paradox. An important one. Practice good science and understand what you're doing before you do it. Improvise only when you know you can. Understand the strengths, weaknesses, and risks of any approach in the given situation and you should do fine.
Exploration in Literature
While reading a book recently, I came across a quote by T.S. Eliot. I looked up "Little Gidding" and found the whole poem on the internet. Here's a passage from the last quartet:
We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time.
I tend to avoid deep poetry because I sometimes find it depressing more often than uplifting. This one made me reflective. In life, we explore so that we may gain wisdom and appreciate where we came from - where we started. As children and teenagers we may not appreciate things quite the same until we strike out on our own. "Home" always feels different when you've been away for a while.
Is there a tie here to software testing? Dunno. Haven't thought that far. If in life we explore to gain wisdom, how does that compare to exploring software? Do we appreciate the requirements more if we have done an effective job of test exploration? Do we gain wisdom? If so, about what? The people we're working with? The processes or technologies? The development practices or industry?
Just thought I'd share the find.