I've seen much debate over the last few years regarding Scripted Testing vs. Exploratory Testing. I think I know the answer as to which approach is the one, correct, true way of doing Testing - it's "Yes."
Before I became a software tester I was a scientist and a High School Science Teacher. I recall a few lessons learned that helped shape how I do testing today.
When teaching (Inorganic) Chemistry, there was an experiment that I performed at the beginning of the year/term. This experiment served a few purposes. The first was to show the students an example of a chemical reaction. You know - the whizz, bang, cool aspect. The second purpose was to stress the importance of reading through the experiment carefully to understand what the important parts were - the warnings, dangers, and critical factors. You see, the experiment is designed to fail. That's right, you follow the steps and nothing happens -- the big let-down, and students return to their desks. Denied. *Then* you add some water to dilute the chemicals before discarding the solution and .... POP! WHIZZ! SMOKE! SPARKS! Oooooo, ahhhhhh!
You see, water is the catalyst required to make the reaction occur. The demonstration is designed to challenge the belief that water is a safe way to dilute any failed experiment. Students need to understand that water is just another chemical, and that it is not always the best way to deal with a failed experiment. There is no always.
Firefighters understand this when dealing with different kinds of fires. You don't throw water on every type of fire. There are big differences between wood fires, electrical fires and chemical fires. They need to understand the situation before they can effectively deal with it.
When doing experiments in Science, there are times when you can improvise certain variables/steps and times when you clearly can't. So how can you tell the difference? You need to read everything carefully first. You need to understand what you're doing. Only then can you tell the critical steps and components from those that you have some freedom with.
So, what's the tie to testing?
When I first started testing software, many years ago, it was mostly scripted. In fact, I was responsible for an automation suite that tested different kinds of fax modems. The scripts ran through a series of functions in the software apps to test the compatibility of different hardware. Because I knew that purpose, I was able to make variations to the software scripts as long as the hardware baseline information was still maintained. That is, there were critical functions that we needed to know about, and other, somewhat interesting things that were fine if we knew about them too (and fine if we didn't). I understood the purpose of the tests, so I was able to improvise as long as I didn't negatively affect the bottom line.
Over the last 6 years, I have been doing Exploratory Testing almost exclusively. Does that mean that we do it 100% of the time? No. Why not? Because I can think and tell the difference between when it's good to explore and when it's time to follow a script.
For example, when testing something new, we explore. We don't know anything about it and we don't know how well it meets the customer's needs. Scripting is not the way to go here. When we find problems we log bug reports.
Bug reports are interesting creatures. They are scripts. They indicate the conditions, data, exact steps, and expected outcome(s) required to reproduce a very specific problem (or class of problems). Often, if you don't follow these steps as outlined, you will NOT see the unexpected behaviour. It's important that (1) a tester record this script as exactly as required, and that (2) a programmer follow the steps as exactly as possible so that they don't miss the problem and say "it works on my machine."
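As a side note, that same structure lends itself to automation. Here's a minimal, purely illustrative sketch (the function, bug number, and values are all invented, not from any real report) of a bug report's conditions, data, steps, and expected outcome captured as a repeatable scripted check:

```python
import unittest

# Hypothetical code under test, standing in for the real application.
# Imagine bug #1234 reported a wrong discount for a cart total of exactly 100.00.
def apply_discount(total, rate=0.10):
    return round(total - total * rate, 2)

class TestBug1234(unittest.TestCase):
    """Encodes the bug report: conditions, data, exact steps, expected outcome."""

    def test_discount_at_reported_boundary(self):
        # Conditions and data from the report: total of exactly 100.00, default rate.
        total = 100.00
        # Exact step: apply the discount once.
        result = apply_discount(total)
        # Expected outcome from the report: 90.00. Anything else means the
        # reported behaviour is back (or was never fixed).
        self.assertEqual(result, 90.00)

if __name__ == "__main__":
    unittest.main()
```

Run against a fixed build, a check like this answers exactly one question: do these precise steps still produce the reported behaviour?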
When a bug is fixed and returned to our test team for retesting, we do a few things. The first is to follow the script and see if the original, exact problem reported is indeed fixed. The second is to now use the bug report as a starting point and explore through the system looking for similar problems. Sometimes we have the time to do that when we first report a bug; sometimes we don't. It depends on what we were thinking/doing/exploring when we first encountered the problem. When a bug comes back to you, though, it becomes the centre of your world, and there's nothing to keep you from using it to give you additional ideas for finding similar or related problems.
When doing Performance Testing, it is important to understand that it is a controlled experiment... a scripted test, if you will. You may have done some exploration of the system or its risks to identify the particular aspect of the system that you want to observe, but now that you know what you're looking for, you need to come up with a specific plan to control the environment, inputs, and steps as best as possible in order to observe and record the desired metrics. This is just good science. Understand your controls and variables. If you don't know what I'm talking about, DON'T DO PERFORMANCE TESTING. Leave it to the professionals.
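By way of illustration only (the workload, run counts, and metrics here are invented, not taken from any real plan), a controlled measurement fixes its inputs and procedure and records an agreed metric:

```python
import statistics
import time

# Hypothetical workload standing in for the operation under test. In a real
# performance test this would drive the system with a fixed, documented input
# set in a controlled environment.
def workload():
    return sum(i * i for i in range(10_000))

def measure(runs=30, warmup=5):
    # Warm-up runs absorb one-off costs (caches, lazy initialisation) so they
    # don't distort the recorded samples.
    for _ in range(warmup):
        workload()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        samples.append(time.perf_counter() - start)
    return samples

if __name__ == "__main__":
    samples = measure()
    # Record the agreed metrics and compare them against the plan's targets.
    print(f"runs:   {len(samples)}")
    print(f"median: {statistics.median(samples) * 1000:.3f} ms")
    print(f"p95:    {statistics.quantiles(samples, n=20)[18] * 1000:.3f} ms")
```

Change the workload, the machine, or the run counts between executions and the numbers stop being comparable - which is exactly why the steps need to be followed as written.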
I have a few stories about incompetent testers looking for glory who took my Performance Test Plans and improvised them in unintended ways, or didn't even read the whole thing because they were lazy or thought they knew better... only to end up with meaningless results that couldn't be used to infer anything about the system under test. My plans weren't the problem; the testers were.
So how do you do good testing? It starts with your brain. You have to think. You have to read. You have to understand the purpose of the activity you are being asked to perform and the kind of information your stakeholders need to make a good, timely decision.
Sometimes Exploratory Testing is the way to go, sometimes it's not. Note: I recognise that at this point there are still many, many testers out there who don't know how to do ET properly or well. Sigh. That's unfortunate. Those of us who do understand ET have a long way to go to help educate the rest so that we can see a real improvement in the state of the craft of testing.
Ditto for Scripted Testing. If you're going to follow the exact steps (because it is important to do so), then follow the steps and instructions exactly. Can't follow the steps exactly because they are incomplete or no longer relevant? Well, what do you think you should do then?
The point of this note is just to say that no one side is correct. There is no one true, correct, testing approach/method. They both are and they both aren't. It's a paradox. An important one. Practice good science and understand what you're doing before you do it. Improvise only when you know you can. Understand the strengths, weaknesses, and risks of any approach in the given situation and you should do fine.
Great post.
The problem I have is mostly in terminology.
I refer to "scripted testing" as programmatic testing. You can use programs and scripts to explore a system if you want. You can do exploratory testing with scripts/programs/tools.
The debate seems to overlook that definition and define "scripted" as just following a number of predefined steps.
I don't see it as a boolean argument. I think of it in terms of a spectrum, and somewhere along that programmatic/manual continuum is where you work... exploratory testing can fall in many areas of the spectrum.
That is where the argument breaks down (IMO).
Hi Corey, thanks for your note.
From one perspective, the difference between Exploratory and Scripted Testing can simply be defined by the answer to the following question: "When you sit down to test, are the tests already written or are you going to figure it out as you go along?"
You can use programs, automated scripts, and tools with either approach, so I can understand your confusion. "Programmatic testing" is not a term that I am familiar with, so I can't comment on it.
I suppose an equivalent practice for programmers would be to compare "Scripted Testing" to Test-Driven Development. In TDD, the tests are written *before* the code is written. When the code is done, you run the tests... you don't have to think about them. No exploration there - just run the scripts. (Literally and figuratively.)
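To make that concrete with a toy example (the function and expected value are invented purely for illustration): in TDD the test below would be written first, and once the code exists you simply run it; there's no exploring at execution time.

```python
import unittest

# In TDD, the test below is written first; this function is the code written
# afterwards to make it pass.
def slugify(title):
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens_and_case_is_lowered(self):
        # Written before slugify() existed. Once the code is done, you just run it.
        self.assertEqual(slugify("Exploratory Testing Rocks"),
                         "exploratory-testing-rocks")

if __name__ == "__main__":
    unittest.main()
```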
In general I agree with you that a testing approach falls somewhere in the continuum between pure scripted and pure exploratory methods.
Why is that? Is it to leverage the benefits of both approaches, or because the testers do both rather poorly, so they combine the approaches to hide the fact that they can't do either really well?
Hi, Paul...
I don't agree when you suggest that there's a clear distinction:
"When you sit down to test, are the tests already written or are you going to figure it out as you go along?"
I'm much closer to agreeing with you when you suggest that
"a testing approach falls somewhere in the continuum between pure scripted and pure exploratory methods."
It's important to think of the exploratory and scripted continuum as one of approach and mindset. Very generally, we assess something as a scripted approach to the degree that the focus is on confirmation, verification, and validation; to the degree that a tester (or a programmer, for that matter) requires explicit instruction; and to the degree that it fosters separation of the processes of test design, test execution, result interpretation, and learning from each other over time and space. A machine can perform purely scripted test execution (but can't design or interpret it); a human whose brain cells are still working can't help behaving in a somewhat exploratory way, no matter how rigorous the script's instructions.
http://www.developsense.com/2008/09/evolving-understanding-about.html
---Michael B.
Hi Michael, I believe there is a clear distinction between ET and Scripted Testing. It's like the difference between the goal posts on either end of a football field. Most of the action happens on the field between them, and so it is with Testing too.
However, just because most of the game is played somewhere in the field, moving back and forth as required, it doesn't mean that you should ignore the posts and boundaries at either end.
There are clear distinctions between these two approaches. One requires thinking and learning, while the other does not. Whether or not Scripted Testing is done [by humans] in true scripted style doesn't change the nature of it. It just reflects the inability of the tester to completely disengage his/her brain while testing.
The two lines of my blog that you quote are not contradictory - they are complementary. The question is meant to identify the boundaries, while the continuum comment reflects the reality of the practice.
There are strengths and weaknesses at both ends of the spectrum. You ignore the boundaries at your peril - but you don't stay there either. By teaching testers the complete picture of the opposing approaches, we can prepare them to make better decisions about how to adjust their processes to the needs of their situations.
I like to think of it as differences in the level of detail. We have a test idea; it could become a charter for an ET session, or it could become a description of a scripted test. And while the scripts I see tend typically to focus on functionality, they come in many flavors, often allowing a choice of data, browser, etc. When tests were first written, you needed a lot more detail because the mainframe didn't know what the printer was, or where the input stream was, or where something was saved to, but now a lot of this happens by default, so it remains as unscripted detail in a scripted test!
Erik Petersen
How do you make test plans when there are no specs?
ReplyDeleteHello Loron,
You asked "how do you make test plans when there are no specs?"
It's a good question, but in the words of Fermat, "this margin is too narrow to contain" a proper answer. ;-)
First, I'd need to understand what *you* call a Test Plan, because it may not be what *I* call one. So, for instance, are you referring to the overall Testing Strategy for a feature/product/system/component/release? Are you referring to the collection of test cases for a common feature/product/component/area? Or do you mean something else?
The short answer is that in the absence of written specs, I'd go and ask everyone I can who is connected to the project what's important to them and how they think it should work. (Make notes.) Work from there. A little song... a little dance... and voilà! Testing done. ;-)
The reality (for me) is that written reference material represents only about a third of the input sources I'd usually use to test something. More important is that I don't perceive myself as a "tester" in the traditional sense anyway.
As a good exploratory tester, I see myself more as a "doubt management specialist". But that's a different discussion...
Cheers! =)
Very good article. It's really useful for me. I am doing a bit of research about software testing, and I also found macrotesting (www.macrotesting.com) to be a very good source for software testing.
Thanks for your article.
Regards,
Prem