The Testing Paradox

I've seen much debate over the last few years regarding Scripted Testing vs. Exploratory Testing. I think I know the answer as to which approach is the one, correct, true way of doing Testing - it's "Yes."

Before I became a software tester I was a scientist and a High School Science Teacher. I recall a few lessons learned that helped shape how I do testing today.

When I taught (Inorganic) Chemistry, there was an experiment that I performed at the beginning of the year/term. This experiment served a few purposes. The first was to show the students an example of a chemical reaction. You know - the whizz, bang, cool aspect. The second purpose was to stress the importance of reading through the experiment carefully to understand what the important parts were - the warnings, dangers, and critical factors. You see, the experiment is designed to fail. That's right, you follow the steps and nothing happens -- the big let-down, and students return to their desks. Denied. *Then* you add some water to dilute the chemicals before discarding the solution and ... POP! WHIZZ! SMOKE! SPARKS! Oooooo, ahhhhhh!

You see, water is the catalyst required to make the reaction occur. The demonstration is designed to challenge the belief that water is a safe way to dilute any failed experiment. Students need to understand that water is just another chemical, and that it is not always the best way to deal with a failed experiment. There is no always.

Firefighters understand this when dealing with different kinds of fires. You don't throw water on every type of fire. There are big differences between wood fires, electrical fires and chemical fires. They need to understand the situation before they can effectively deal with it.

When doing experiments in Science, there are times when you can improvise certain variables/steps and times when you clearly can't. So how can you tell the difference? You need to read everything carefully first. You need to understand what you're doing. Only then can you tell the critical steps and components from those that you have some freedom with.

So, what's the tie to testing?

When I first started testing software, many years ago, it was mostly scripted. In fact, I was responsible for an automation suite that tested different kinds of fax modems. The scripts ran through a series of functions in the software apps to test the compatibility of different hardware. Because I knew that, I was able to make variations to the software scripts as long as I knew that the hardware baseline information was still maintained. That is, there were critical functions that we needed to know about, and other, somewhat interesting things that were fine if we knew about them too (and fine if we didn't). I understood the purpose of the tests, so I was able to improvise as long as I didn't negatively affect the bottom line.

Over the last 6 years, I have been doing Exploratory Testing almost exclusively. Does that mean that we do it 100% of the time? No. Why not? Because I can think and tell the difference between when it's good to explore and when it's time to follow a script.

For example, when testing something new, we explore. We don't know anything about it and we don't know how well it meets the customer's needs. Scripting is not the way to go here. When we find problems we log bug reports.

Bug reports are interesting creatures. They are scripts. They indicate the conditions, data, exact steps and expected outcome(s) required to reproduce a very specific problem (or class of problems). Often, if you don't follow these steps as outlined, you will NOT see the unexpected behaviour. It's important that (1) a tester capture this script as exactly as required, and that (2) a programmer follow the steps as exactly as possible so that they don't miss the problem and say "it works on my machine."
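To make the "a bug report is a script" idea concrete, here is a hypothetical illustration: the conditions, exact input, and expected outcome from an imagined bug report expressed as a tiny automated reproduction. Everything here (the `parse_discount` function, the bug number) is invented for the sketch, not taken from any real system.

```python
# Hypothetical stand-in for the system under test. The deliberate bug:
# an exactly-empty discount code raises instead of returning 0.
def parse_discount(code: str) -> int:
    if code == "":
        raise ValueError("unexpected empty code")
    return 10 if code == "SAVE10" else 0

def repro_bug_1234_empty_discount_code() -> str:
    # Conditions/data matter: only the exact input "" triggers the failure.
    # "SAVE10" or any non-empty string does NOT, which is why a programmer
    # who improvises the steps may conclude "it works on my machine."
    try:
        parse_discount("")
    except ValueError:
        return "reproduced"
    return "works on my machine"

result = repro_bug_1234_empty_discount_code()
print(result)  # prints "reproduced"
```

The point of the sketch: the repro only fires when the scripted input is followed exactly, just as the paragraph above describes.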

When a bug is fixed and returned to our test team for testing, we do a few things. The first is to follow the script and see if the original/exact problem reported is indeed fixed. The second is to now use the bug report as a starting point and explore through the system looking for similar problems. Sometimes we have the time to do that when we first report a bug, sometimes we don't. It depends on what we were thinking/doing/exploring when we first encountered the problem. When a bug comes back to you, though, then that's the centre of your world and there's nothing to keep you from using it to give you additional ideas for finding similar or related problems.

When doing Performance Testing, it is important to understand that it is a controlled experiment - a scripted test, if you will. You may have done some exploration of the system or risks to identify the particular aspect of the system that you want to observe, but now that you know what you're looking for, you need to come up with a specific plan to control the environment, inputs and steps as tightly as possible in order to observe and record the desired metrics. This is just good science. Understand your controls and variables. If you don't know what I'm talking about, DON'T DO PERFORMANCE TESTING. Leave it to the professionals.
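A minimal sketch of the "controlled experiment" idea in code, under stated assumptions: the workload (sorting a fixed list) is a toy stand-in for a real system, and the numbers chosen are arbitrary. The point is the discipline, not the workload: fix the inputs (control), repeat the trials, and record a metric, rather than timing things ad hoc.

```python
import random
import statistics
import timeit

# Control: seed the generator so every run measures the exact same data.
random.seed(42)
data = [random.random() for _ in range(10_000)]

def workload():
    # Variable under observation: the operation whose cost we measure.
    sorted(data)

# Repeat trials so a single noisy measurement doesn't dominate the result,
# and report both a central value and the spread between runs.
trials = [timeit.timeit(workload, number=50) for _ in range(5)]

print(f"median: {statistics.median(trials):.4f}s  "
      f"spread: {max(trials) - min(trials):.4f}s")
```

If the spread is large relative to the median, the environment isn't controlled well enough to infer anything - which is exactly the failure mode the next paragraph describes.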

I have a few stories about incompetent testers looking for glory who took my Performance Test Plans and improvised them in unintended ways, or didn't even read the whole thing because they were lazy or thought they knew better, only to end up with meaningless results that couldn't be used to infer anything about the system under test. My plans weren't the problem; the testers were.

So how do you do good testing? It starts with your brain. You have to think. You have to read. You have to understand the purpose of the activity you are being asked to perform and the kind of information your stakeholders need to make a good, timely decision.

Sometimes Exploratory Testing is the way to go, sometimes it's not. Note: I recognise that at this point there are still many, many testers out there who don't know how to do ET properly or well. Sigh. That's unfortunate. Those of us who do understand ET have a long way to go to help educate the rest so that we can see a real improvement in the state of the craft of testing.

Ditto for Scripted Testing. If you're going to follow the exact steps (because it is important to do so), then follow the steps and instructions exactly. Can't follow the steps exactly because they are incomplete or no longer relevant? Well, what do you think you should do then?

The point of this note is just to say that no one side is correct. There is no one true, correct, testing approach/method. They both are and they both aren't. It's a paradox. An important one. Practice good science and understand what you're doing before you do it. Improvise only when you know you can. Understand the strengths, weaknesses, and risks of any approach in the given situation and you should do fine.