What is Exploratory Testing?

What is Exploratory Testing (ET)? I am asked this every once in a while and I hear a wide range of ideas as to what it is. This is one of those topics where Wikipedia doesn't really help much.

For some, ET is just "good" testing and the reason we say "exploratory" is to distinguish it from bad testing practices. Unfortunately, bad, lazy, haphazard, thoughtless, incomplete, and incompetent testing is quite popular. I won't go into the reasons or supporting evidence for this disgraceful blight on the Software Development industry at this time. Suffice it to say, I don't want to be mixed in with that lot either, so I am happy to describe what I do as something different - something that is far more successful and rewarding when done well.

Okay, so if ET = [good] testing, what is testing then? According to Cem Kaner, "software testing is a technical investigation conducted to provide stakeholders with information about the quality of the product or service under test." This definition took me a while to absorb but the more I thought about it the more I found it to be a pretty good definition.

If you ask Elisabeth Hendrickson, she would say that "a test is an experiment designed to reveal information or answer a specific question about the software or system." See, now I really like this definition! I studied Science in university and I love the way this definition reminds me of the Scientific Method. The more I learn about testing software, the more I find similarities with doing good Science. (By the way, if you want to learn more about how to do good testing, I highly recommend you read up on the Scientific Method. So much goodness in there!)

So, is that all there is to it? Testing = Science, blah blah blah, and we're done? Um, well, no, not really. ET has its own Wikipedia page after all!


I dislike the first-line description of ET on the Wikipedia page. I dislike it because it is incomplete. It says that ET is "concisely described as simultaneous learning, test design and test execution." ... AND?!? And then what?! This definition misses what happens after the execution part, and that part is really important.

Elisabeth Hendrickson offers a better description (IMHO): "Exploratory Testing is simultaneously learning about the system while designing and executing tests, using feedback from the last test to inform the next."

I like this because it closes the loop between the purpose of the test and what you do with the results. In this case, when you learn something from the test you intentionally performed, you use that to decide what you will do next. It is kind of like playing the game of 20 questions. If you play the game poorly, you ask specific questions about what you think the answer is - e.g. "is it a turnip? No. Is it a bicycle? No. Is it a ...?" There's a slight, random chance you may guess it right, but that's really unlikely. If you play the game well, each question you ask helps you narrow down the possibilities until you can make a good guess with a high probability of getting it right.
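
The 20-questions dynamic can be sketched in code. This is a hypothetical illustration (not from the post): playing the game well means letting each answer narrow the remaining possibilities, which is just binary search.

```python
def play_well(secret, low=1, high=100):
    """Guess a secret number by asking feedback-driven questions.

    Each question ("is it higher than mid?") uses the previous
    answer to halve the remaining range, exactly like a good game
    of 20 questions. Returns the number of questions asked.
    """
    questions = 0
    while low <= high:
        questions += 1
        mid = (low + high) // 2
        if mid == secret:
            return questions
        if mid < secret:
            low = mid + 1   # the answer rules out everything below mid
        else:
            high = mid - 1  # the answer rules out everything above mid

# Feedback-driven questioning finds any secret in 1..100 within
# about log2(100), i.e. 7 questions. Blind guessing ("is it a
# turnip? a bicycle?") averages dozens of questions for the same range.
worst_case = max(play_well(s) for s in range(1, 101))
print(worst_case)  # 7
```

The point of the sketch is the feedback loop, not the search: each test's result decides what the next test should be.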

I often use this diagram to explain the relationship between exploratory testing and test cases:

When we first look at a new feature or system, we don't know very much. We design experiments (or tests) to help us learn more about it. Initially, it is like the game of 20 questions, where we try many things and look at the system in different ways to try and discover what is important to someone who matters. That is, we explore the system for qualities and risks that we believe the customers, users, or other stakeholders may care about.

Test cases are different. Once you have learned something about the feature (ET session complete), you may choose to document or automate important or representative paths (i.e. test cases) through the software for future reference (e.g. "regression testing"). You don't learn anything new from these test cases, so we sometimes refer to them as scripted "checks". We may use and reuse specific test cases for many different purposes - e.g. regression testing, user profiles for performance testing, sanity checks, and so on.
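
As a sketch of that distinction (hypothetical code - `discount_price` is a stand-in for any behaviour you explored and decided was worth pinning down): once an exploratory session has identified a representative path, it can be recorded as a scripted check that a machine can re-run forever.

```python
def discount_price(price, percent):
    """The code under test -- a stand-in for behaviour that an
    exploratory session already investigated."""
    return round(price * (1 - percent / 100), 2)

def test_discount_representative_path():
    # A scripted "check": it confirms known-good answers on every
    # run, but running it teaches us nothing new -- which is exactly
    # why it is safe (and sensible) to hand it to a machine.
    assert discount_price(100.0, 25) == 75.0
    assert discount_price(19.99, 10) == 17.99

test_discount_representative_path()
```

The exploration happened earlier, when someone decided these paths and these expected values were the ones worth keeping; executing the check afterwards is pure confirmation.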

At some point in history, the (1) intent or purpose and (2) test design behind the testing activities were lost and some idiot propagated the idea that test cases are the important part of the testing activity. This "bad practice" has doomed most of the technological world for a few generations now.

Let me be clear about this: test cases are not important. Anyone who knows how to test can create and choose new representative paths at any time, and, often, the variations between the chosen paths through a system help us uncover new risks and potential problems. Testing requires thinking. Checking, or blindly executing test cases, does not. If executing test cases doesn't require thinking, you may as well program a computer to run them, because humans are famously bad at precisely following instructions.

An important difference between exploratory testing and scripted testing is that scripted testing blinds you to everything else going on in the system while exploratory testing aims to help you see more. To use a literary example, author Paulo Coelho posted a short story on "the secret of happiness" that illustrates this point. (NOTE: please read that story before continuing here - it'll only take a few minutes. I'll wait.)

I don't know if that is the secret to happiness, but I do know that in the first run through the palace, the young man was so focussed on the task that he missed everything else - this is exactly like scripted testing. The second time, the young man took in everything but forgot about the spoon - this is random or haphazard testing. Many people think this is exploratory testing but it is NOT! Exploratory testing would be how the wise man described the secret of happiness - complete your task and take in your surroundings.

It sounds hard, doesn't it? You know what, it IS hard. Good testing requires thoughtful effort and practice. If good testing was as easy as we are often led to believe then we wouldn't have all the software problems we have today, now would we?

Okay, so if doing good testing, exploratory testing, is hard, who can do it? Good question.

From one perspective, many people do this kind of testing naturally. BUT WAIT! They do it the same way many people intuitively solve rate and calculus problems in their heads whenever they catch a ball. The mathematics behind the motion of a ball through the air (gravitational, kinetic and frictional forces), coupled with your movement relative to the ball in order to catch it, is really quite complex. Not many people would say they understand or can do the math, but most people can catch the ball. So, part of your brain knows how to do the math even if it doesn't tell you how it does it.

It's the same with testing. There is a method to the madness. When someone goes looking for information, it is usually in response to some question in their head. Either someone asked them the question, or they thought it up based upon some related thought. That question drives you to poke, look, observe, and evaluate what you learn in order to answer it. That is testing. It has important elements: the question, intentional test design, observations, and analysis of results.

Some people are good at all of these elements, some are good at some of these elements, and some suck at all of them. To the latter group of individuals I say: please step away from the keyboard, and avoid management roles. Please.

There is an interesting side note related to Agile Software Development. Practitioners and coaches of agile methods may be familiar with the Agile Testing Quadrants. You will see that "Exploratory Testing" appears in quadrant 3, so what's that all about?

Funny you should ask. It is a bit misleading.

You may think that ET in Q3 means that it is something that is only done to critique the product with some business-facing tests. Not so. Exploratory testing will be performed in any and every quadrant as long as the person doing the testing is thinking, intentionally designing their tests, and learning from the results. Last time I checked, that happens in all the quadrants.

For example, when a programmer is creating unit tests to drive the development (Q1), they are thinking about the feature and design and making choices about what to automate. There is a lot of learning going on in this process and I would very much consider this discovery process as "exploratory". However, when the unit tests are coded and running automatically with every build, these are now "checks" and no more learning is taking place. So, executing these checks that were created in an exploratory way is no longer an exploratory testing activity. Get it?

Same thing with functional tests (Q2). You start off learning and exploring but once you decide upon and document a specific set of test cases, these test cases are no longer exploratory.

Quadrant 3 is an interesting place. It is the catch-all space for the million other tests that the system users and stakeholders may be interested in. The problem here is that complete testing is impossible and there is an infinite number of perspectives one may use to examine a particular system. The human brain is uniquely qualified to process a lot of different factors really quickly, integrating and adapting to new information, and eliminating and ignoring aspects that are not a priority to the stakeholders.

Computers cannot do this. Not even close. That's why the bubble in the corner of the matrix says "Manual" - because our brains are the most efficient tools to perform this kind of testing! Of course, we make use of tools and automation to help us gather information when appropriate; we just can't let ourselves fall into the trap of thinking that computers can do this for us.

So, while exploratory testing is a means to an end in the other agile testing quadrants, it is the primary approach in this particular quadrant (Q3). Got it?

So, if you fumble your way through the other three quadrants on your agile project and you are wondering why your quality still sucks, you may need to take a serious look at finding an awesome tester with some mad exploratory testing skills. Sorry to say that this is not widely taught in schools yet, so we are still something of a rare breed.

Does this help clear up a few things about Exploratory Testing? Please let me know. Cheers!

15 comments:

  1. Hi Paul,

    Congratulations on this amazing post. It really makes the idea of ET clear, thanks to the helpful examples.

    I share your opinion that ET is close to the Scientific Method, actually I published an article about it.

    As you know I'm discovering agile testing and your comments about the testing quadrant are illuminating and help me to see how I can improve the value of my testing.

    I really enjoyed reading this post, particularly your reference to ET as the "20 Questions" game and how you used P. Coelho's story to show the difference between Scripted, Ad hoc and ET. Brilliant!!!

    Thank you very much for sharing your ideas with us.

    Take care.

    David GR

  2. Great post - I really like the diagram of going from vague understanding to specific understanding

  3. Carsten Feilberg, May 25, 2012, 5:10:00 AM

    Wonderful post. I appreciate your ability to write up pretty much what I wish I could write myself. Thank you, Paul.

    I just made this blog post mandatory reading for my testers.

    Carsten

  4. Hey Paul,

    There is some brilliant stuff in here, and it is the reason why Chris McMahon is encouraging/begging people to update the software testing pages on Wikipedia. I hope you and others will give it a go. I particularly like the charts you've included.

    Thanks for the post!

  5. Thanks, Paul, for posting.
    This is very well written. I especially like the part about converting from ET to test cases. It explains perfectly where and when to use test cases.

    Thanks
    Pascal Dufour

  6. Paul, it's a good post. Your ET theory, properly Integration tested with a story, made the whole idea Functional, keeping the Load away, and yet left me Exploring :)

  7. For those interested in the Scientific Method I can recommend a book called "What is this thing called Science?" by A.F. Chalmers. I studied it for a Philosophy of Science module at Uni, and it's helped me understand why we understand that we understand everything we understand. Very much agree with the scientific method comparison. Makes me feel that 100% scripted approaches are pseudo-scientific.

  8. How about ET being all testing required to give the desired level of comfort that the software is ready to release outside of executing existing test cases. Isn't that what ET is?

  9. @Auto Tester: you could say something like that. Not all testing is exploratory, though. I think that being aware of your learning and the intentional test design separates good ET from other testing activities. I have seen many times when people generate information and act upon it without thinking about it much. I wouldn't call that ET.

  10. "An important difference between exploratory testing and scripted testing is that scripted testing blinds you to everything else going on in the system while exploratory testing aims to help you see more."

    I think it's more correct to say "Scripted testing *focuses* you on what's in the script and decreases the chance that you will see things outside the script". A session charter (whether formally described, or just your own sense of mission) will also focus you on different things, leading you to be more likely to see particular problems. The amount of focus an individual has is finite. An exploratory test primes your focus differently to a scripted test, but it almost certainly will cause you to *not* see particular things.

    Understanding the psychology at work helps you to tailor an approach to the weaknesses and biases of your preferred methods, the overall goals of the test effort, and the constraints you are under.

  11. @Jared, very well said. Thank you. I like your wording much better. When I teach testing I usually introduce Psychology, assumptions and biases before I get into the mechanics of ET. All observations are limited in some way and we are always swimming in a sea of doubt.

    I appreciate the clarification. Cheers!

  12. Amen.

    We've done quite a few studies on the topic, measuring the ratio of test steps (as in clicks, entries) to bugs found, and we've found that exploratory testing finds 10-20 times more bugs than scripted. And that was with testers untrained in the "art of exploratory"...

    Scripts are typically blind and narrow, and only a few end-to-end tests should be scripted and automated.

  13. Paul,
    Twice in the last few months, people have referred to your post, and I was completely sure I had read it. But going back and reading it through carefully, I am not so sure. It must have been something else you wrote that I was thinking of.

    This is absolutely fabulous! Thank you, I especially liked the figure with certainty/uncertainty and how you got the regression checks into it =)

    Cheers!
    Sigge

  14. Hi Paul!

    What a fantastic blog!! Loved the references you used to explain ET (e.g. 20 questions). As a fellow tester, I hope that many Managers read this post & encourage practicing ET.
