What is Exploratory Testing?

What is Exploratory Testing (ET)? I am asked this every once in a while and I hear a wide range of ideas as to what it is. This is one of those topics where Wikipedia doesn't really help much.

For some, ET is just "good" testing and the reason we say "exploratory" is to distinguish it from bad testing practices. Unfortunately, bad, lazy, haphazard, thoughtless, incomplete, and incompetent testing is quite popular. I won't go into the reasons or supporting evidence for this disgraceful blight on the Software Development industry at this time. Suffice it to say, I don't want to be mixed in with that lot either, so I am happy to describe what I do as something different - something that is far more successful and rewarding when done well.

Okay, so if ET = [good] testing, what is testing then? According to Cem Kaner, "software testing is a technical investigation conducted to provide stakeholders with information about the quality of the product or service under test." This definition took me a while to absorb, but the more I thought about it, the more I found it to be a pretty good one.

If you ask Elisabeth Hendrickson, she would say that "a test is an experiment designed to reveal information or answer a specific question about the software or system." See, now I really like this definition! I studied Science in university and I love the way this definition reminds me of the Scientific Method. The more I learn about testing software, the more I find similarities with doing good Science. (By the way, if you want to learn more about how to do good testing, I highly recommend you read up on the Scientific Method. So much goodness in there!)
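To make the "test as an experiment" idea concrete, here is a minimal sketch in Python. The slugify function and its expected behaviour are hypothetical, made up purely for illustration; the point is the structure of the experiment: pose a question, state a hypothesis, run the experiment, and compare the observation against the hypothesis.

import re

# Hypothetical function under test, used only to illustrate the idea of a
# test as an experiment.
def slugify(title: str) -> str:
    # Lowercase the title and collapse runs of non-alphanumeric characters
    # into single hyphens, trimming any hyphens left at the ends.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_collapses_punctuation():
    # Question: what does slugify() do with spaces and punctuation?
    # Hypothesis: they collapse into single hyphens, with none left dangling.
    observed = slugify("Hello, World!")   # the experiment
    assert observed == "hello-world"      # observation vs. hypothesis

if __name__ == "__main__":
    test_slugify_collapses_punctuation()
    print("Observation matched the hypothesis.")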

So, is that all there is to it? Testing = Science, blah blah blah, and we're done? Um, well, no, not really. ET has its own Wikipedia page after all!

Testing is a Medium

In a few days I will be giving a presentation to the local Agile/Lean Peer 2 Peer group here in town. The group has a web site, Waterloo Agile Lean, and the announcement is also on the Communitech events page.

I noticed the posted talk descriptions are shorter than what I wrote. The Waterloo Agile Lean page has this description:
"This session will introduce the basic foundation of Exploratory Testing and run through a live, interactive demo to demonstrate some of how it’s done. Bring your open minds and questions and maybe even an app to test. If ET is new to you, prepare to get blown away!"
The Communitech page has this description:
"Exploratory Testing is the explosive sound check that helps us see things from many directions all at once. It takes skill and practice to do well. The reward is a higher-quality, lower-risk solution that brings teams a richer understanding of the development project.
This session will introduce the basic foundation of Exploratory Testing and run through a live, interactive demo to demonstrate some of how it's done. Bring your open minds and questions and maybe even an app to test. If ET is new to you, prepare to get blown away!"

Quality Agile Metrics

I was asked recently what metrics I would collect to assess how well an agile team is improving. I paused for a moment to scan through 12 years of research, discussion, memories and experiences with Metrics on various teams, projects and companies - mostly failed experiments. My answer was that I presently acknowledge only one Metric as meaningful: Customer Satisfaction.

We discussed the topic further and I elaborated some more on my experiences. Regarding specific "quality" metrics, I explained that things like counting Test Cases and bug fix rates are meaningless. I also referred to the book "Implementing Lean Software Development" by Mary and Tom Poppendieck (which I highly recommend BTW), which warns against "local optimizations" because they will eventually sabotage optimization of the whole system. In other words, if I put a metric in place to try to optimize the Testing function, it doesn't mean the whole [agile] development team's efficiency will improve.

It needs to be a whole-team approach to quality and value. Specific measurements and metrics often lead to gaming of the system, shifting the focus to improving the numbers rather than to delivering quality and value. If the [whole] team is measured on customer satisfaction, then that is what they will focus on. I have long since stopped measuring individual performance on a team.

I haven't stopped thinking about this question, though, so I put it out on Twitter this morning:
Aside from Customer Satisfaction, are there any other Quality metrics you'd recommend in an #agile environment?