What I learned about Testing from a crazy ex-girlfriend

I was reminiscing with a tester colleague today about how our mothers used to mess with our stuff when we were younger and how it really got on our nerves.

Picture the scene: you have a desk in your room that's plastered with papers and stuff everywhere, and you know precisely where everything is. It's your mess, after all.

Enter the mom. She looks around; maybe she's come in to drop off some laundry, or to complain about the state of your room, or whatever. You aren't around. She starts to tidy. She tidies the papers on your desk and sorts your action figures/books/pencils/Lego/rubber band collection/whatever into some neat arrangement.

You return. "Ahhhh! Where's my stuff?!?! You changed the order! I can't find anything now! Don't touch my stuff!!"

Your mom, now hurt because she was "only trying to help," vows to never touch your stuff again unless someone's life depends on it. Maybe. We'll see next week.

What skill does Exploratory Testing require?

I've just been confronted with a sobering reality.

I've heard the term "Exploratory Testing" (ET) used many times over the last few years by developers and testers at various gatherings. I've practiced it myself for over six years in various black-box system testing efforts. When training new testers on my team, I grounded them in the foundational concepts: context, risk, the scientific method, test techniques, and communication. Then, over the course of several weeks, I reviewed their test sessions and gave feedback during debriefs to improve their understanding and application of the testing skills required to be efficient and effective.

People have told me that I have really high standards, and perhaps I do. To me, testing is a passion and a pleasure, and quality is an ideal achieved through effective communication and interaction with all the stakeholders on a project.

But that's all beside the point. If the question is "what is Exploratory Testing and how do you do it?" then my standards and expectations of team members are irrelevant.

To paraphrase a simplified definition, ET is simply an approach to testing software in which the tests are not predefined and the focus is on iterative learning, test design, and execution.

How someone learns, how someone designs tests, how someone executes those tests - these things are not defined by any standard; they are applied differently by different people. ET can be performed by anyone. There aren't any requirements for how well or thoroughly someone should perform it.
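
To make that contrast concrete, here's a minimal sketch in Python. This is purely my analogy, not a formal model of ET: buggy_add is a made-up stand-in for a system under test, and the session loop only crudely mimics the ET cycle of executing a test, observing the result, and designing the next test around what was just learned.

    import random

    def scripted_suite(add):
        """Test-case-driven: the checks are fixed in advance and simply replayed."""
        assert add(2, 3) == 5
        assert add(-1, 1) == 0
        print("Scripted suite passed.")

    def exploratory_session(add, probes=20):
        """A code-shaped analogy for an exploratory session: run a test, observe,
        and let any surprise redirect the next test. (Real ET is a skilled human
        activity; this loop is only an illustration.)"""
        for _ in range(probes):
            a, b = random.randint(-100, 100), random.randint(-100, 100)
            result = add(a, b)
            if result != a + b:  # learn something unexpected...
                # ...then design follow-up tests around the surprise
                print(f"Surprise: add({a}, {b}) = {result}; probing nearby values:")
                for delta in (-2, -1, 1, 2):
                    print(f"  add({a + delta}, {b}) = {add(a + delta, b)}")
                return
        print("No surprises in this session.")

    if __name__ == "__main__":
        # A deliberately buggy function (hypothetical): wrong whenever a > 50
        buggy_add = lambda a, b: a - b if a > 50 else a + b
        scripted_suite(buggy_add)       # passes: the bug lies outside its fixed cases
        exploratory_session(buggy_add)  # will probably stumble onto the a > 50 bug

The point of the analogy: the scripted suite can pass forever without noticing the bug, while the exploratory loop's value depends entirely on how well each surprise gets investigated - which is exactly where the skill comes in.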

To quote the critic Anton Ego from the animated movie "Ratatouille": "In the past, I have made no secret of my disdain for Chef Gusteau's famous motto, 'Anyone can cook.' But I realize, only now do I truly understand what he meant. Not everyone can become a great artist, but a great artist can come from anywhere."

So, when I hear a team throw the term ET around, I have about as much understanding of how they're testing as I do of a development shop that describes itself as "Agile". That is, I don't know anything about what the term means to them, how they're applying it, how effective it is, or how it compares to my standards and expectations.

I've been reading articles and research lately comparing ET and Test Case-driven Testing (TCT) approaches, and it never ceases to amaze me how statistics and research can be twisted to support whatever the author already believed about which approach is better.

Developers and Product Managers who have worked with me understand the quality of the information and feedback that my testing style provides. They have said it is a level of testing feedback they had never seen before. It makes me feel good to hear that - that I'm providing a valuable service.

But when I read one of these comparison articles, I have to assume that the ET applied in the research studies isn't at the level at which I apply it. I have to accept that. I may not like it, but that's the reality. By the same logic, that research would likely show that Agile and Waterfall aren't really all that different in terms of the output they produce. Sigh.

Am I missing something?