SBTM is not ET

There's a subtle but important distinction that I'd like to talk about. Session-Based Testing is *not* Exploratory Testing. Please stop using those terms interchangeably, because they are not the same thing.

Exploratory Testing (ET) is a testing approach that puts the emphasis on real-time learning, test design and test execution, as opposed to a more "scripted" approach that emphasises the separation of these activities - in time, in space, and usually with copious amounts of documented artifacts.

When I first started in I.T. over 20 years ago, any testing I did as part of my programming contracts was exploratory in nature. I didn't call it 'ET' at the time and I certainly didn't approach it with the same discipline and formality that I do today. Back then, programming was my main focus and testing was just something I did as required along the way. Ten years later (or about 12 years ago depending on your perspective), I took a workshop class on "Test Case Design" with Ross Collard. That was an amazing class that opened my eyes to a whole new world of analysis and problem solving that I hadn't known before. Cool!

After that workshop, I had plenty of opportunities to practise what I learned, try new techniques and tools, and explore additional testing ideas thrown out onto the just-budding software testing mailing lists. One of the things we discussed in Ross' class was the role of "ad hoc" or informal testing. I don't have access to the data, but some study-or-other at the time (the 90's sometime?) showed that ad hoc testing failed to produce the same testing coverage that formal test design analysis did.

Okay, I buy that. To paraphrase: guessing ideas off the top of your head consistently produced less coverage than having some structured analytical approaches/techniques/heuristics/models at your disposal. Okay. I don't need a formal study to tell me that.


So what's different with Exploratory Testing? Well, when I first learned about ET at the turn of this century, it instantly clicked with me. Rather than the "guessing" attitude normally associated with "ad hoc" testing, ET clearly defined the testing approach in a way that made you think. You learn something; you design something; you test and observe something; repeat. Note that nowhere in there does it say "take a wild-ass guess and call it good, complete or even 'good enough' testing coverage."

Before I was introduced to ET, I had spent several years practising and training other testers on test design techniques. That helped me fill in the "test design" step of ET. That step is the weakest link for most of the testers I have met and spoken with over the years who have tried ET and given up on it (i.e. failed with it). You can't really fake your way through test design. That's why I make it an important part of my hiring/interviewing process (you can read the article online).

So, what *don't* I like about ET? There's just one thing really. The ET approach formalised the learning, test design and test execution aspects of testing, but not the interpersonal communication aspect of it.

The 'scripted' (waterfall) approach to testing relies on the documenting (and maintenance) of hundreds or thousands of test cases, each with their own set of pre-conditions, steps, expected results, and so on. While the value of these documented test cases may be questionable, one thing going for the approach is that you can share these test ideas with other people quite easily. (They're documented; pass them on.)

In ET, not so much. If the important parts of testing take place in your head as you process all of the inputs and information, and compare them with explicit and implicit requirements and expectations, in order to assess the quality of the application/system under test (A/SUT), then when/how do you share those test ideas with other people (testers, developers, business analysts, etc.)? Well, you don't. Or rather, ET alone doesn't give you any advice for communicating test ideas or testing coverage with others.

Enter "Session-Based Test Management" (SBTM) or just '"Session-Based Testing".

Aha! After a year or two of using ET, I instantly found the merit in SBTM. SBTM provides the framework that you can wrap around an ET approach. It is a way that you can manage the testing effort. It has four main elements: develop specific charters, time-box an uninterrupted work session, create a reviewable result, and review/debrief the session afterwards.
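
To make "develop specific charters" a little more concrete, here's an illustrative example of my own (not taken from the original SBTM materials) of how the four elements might look for a single session:

    CHARTER:  Explore the CSV import wizard with malformed and oversized
              files to discover how errors are reported to the user.
    TIME BOX: 90 minutes, uninterrupted.
    RESULT:   A session sheet with test notes, bugs and issues.
    DEBRIEF:  A short review with the test lead after the session.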

Here's the catch: it is *not* a testing approach! It is a test management framework. Actually, when I teach/describe it to others, I sometimes refer to it as "Session-Based Task Management."

I have taught SBTM to programmers as a way to help them manage their time and reduce the number of interruptions during a work day. I have also successfully implemented SBTM in a waterfall organisation where very little ET was ever performed.

Yes, you read that correctly. I have even wrapped SBTM around a *scripted* testing approach.

Eek! Egad! Gadzooks! Isn't that blasphemy?

Well, actually, no.

You see, I have found that SBTM is an incredibly powerful tool for a test manager. It gives you insights into aspects of testing that you might never have without it.

The four main SBTM elements provide a solid foundation for managing your work, and they can be transferred to activities other than just ET. For example: programming, writing, organising/cleaning your basement, any consulting work, and so on.

The original SBTM framework included some Perl scripts that I have long since stopped using. The original archive also included a session sheet template, but like any template you can modify it and tailor it to your needs. (If in doubt, just ask James Bach for his thoughts on Test Plan templates! :)) That ability to tailor is the main reason I rewrote the SBTM scripts in Ruby - so that I could customise the session sheets to the needs of the projects I worked on. For one project I added a section to the session sheet, and for another project I completely removed the TBS (Test/Bug/Setup) metrics; my Ruby scripts are flexible and can handle such changes easily. (ASIDE: I haven't made this customisable script publicly available on my site yet. Send me a note if you are interested in trying it.)
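
To give a flavour of what such scripting involves, here is a minimal, hypothetical Ruby sketch - not the original SBTM scripts and not my actual rewrite - that scans plain-text session sheets and reports the overall TBS breakdown. The directory layout and the task labels are assumptions for illustration only:

    #!/usr/bin/env ruby
    # Hypothetical sketch: scan plain-text session sheets and report the
    # TBS (Test/Bug/Setup) percentages across all sessions.
    # Assumes each sheet lives under sessions/ and contains lines such as:
    #   TEST DESIGN AND EXECUTION: 60
    #   BUG INVESTIGATION AND REPORTING: 25
    #   SESSION SETUP: 15
    # (the numbers being minutes, or any consistent unit of effort).

    TASKS = [
      "TEST DESIGN AND EXECUTION",
      "BUG INVESTIGATION AND REPORTING",
      "SESSION SETUP"
    ].freeze

    totals = Hash.new(0)

    Dir.glob("sessions/*.txt").each do |sheet|
      File.readlines(sheet).each do |line|
        TASKS.each do |task|
          if line =~ /\A#{Regexp.escape(task)}:\s*(\d+)/
            totals[task] += $1.to_i
          end
        end
      end
    end

    grand_total = totals.values.sum

    if grand_total.zero?
      puts "No TBS data found in sessions/*.txt"
    else
      TASKS.each do |task|
        pct = 100.0 * totals[task] / grand_total
        puts format("%-35s %5.1f%%", task, pct)
      end
    end

The point is simply that once the session sheets follow some agreed-upon structure, tallying or customising the metrics is a small scripting job - which is exactly why tailoring the sheets per project is so easy.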

In fact, if you follow the intent of SBTM, you don't need to use the session sheet template or scripts at all - as long as you have some agreed-upon reviewable result that you can later debrief. In this way, I have heard of some test teams that have implemented SBTM using wikis, and others that have integrated old Test Case Management systems into the process. Sounds cool and innovative to me!

So, what's my gripe? In the last several weeks, I have read more than once that Exploratory Testing includes time-boxed, chartered sessions with reviewable results. Umm, no. I'm pretty sure you're confusing the framework with the approach, the wrapper with the content, the book format with the story.

If you have implemented SBTM on a project, I can make no assumptions about what testing approach you are following. Likewise, if you include ET in your overall testing strategy, I won't assume you are using SBTM to manage that effort.

If you want to talk about ET or SBTM, please try to describe them in the correct context. It will make it less confusing for beginners and other interested parties. Granted, together you have a very powerful combination. But Superman is still super in a different suit. =)

3 comments:

  1. Very good point regarding ET's weakness in
    'how do you share those test ideas with other people (testers, developers, business analysts, etc.)?'

    I am still surprised that Exploratory Testing doesn't have a concept for lightweight communication of test ideas.
    Just a list of one-liners could suffice.

    SBTM seems more focused on test execution (as opposed to early test idea generation, refinement, and sharing).
    Maybe the input to sessions could be rapidly documented, so it can be reviewed by testers, developers, stakeholders prior to the test execution.

  2. Thanks for the note, Rikard.

    What I have done in the past is to start with an "Exploration & Analysis" session when looking at a new feature. We look for the important elements, potential risks, and generate some initial charters that we believe will give us good coverage of that feature.

    We then make this list of charters available for anyone to review. It's usually at a high enough level that people in other departments can provide useful feedback and input. There's no standard template for this, and I have heard and seen people do it in different ways in different companies.

    That's not necessarily a by-product of ET... I'd say it comes out of a combination of ET and SBTM... and you'd need to know how to do it. I learned to do this from experience and not because I've seen it documented as a "best practice" anywhere.

    I don't know of any easy way to communicate test ideas that are generated during a test session. I/we think of so many tests, and the point is *not* to document them all. There are, of course, different ways to record the testing performed, but you would have to find the balance that works for you between documenting (one-liner) tests and actually doing the testing.

    Cheers!

  3. Hello, Paul.
    I found your blog a few hours ago, and I have been reading it since :-) I came across it via STAQS.com when googling for SBTM tools.
    I was looking for a better SBTM tool than the Perl-script-based one on satisfice.com.
    I really liked your Lessons learned in Session-Based Exploratory Testing.

    I have used SBTM for several years now and I was very interested in your SBTM experiences. I share many of your ideas regarding SBTM, test plans, communicating tests, et cetera. Many of my own test ideas come from people like Bach, Bolton and Kaner.

    I read that you have developed the scan tool further in Ruby. I am very interested in testing how that works, if you would like to share it with me. Like I wrote in the beginning, I was looking for a better tool than the original scan tool in Perl.

    I would gladly share my experiences from SBTM if anyone is interested.

    PS: I have also tested SBTM once as a framework in waterfall organisations with test cases :-)
