
Agile Testing vs Exploratory Testing

This month I will be doing two sets of training sessions - one for Agile Testing (AT) and one for Exploratory Testing (ET). I have extensive experience with both and I enjoy teaching and coaching both activities. This is the first time that I have been asked to do both training sessions at basically the same time, so I have been giving the relationship between the two some thought.

Over the past decade I have sometimes struggled with the difference between the two. I admit that I have on occasion even described them as being the same thing. Having given it some thought, I now think differently.

The insight came to me when I had a conversation with someone about both topics recently. The courses I'm doing have different audiences, different outlines, and different expected behaviours/outcomes. Yes, there is some overlap, but not much.

I have written previously about what I believe is ET (it's an interesting post - I suggest you take a moment to read it if you haven't already). Near the end of that article, I mention Agile Software Development and the Agile Testing Quadrants, so there is some relationship between the two.

ET is sometimes described as an exemplar of the Context-Driven Software Testing community. If you read through the seven basic principles of the Context-Driven Testing (CDT) School, you may see similarities between its philosophy and the values & principles listed in the Agile manifesto. Things like:
  • CDT: People work together to solve the problem
    • Individuals and interactions over processes and tools
    • ...face-to-face conversation
    • Business people and developers work together daily throughout the project
  • CDT: Projects are often unpredictable
    • Respond to change
    • Welcome changing requirements, even late in development...
    • Deliver working software frequently...
  • CDT: No best practices
    • Continuous attention to technical excellence and good design ...
    • The best architectures, requirements, and designs emerge from self-organizing teams ...
And we could probably make a few more connections between Agile and the CDT principles.

So what are some of the differences?

What is Exploratory Testing?

What is Exploratory Testing (ET)? I am asked this every once in a while and I hear a wide range of ideas as to what it is. This is one of those topics where Wikipedia doesn't really help much.

For some, ET is just "good" testing and the reason we say "exploratory" is to distinguish it from bad testing practices. Unfortunately, bad, lazy, haphazard, thoughtless, incomplete, and incompetent testing is quite popular. I won't go into the reasons or supporting evidence for this disgraceful blight on the Software Development industry at this time. Suffice it to say, I don't want to be mixed in with that lot either, so I am happy to describe what I do as something different - something that is far more successful and rewarding when done well.

Okay, so if ET = [good] testing, what is testing then? According to Cem Kaner, "software testing is a technical investigation conducted to provide stakeholders with information about the quality of the product or service under test." This definition took me a while to absorb, but the more I thought about it, the more I found it to be a pretty good one.

If you ask Elisabeth Hendrickson, she would say that "a test is an experiment designed to reveal information or answer a specific question about the software or system." See, now I really like this definition! I studied Science in university and I love the way this definition reminds me of the Scientific Method. The more I learn about testing software, the more I find similarities with doing good Science. (By the way, if you want to learn more about how to do good testing, I highly recommend you read up on the Scientific Method. So much goodness in there!)

So, is that all there is to it? Testing = Science, blah blah blah, and we're done? Um, well, no, not really. ET has its own Wikipedia page after all!

The Future(s) of Software Testing

This topic keeps coming up in various discussions, so here are some thoughts on what a future may hold for us one day. What does it mean to predict the future? Does it mean it is inevitable? If it is something appealing, maybe it is something we can work towards.

What does it mean to talk about the Future of Software Testing without talking about "Quality" (whatever that is)? I believe that Testing is a means of helping you attain the desired quality, but on its own it is not an indicator of what the delivered quality will be. I think it is fair to speculate on the practice of Testing in isolation from the idea of Quality, just as it is fair to speculate on the kind of vehicles you use for travel without claiming they are a clear indicator of the quality of the journey.

When it comes to predicting the future, how far ahead do we look? It used to bother me that H.G. Wells chose to go 800,000 years into the future in his famous work, The Time Machine. Some authors go 100 years, or even just 5 or 10, to tell their tale. I will not be writing about what will happen in 5 years' time, and I really hope it doesn't take 100 years. I don't know how long some of these events will take to manifest. I do have some idea of 'markers' that we may see along the way, though.

When it comes to Software Testing, two thoughts jump immediately to mind: 1) Software Testing is an inseparable part of the creative Software/Solution Development process; and 2) there will be many different possible futures depending on how you do testing today. Put another way, there is no one right way to solve a problem, and creating software is a complex problem-solving activity performed in shifting organisations of human interaction, so there are many ways to do it. In my opinion, the technical 'fiddly bits' like code, tests and user documentation pale in comparison to the difficulty of those two primary challenges: (1) solving the real problem, and (2) working with others to get the job done.

When we ask what is the future of software testing, we are really asking what is the future of software development. So what will Testing look like in the future? Well, what does Testing look like today? What does Development look like today?

Using MS Outlook to support SBTM

Okay, to recap, Session-Based Test Management (SBTM) is a test framework to help you manage and measure your Exploratory Testing (ET) effort. There are 4 basic elements that make this work: (1) Charter or mission (the purpose that drives the current testing effort), (2) Time-boxed periods (the 'sessions'), (3) Reviewable result, and (4) Debrief. There are many different ways that you might implement or apply these elements in your team or testing projects.
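The four elements above can be sketched as a simple record type. This is just an illustration of how a session might be tracked in code; the field names and the 90-minute default are my own assumptions, not part of SBTM itself:

```python
from dataclasses import dataclass, field

@dataclass
class TestSession:
    """One SBTM session: a charter pursued within a time-boxed period,
    producing a reviewable result that is discussed at the debrief."""
    charter: str                                # (1) mission driving this session
    timebox_minutes: int = 90                   # (2) the time-boxed period
    notes: list = field(default_factory=list)   # (3) the reviewable result
    debriefed: bool = False                     # (4) flipped after the debrief

    def log(self, observation: str) -> None:
        """Record an observation into the session's reviewable notes."""
        self.notes.append(observation)

# Hypothetical usage:
session = TestSession(charter="Explore login error handling")
session.log("Blank password produces a generic 500 error")
session.debriefed = True
```

Whatever form the record takes (a wiki page, a text file, a tool), the point is that each session carries all four elements, so the debrief has something concrete to review.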

Let's take a look at tracking the testing effort from strictly a Project Management perspective. Years ago, when I first became a test manager, I was introduced to the idea of the 60% 'productive' work day as a factor to consider when estimating effort applied to project schedules. That is, in a typical 8-hour workday you don't really get 8 complete, full hours of work from someone. I don't believe it's mentally possible to get that. The brain needs a break, as does the body, and there are many natural distractions in the workplace (meetings, email, breaks, support calls, stability of the code or environments, and so on), so the reality is that the number of productive working hours for each employee is actually something less than the total number of hours they're physically present in the workplace.

That 'productivity' factor changes with each person, their role and responsibilities, the number of projects in the queue, and so on. Applying some statistical averaging to my past experiences, I find that 60% seems about right for a tester dedicated to a single project. I have worked with some teams that have been more productive and some much less.

So what does this look like? If we consider an 8-hour day, 60% is 4.8 hours. I'm going to toss in an extra 15-minute break or distraction and say that it works out to about 4.5 hours of productive work from a focussed employee in a typical 8-hour day. Again, it depends on the person and the tasks that they're performing, so this is just an averaging factor.
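The arithmetic above can be written out directly. The numbers here are the rough averages from the text, not fixed constants, so treat them as starting points to tune for your own team:

```python
HOURS_PER_DAY = 8
PRODUCTIVITY_FACTOR = 0.60   # rough average for a tester dedicated to one project
EXTRA_BREAK_HOURS = 0.25     # the extra 15-minute break or distraction

# 8 * 0.60 = 4.8 hours, minus the extra break = 4.55, i.e. roughly 4.5 hours
productive_hours = HOURS_PER_DAY * PRODUCTIVITY_FACTOR - EXTRA_BREAK_HOURS
print(f"Productive hours per day: {productive_hours:.2f}")
```

Plugging in a different productivity factor for a more (or less) productive team changes the planning number accordingly.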