
Agile Testing Automation

As an Agile/Testing consultant, trainer, and coach, there are certain questions I hear often. One of them is: "What tool should we use for test automation?"

My response is usually the same: "What do you want to automate? What kinds of tests or checks?"

Very often I see people and teams make the mistake of trying to find the silver bullet: the one tool to automate everything. Alas, there is no such tool, and I don't believe there ever will be. At least, not in the foreseeable future.

It's about this time that I usually look for a whiteboard or a piece of paper so I can sketch a few diagrams to help illustrate the problem. There are two diagrams in particular that I like to use. (Aside: for reference, I first presented this publicly at the DevReach 2012 conference, and it is part of the Agile/Testing classes I teach. This is the first time I am blogging about the topic, since I am often asked about it.)

Agile Testing vs Exploratory Testing

This month I will be doing two sets of training sessions - one for Agile Testing (AT) and one for Exploratory Testing (ET). I have extensive experience with both and I enjoy teaching and coaching both activities. This is the first time that I have been asked to do both training sessions at basically the same time, so I have been giving the relationship between the two some thought.

Over the past decade I have sometimes struggled with the difference between the two. I admit that I have on occasion even described them as being the same thing. Having given it some thought, I now think differently.

The insight came to me recently during a conversation with someone about both topics. The courses I'm doing have different audiences, different outlines, and different expected behaviours/outcomes. Yes, there is some overlap, but not much.

I have written previously about what I believe ET is (it's an interesting post - I suggest you take a moment to read it if you haven't already). Near the end of that article, I mention Agile Software Development and the Agile Testing Quadrants, so there is some relationship between the two.

ET is sometimes described as an exemplar of the Context-Driven Software Testing community. If you read through the seven basic principles of the Context-Driven Testing (CDT) School, you may see similarities between its philosophy and the values & principles listed in the Agile Manifesto. Things like:
  • CDT: People work together to solve the problem
    • Individuals and interactions over processes and tools
    • ...face-to-face conversation
    • Business people and developers work together daily throughout the project
  • CDT: Projects are often unpredictable
    • Respond to change
    • Welcome changing requirements, even late in development...
    • Deliver working software frequently...
  • CDT: No best practices
    • Continuous attention to technical excellence and good design ...
    • The best architectures, requirements, and designs emerge from self-organizing teams ...
And we could probably make a few more connections between Agile and the CDT principles.

So what are some of the differences?

What is Exploratory Testing?

What is Exploratory Testing (ET)? I am asked this every once in a while and I hear a wide range of ideas as to what it is. This is one of those topics where Wikipedia doesn't really help much.

For some, ET is just "good" testing and the reason we say "exploratory" is to distinguish it from bad testing practices. Unfortunately, bad, lazy, haphazard, thoughtless, incomplete, and incompetent testing is quite popular. I won't go into the reasons or supporting evidence for this disgraceful blight on the Software Development industry at this time. Suffice it to say, I don't want to be mixed in with that lot either, so I am happy to describe what I do as something different - something that is far more successful and rewarding when done well.

Okay, so if ET = [good] testing, what is testing then? According to Cem Kaner, "software testing is a technical investigation conducted to provide stakeholders with information about the quality of the product or service under test." This definition took me a while to absorb, but the more I thought about it, the more I found it to be a pretty good one.

If you ask Elisabeth Hendrickson, she would say that "a test is an experiment designed to reveal information or answer a specific question about the software or system." See, now I really like this definition! I studied Science in university and I love the way this definition reminds me of the Scientific Method. The more I learn about testing software, the more I find similarities with doing good Science. (By the way, if you want to learn more about how to do good testing, I highly recommend you read up on the Scientific Method. So much goodness in there!)
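To make the "test as an experiment" framing a little more concrete, here is a minimal sketch in Python (pytest style). Note that apply_discount is a hypothetical function invented purely for illustration; the point is the shape of the experiment - a specific question, a prediction, and an observation - not the particular code.

    # A test framed as a small experiment, in the spirit of Hendrickson's definition.
    # NOTE: apply_discount is a hypothetical function used only for illustration.

    def apply_discount(price: float, percent: float) -> float:
        """Hypothetical system under test: apply a percentage discount to a price."""
        return round(price * (1 - percent / 100), 2)

    def test_discount_answers_a_specific_question():
        # Question (hypothesis): does a 10% discount on $20.00 yield $18.00?
        observed = apply_discount(20.00, 10)   # run the experiment
        expected = 18.00                       # predicted outcome
        assert observed == expected            # compare observation to prediction

Run it with pytest and you get an answer to exactly one question. If the observation doesn't match the prediction, you've learned something worth investigating - which is the whole point of an experiment.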

So, is that all there is to it? Testing = Science, blah blah blah, and we're done? Um, well, no, not really. ET has its own Wikipedia page after all!

Testers, Learn about Agile (and Lean)

Let me tell you about something called Dramatic Irony. You see it in movies, television shows, plays, and in many other places. It happens when you (as the audience or observer) see or understand something that the main characters don't. Oftentimes this is funny; sometimes it's not. Personally, I am one of those who like to laugh when I see this happen.

On my learning/education quest over a decade ago, I took many different positions and roles within various IT organisations so that I could learn different aspects of Quality. I went through various phases, and the one I am least proud of was the "Quality champion." This wasn't a job title so much as a belief that (mis-)guided my actions. The role/part/perspective came mainly from believing what my employer(s) told me at the time - namely that "the QA/Test team was responsible for quality."

If you have worked in Software Development for a while, and perhaps for a larger organisation, you have likely seen someone who believes they are a Quality Champion. They don't want to see any (known) bugs go out; they check up on everyone in the team to see that they have done their reviews or had someone else inspect their work before passing it on to the next person/team; they join committees to create, document, maintain or present processes that will increase the quality of the delivered products/solutions; and so on.

Ah, the poor misguided fools. Bless their hearts.