Agile Testing Automation

As an Agile/Testing consultant, trainer and coach there are certain questions I hear often. One of them is: "What tool should we use for test automation?"

My response is usually the same: "What do you want to automate? What kinds of tests or checks?"

Very often I see people and teams make the mistake of trying to find the silver bullet: the one tool to automate everything. Alas, there is no such tool, and I don't believe there ever will be. At least, not in the foreseeable future.

It's about this time that I usually look for a whiteboard or piece of paper so I can sketch a few diagrams to help illustrate the problem. There are two diagrams in particular that I like to use. (Aside: for reference, I first publicly presented this at the DevReach 2012 conference, and it is part of the Agile/Testing classes I teach. This is the first time I am blogging the topic since I am often asked about it.)

The first diagram I draw is Brian Marick's Agile Testing Matrix. In his blog post, he drew a 2x2 matrix as a way to categorise the different kinds of testing you might do on an agile development team. With these categories, you can see the relationships between the type of work certain team members usually do and the kind of value they provide to the project.

This model was popularised by Lisa Crispin and Janet Gregory in their 2009 book "Agile Testing" and it took on the name "Agile Testing Quadrants." Here is a version of their quadrants for reference:

[Image: the Agile Testing Quadrants]
This is a fascinating way to categorise the different kinds of testing and I use this to help people in different roles relate to the different feedback activities performed to deliver working software. When I look at this matrix, I often highlight certain features. One of them is the vertical line separating the right and left halves:

The testing activities on the Left side are what we sometimes call Verification tests, or the kinds of tests we perform to check that we "built it right." We do these activities to ensure high levels of internal quality of the software. If we are going to look to automate any kinds of testing, it's most likely to be from this side of the box. (For those of you in Testing circles, we might call this side "checking.")

The Validation tests on the Right side help us check that we "built the right thing." These cover externally-visible aspects of quality. A lot of the feedback on this side has two very noticeable traits:
  1. We generally can't automate these kinds of tests.
    • We need people to drive them and surprise us with their feedback.
    • We might leverage computers, tools and automation to help us do pieces, but the design, assessment and decision-making definitely requires human effort and experience or skill.
  2. Feedback here very often leads to design changes in some way, shape or form.
There is an important distinction between the two V's, and any good release requires that we do testing from both sides to ensure internal and external quality. (Aside: I'm not going to do a quadrant-by-quadrant breakdown at this time.)

The next diagram I like to reference on the topic of test automation is Mike Cohn's Test Automation Pyramid:

[Image: the Test Automation Pyramid]
This is a simplification of the problem of test automation to help us understand the relationships between the different kinds of automated tests or checks. In general, we should have many more unit or code-level tests than high-level end-to-end tests that run through the UI. We expect the low-level tests to run quickly and often, so they are naturally our first line of defense to help us build maintainable working software.
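A minimal sketch of what lives at the base of the pyramid; the `discount` function and its rules are hypothetical, and plain asserts stand in for whatever unit-test framework your team uses:

```python
# Hypothetical function under test: apply a percentage discount.
def discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Code-level checks: fast, isolated, and cheap to run on every build.
assert discount(100.0, 20) == 80.0   # typical case
assert discount(100.0, 0) == 100.0   # boundary: no discount
assert discount(50.0, 100) == 0.0    # boundary: full discount
```

Checks like these run in milliseconds, which is exactly why they can form the broad base of the pyramid.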

The top of the pyramid represents the more complex and slower-running tests that Testers generally create and maintain. These automated checks are important to help us know that the system appears to be running as expected; however, they are also harder to troubleshoot when something goes wrong.

So how do these two models fit together? Like this:

[Image: the Test Automation Pyramid overlaid on the Agile Testing Quadrants]
We can see that just about everything in Q1 is covered by the test automation pyramid. Devs, if you aren't automating your unit and service-level tests, you are doing it wrong. If you want to talk about tools for test automation, this is likely the best place to start. Automation here has the biggest ROI to help you on your agile transition.
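To sketch the service-level layer just above unit tests, a check might exercise business logic through its programmatic interface, below any UI. The `CartService` class here is a hypothetical example, not a real framework class:

```python
# Hypothetical service: a shopping cart exposed below the UI layer.
class CartService:
    def __init__(self) -> None:
        self._items: dict[str, int] = {}

    def add_item(self, sku: str, qty: int) -> None:
        if qty < 1:
            raise ValueError("quantity must be positive")
        self._items[sku] = self._items.get(sku, 0) + qty

    def total_items(self) -> int:
        return sum(self._items.values())

# Service-level check: no browser, no screen locators -- just the contract.
cart = CartService()
cart.add_item("SKU-1", 2)
cart.add_item("SKU-1", 1)
assert cart.total_items() == 3
```

Because these checks talk to the service directly, they stay fast and stable even while the UI churns.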

The next important observation is in Q2, the place where Testers typically self-identify. The key here is that NOT EVERYTHING CAN OR SHOULD BE AUTOMATED HERE! The closer you get to business-facing tests, the more you require human intelligence to interpret results in real time. Sorry people, the human brain outperforms machines most of the time with these kinds of problems, so this is where good Exploratory Testing practices can really help you learn how to test things efficiently and effectively. Gotta learn to use your brain to do the best work here and make good choices about which kinds of checks are worth automating.


The tests in Q4 are interesting. For the most part, specialists or experts may leverage tools and automation (as required or applicable) to help them test more, but very few tests become regular automated checks. For example, a Performance Testing specialist may leverage automated scripts to simulate certain usage patterns so they can monitor loads on various system resources (e.g. memory, CPU, network, and so on). Just because we leverage automation doesn't mean these tests are automated. However, there are a small number of checks that we might automate for benchmarking purposes to help us detect potential performance threats from one build to the next.
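One way such a benchmarking check might look, assuming a hypothetical `build_report` operation and an illustrative time budget agreed on by the team:

```python
import time

def build_report(rows: int) -> list:
    """Hypothetical stand-in for the operation being benchmarked."""
    return [i * i for i in range(rows)]

# Benchmark-style check: flag a potential performance regression
# when the operation exceeds an agreed budget (0.5 s is illustrative).
start = time.perf_counter()
build_report(100_000)
elapsed = time.perf_counter() - start
assert elapsed < 0.5, f"possible regression: took {elapsed:.3f}s"
```

A check like this won't replace a specialist's load testing, but it can catch a gross slowdown from one build to the next.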

One important point to note is that the automation tools differ widely depending on purpose and technologies, so they change from one quadrant to the next and from one layer to the next. Occasionally, we might be lucky enough to leverage the same automation tool for a few different test design approaches but caveat emptor!

Don't let the snake-oil salesmen fool you into buying tools that they say can test everything. Similarly, you have my permission to ignore those ignorant managers who decree that you must use the same automation tool for every kind of testing you need to do. Reality doesn't work that way, so don't waste your time and the company's money on fool's errands.

Know what you need the automation tools to do (ask if you are unsure), and consult with the skilled practitioners in those fields of development (yes, testers, this includes you) to help you assess fitness for use.

So, what tool should we use for test automation? The right one.

Let's do some research into the problem we're trying to solve and learn from others who have done work in this field. Allow yourself time to learn and experiment with tools to check their suitability. Tools are rarely perfect, so expect some time for customising tools to your specific needs. Demo what you learn to your team and get the whole team working together to fit the tools into the development cycle.

We succeed as a team, so never go it alone in an automation effort. Remember that test automation is a development task -- i.e. programming is involved and all the supporting activities that go with it (like maintenance, version control, coding standards, and so on). Always be sure to include and involve your developers in any automated testing evaluations and efforts.

Test automation enables development teams to go faster. Treat your test automation code on par with your production code. It may save you, your company or your customers one day. Give it some thought.

8 comments:

  1. Good article, and I agree 101%. I like this paragraph:
    "We succeed as a team, so never go it alone in an automation effort. Remember that test automation is a development task -- i.e. programming is involved and all the supporting activities that go with it (like maintenance, version control, coding standards, and so on). Always be sure to include and involve your developers in any automated testing evaluations and efforts"
    The success of an agile team is collaboration and lots of communication. The separation of devs and QAs must come to an end; we need to start working together to release an awesome product while enjoying working with our team mates!

  2. Do not confuse people. A tester that automates develops in the language of the automation tool; s/he doesn't develop in the language of the application being tested. So QA doesn't end.

    Replies
    1. I'm sorry, Any, I did not intend to confuse anyone. Yes, automation is a programming activity, though sometimes testers may automate in a language that is more natural. It's about the interfaces.

      For example, Fitnesse uses tables to capture/communicate the testing conditions in a structured format that clarifies the business rules and testing techniques. Cucumber + Ruby is a good tool that testers may use to create Executable Specifications. You are testing using the business language with automation in the background. Depending how the team works, a tester may never see/maintain the back-end automation. Is that what you mean?

  3. Interesting post Paul, I like how you've overlaid the models to give a different perspective.

    One question - what about the bottom left-hand corner of the pyramid? How does that fit into the overlaid model, please?

    It seems the bottom right corner does have significance, does the left corner have any?

    Duncs

    Replies
    1. Good question, Duncan. I have thought about that too. Sometimes I treat the quadrants/matrix as a closed set of 4 boxes for categorizing things, and sometimes I look at it as an x-y graph with an outside boundary drawn for convenience.

      If you think of the former, a closed set of categorization boxes, then anything outside the boxes doesn't make sense (to me, at least). If you think of the latter, an x-y graph, then that left corner sticking out is just a part of what's in that quadrant. When I look at that bottom left corner, I usually take the x-y graph view, so no, it doesn't have any significance (to me, right now).

      Cheers!

    2. Thanks for your response Paul, greatly appreciated.

      I'll take from your answer the rough idea of how the models can be overlaid & try not to sweat the details too much :-)

      Duncs

  4. Very good article and I also like the descriptive pictures :-).

    I have a question about Q3.
    When you write
    "The next important observation is in Q2, the place where Testers typically self-identify themselves. The key here is that NOT EVERYTHING CAN OR SHOULD BE AUTOMATED HERE! The closer you get to business-facing tests, the more you require human intelligence to interpret results in real-time."
    Are you then talking more about the tests in Q3, where you as a tester do more alternative tests to try to criticize the product (not the base flows)?
    I agree that not everything is worth automating in Q2, but we find (in our company) that our regression test suite (at the UI level) is quite good to have and is relatively stable.

    //Jessica

    Replies
    1. Hi Jessica, thanks for your question. When you perform the kinds of tests that are more Business Facing (i.e. at the top of the chart), your team members/testers are dipping more into the psychology and personas of your end users. This adds complexity that is very difficult to automate and is rather fun for the creative mind.

      The human mind is the most efficient and effective tool for quickly learning and exploring systems from different users' perspectives. When that learning has happened, we can select certain touchpoints along the user journey and choose to automate those checks for future reference (i.e. regression testing). The kind of UI-level test you describe is definitely Q2 and is what I would happily place in the automation part of Q2.

      Q3 is more of an exploration of solution fit so it requires real user/expert/specialist feedback, and the effect of this feedback is usually a design change somewhere in the developed product/software/system. This kind of feedback cannot be automated.

      Q2: Business Facing + Verification = some test/check automation possible.

      Q3: Business Facing + Validation = no test/check automation possible as computers require predetermined expected results. (Computer-assisted data gathering highly encouraged when possible.)

      It's a tricky line. Once you've understood & agreed upon a design requirement (i.e. from Q3 exploration), future checking of that requirement places the [automated] check in Q2.
