Test Management is Wrong

Test Management is wrong. There. I said it.

I can't believe it took me this long to notice the obvious. If you are doing Software Development in any fashion, and are worried about how to manage your testing to develop a "quality" product, stop it.

Let's be clear about what I mean here. If you consider any software development life cycle (SDLC) process, you will find activities like the following arranged in some fashion or other:
  • Requirements gathering, specification
  • Software design
  • Implementation and Integration
  • Testing (or Validation)
  • Deployment
  • Lather, Rinse, Repeat (i.e. Maintain, Enhance, Fix, Mangle, Spin, and so on)

These activities aren't Waterfall or Agile or anything else; they are just activities. HOW you choose to do them will reflect your SDLC. I don't care about that right now. The part I'm picking on is the Testing bit near the middle, regardless of whether you work in an Agile, Waterfall, or some other fashion.

In particular, I am picking on the fallacy or myth that a good Test Management plan/process is what you need to develop and release a high Quality product.

In almost 25 years of working in the Software/IT industry, I have never worked at, nor heard of, a company with a Test Management process so solid that it led to high quality. I don't believe there ever will be one, either.

Here's an analogy. Let's say I want to make some Rice Krispies treats. The ingredients are: butter, marshmallows, and Rice Krispies. You heat 'em up, mix 'em up, flatten 'em out, cool, slice, and eat. That's it.

Ya, but you know what? I have this really excellent Marshmallow Plan that will produce the Bestest treats anyone has ever tasted.

How does that sound to you? Interested to learn more about the plan? Be truthful.

If you said "Yes", I don't know if I can help you. If you said "No" there may be hope for you yet.

Why is it that Software companies are attracted to the idea that Testing is somehow equated with Quality? Testing is an activity that you may choose to do or not do and still have a great quality product. The Agile Manifesto doesn't mention testing anywhere and yet the Manifesto's signatories, thought leaders, and practitioners produce great quality stuff for people. How does that work?

Why is it only the Testing phase/activities/part that equates with Quality? Why aren't companies and managers everywhere promoting super awesome Requirements Management processes and plans at Requirements Conferences to help deliver high Quality? What about Design Management? No, no, wait, I got it. We need Deployment Management. That's it! I've solved it. Aha!

No, these are all stupid suggestions. And you know it too.

Is it because Testing/Verification activities are part of the other 'phases' or steps in software development, so surely we should be able to manage all that, right? Well, actually, no. That's the wrong approach.

Test Management systems manage and measure some of the testing activities performed on a project. Completing your testing activities tells me nothing about the overall "Quality" of the product. By definition, it is an incomplete part of the picture.

By putting your faith in Test Management plans or systems you are effectively saying "I don't know how to measure what you want (i.e. Quality), so I will measure what I can do (i.e. Test)."

Does this mean it's pointless to track the testing you've done? NO! I am not saying that. If you do something that you believe contributes to the value of the project, then please track what you are doing so that others can see what you have done. Preferably, make your progress visible in some way.

What we need to focus on is what the customer needs. Ask yourself: What problem are they trying to solve? What are we trying to deliver that is of value to them?

What we need is Value Management. There are successful (i.e. "quality") products built and released to customers that never had any independent testers on the development team. Cool. These teams get it.

I have a friend and colleague who started up a company and within hours had sold a product that didn't even have a single line of code written. Way cool! He gets it. Did he worry about Test Management? Bah! He didn't even need to pay a programmer to sell something of value to a customer.

I have mentioned it before: if you are doing any kind of testing on a software development project, you can likely place it somewhere within the Agile Testing Quadrants. Heck, you don't even have to be on an Agile team or project to see that your testing fits somewhere on the chart.

So, if you somehow create a super Test Management Plan or system that tracks all of these activities for a given project, then would you have a Quality product? That would be really cool, by the way, but no, not necessarily. And don't fool yourself into thinking that it will.

The problem is one of relationships. Human relationships.

The definition of Quality I like to use comes from Jerry Weinberg ("quality is value to some person"), and it is based on your understanding of the relationships between people. The definition of "software" that I use when teaching/training/coaching is that software reflects your understanding of the relationships between people and the systems they interact with to meet particular needs. The definition of Testing that I like to work with is the intelligent use of models and heuristics to explore the relationships between the people designing a system and those who intend to use it.

In short, if you want to understand what provides "quality" or value to your customers, look to how you are managing the human relationships. Forget Test Management. You might have better luck implementing a CRM system in your development departments as a predictor of Quality.

Testers: if you want to do a great job, focus on the testing activities that explore the systems from the perspectives of the people who matter. Never hide or bury your work (i.e. in documents, spreadsheets, test management systems, etc.). Make it visible - use dashboards, mind maps, or other visual media, because they assist with team collaboration and understanding. Keep detailed records archived somewhere if you need them, but don't worry about "managing" those fiddly bits as an indicator of "quality" - because they aren't one.

Forget about the Test Plans though. Disregard Test Management systems. AVOID providing any metrics on testing coverage as an indicator of Quality... especially if your development team isn't producing equally misleading metrics about code complexity, requirements reviews, and other esoteric development activities.

Anyone (ALM Vendors take note) who sells a "Quality Management" system based upon managing test cases is lying to themselves and to you. They are doing it wrong. Don't be a victim.

You can't kludge a development process enough to make a Quality or Test Management system produce meaningful, valid indicators of value to your customers and stakeholders. It just doesn't make sense.

Approach your development teams from a human relationship perspective. Focus on how people collaborate and work together, and high quality products will emerge as a by-product. Manage the relationships with your customers, and deliver working software frequently to help them see what you can do and how you learn from past experiences. This isn't easy. It's definitely worth it though.

So. You want to deliver Quality? Test Management isn't the answer. Test Management tells you about a specific task's management. It's about as useful as Marshmallow Management in making Rice Krispies treats. That is, it may have a place in the big picture, but it's certainly not the right way to look at the problem.


  1. This fits in so well IMO with Gojko Adzic's new book on Impact Mapping. Too many business experts ask us for software features without really knowing what value those will bring for the biz. We should help them understand the "why" first. What brings biz value may or may not be a s/w feature.

  2. A long time ago, in the 1970s I believe, someone decided to define testing as an emotionless, mechanical exercise. It was also decided at some point that testing should be called 'quality assurance' because it seemed related to the QA processes in manufacturing.

    We as testers have been living with the consequences of this ever since. It is very easy for upper management to look at a department labeled 'quality' and assume they provide it and are responsible for it, hence the existence of all of the 'quality management' suites.

    Anyone in testing, and especially anyone who has a testing group reporting up to them, should read this.

  3. But, but, but... how will we know the project is on track if we don't know what percentage of test cases have been written, run, passed, or failed? How can we move through the Quality Gate to the next phase if we don't know that we have 100% requirement coverage? Do I still need to produce a RACI matrix? How are we going to decide how many test cycles to run?

    I've worked on projects where the 'test managers' never used the product, had no idea what the customers would do with it, and had no time to train their testers because they were too busy running reports on how testing was going.

    Like Lisa, I'm also reading Impact Mapping and working towards getting a better understanding of what the customers want and value.

    Great post.

  4. I must say you hit the nail on the head and exposed the truth about TM. Still, that doesn't hide the fact that many managers wrongly believe that 'what can't be measured can't be controlled'. Measurements are the illusion of control and knowledge. If a tool is lying, then where is the truth? If you cannot trust the people on the ground, then a tool changes nothing. TM or ALM tools put you in a straitjacket that works against the way we do testing or perform any other activity within a project. They cannot capture human interaction with the software, which has many facets such as bias, emotion, and behaviour that affect our testing. There is no room to record the activities outside testing that testers are dragged into. If it is a tool to measure progress, then it fails at providing feedback and improvement: comparing results release by release, linking improvements to activities, and offering vital intelligence about better ways of doing things. At the end of the day it is a no-brainer tool that relies heavily on human intelligence to attach validity to its results. Software development is a social science that sits outside the scope of the TM/ALM toolset, and dependency on those tools is doomed to failure.
    We are all exposed to the quality syndrome to some extent and made responsible for it. Don Reinertsen (http://www.amazon.co.uk/Principles-Product-Development-Flow-Generation/dp/1935401009/ref=sr_1_1?s=books&ie=UTF8&qid=1361719696&sr=1-1) said, 'Watch the product work, not worker work'. As Mary Poppendieck said in one of her quotes, 'quality' is not the responsibility of a tester or a developer but of the system. A systems thinker may be the one responsible for value-mapping, having a better view of the system and the value it should deliver. In systems thinking, a system consists of everyone who interacts with it directly or indirectly, human or non-human. Surely, we as testers hardly ever know the full list of stakeholders and the value the software delivers to each. Even if we did know, how would a test management tool record those attributes? Impact Mapping has changed the terminology from stakeholders to actors/players, for the benefit of European readers who have trouble with the term 'stakeholders'. How would a TM tool handle that change?
    Jerry Weinberg's (http://secretsofconsulting.blogspot.ca/2012/09/agile-and-definition-of-quality.html) definition of quality is as good a definition as any you can find, and it has stood the test of time. How do you, through your testing, break down the value for each stakeholder and record it for everyone to see? A Kanban board or post-it notes provide better visibility and status than a TM tool ever would. One Twitter message recently said it all: 'Jira does not speak for itself the way a post-it note does.' A low-tech dashboard (http://www.satisfice.com/presentations/dashboard.pdf) is more effective at delivering visibility than the cumbersome TM tools from the major vendors.
    Unfortunately, I plan to attend two one-day ALM events, this month and next, from HP and other vendors, to find out more about what's on offer and how they claim to improve the SDLC.

  5. I read your post and I'm left with little idea what you mean by test management. In part of your post you seem to be attacking the very idea of testing, in another part you seem to be attacking the very idea of managing.

    The word "management" means many things. One of the problems I keep seeing in Agile projects is the lack of management-- by which I mean time spent getting the infrastructure and agreements together by which we can get the work done, then maintaining the status of those things over time. This is not a huge deal on tiny projects. But for testing, on a project of any significant size, I keep seeing testers with no coherent test strategy under too much pressure to think about tomorrow.

    Shallow testing does not require foresight and preparation. But deep testing does, and that's where you will find it necessary to have activities recognizable as management.

  6. Interesting post, Paul. I'm not sure your title really reflects your argument, though. There's a difference between "Test Management" and managing testing (at least there should be!) and that's not clear in your title.

    "Test Management" as you define it here is all about so-called processes and systems, human or otherwise, that go under the guise of test management. I agree with you that these are superfluous and actually detrimental to delivering value. So, I would argue, is a lot of what is called Project Management -- a bunch of PMI-propagated systems that are more about controlling cost than delivering value, which often gets lost in the morass of reports and control mechanisms. (A couple of good articles about this are Tom DeMarco's "Software Engineering:
    An Idea Whose Time Has Come and Gone?", and Sylvain Lenfle/Christopher Loch's "Lost Roots:
    How Project Management Came to Emphasize
    Control Over Flexibility and Novelty".)

    I do believe, though, that there is a place for managing testing, as there is for managing projects.

  7. This is, BTW, essentially the same thing I argued in my keynote at STPCon in October 2011, "Are you managing testing - or The Test Process". (also at KWSQA just before)

    The more of us who get out there and say this -- in blogs like this, in mainstream publications and at mainstream conferences -- the greater the possibility that we can begin to make a difference and discredit these pernicious practices.

  8. I think I agree with what you're saying, but I'm not sure I agree with the way you've expressed it.

    Management per se is not bad. It can be done well or badly, tightly or loosely; managers can inspire or demoralise. There always needs to be some form of management to see the big picture; to coach, direct and lead testing by personal example; to report, co-ordinate and assign work; to deal with other managers and stakeholders; to protect the testers from crap so that they can be productive. None of that is bad, and it is all test management.

    We need to be clear about the distinction between managing testing and managing the testing process. It's a vital distinction that applies equally to other disciplines in software development. The problem is that it's especially easy to blur that distinction where testing is concerned. Managers can spend all their time managing the process while doing nothing that could honestly be called testing. That's maybe because testing is poorly defined and understood. Sure, the thought leaders of testing are crystal clear about what they are doing, but do they influence a majority of testers? I'm not sure. They certainly influence those who are listening and whose minds are open. But there are huge regiments of testers working away in traditional environments, drawing up detailed plans and scripts, then plodding their way through them in test execution.

    Managing the activities of testers working like that has very little to do with testing. The managers in such companies often don't really get testing. They confuse the process with the reality, seeing testing as a process that must be followed, and ticked off on the project plan before the application goes live. Project managers don't want test managers to manage testing, they want them to manage the testing process so that the project can be tracked neatly through the MS Project plan.

    Sadly testing is an activity which it is possible to totally fake (as James Bach memorably put it). Test managers can run through the process entirely plausibly, without any real testing. I find it hard to imagine that happening with other IT disciplines. The folly would be immediately apparent. That's why we need to be clear about what test management is, and angry when managing the testing is confused with managing the process.

    Phil Kirkham is right. I too have known that feeling of futility and despair when I realised that I was adding nothing to the real testing even though I was the test manager. I was spending most of my time producing reports for people who didn't really need them, didn't understand them and certainly didn't understand the reality. If the reports had been scrapped all that would have been missed was the chance to tick the box "report produced".

    I had a great discussion with Fiona Charles about this a couple of years ago at the Test Management Summit in London. Fiona worked it up into a marvellous talk for STPCon. http://www.quality-intelligence.com/documents/Are%20You%20Managing%20Testing%20or%20The%20Test%20Process_Fiona%20Charles.pdf

  9. I prefer to use the term Test, not QA, as the latter implies that testers are there to verify a product's quality, which we most definitely are not. We verify that the product works for the tests we have performed, and that is all.

    Some of this post I agree with, but you still need to have a process that is managed - managed in a way that doesn't hinder the work, with no processes for process's sake.

    Test Case Management tools are essential, in my opinion. Without them, how can you know what's been tested and what hasn't, especially when you have a large team all testing the same product? How do you prioritise what needs to be tested first?

    Yes, I think it's very true that these shouldn't be used to give metrics on Quality or coverage, but they do need to be used to efficiently manage the testing phase.

    I've seen mind maps used, but with the products we test these really wouldn't fit the bill due to the massive amount of functionality that has to be tested.

  10. James & James,

    I am neither attacking Testing, nor management in general. Management is an important role/function within a company as it (done well) helps people to work together better. This is important if the team is to deliver something of value to their customers.

    The main idea I am trying to get across here is the false 'best practice' of using testing as an indicator of customer value. This is about more than invalid/misleading metrics like test case counts. Sometimes stakeholders (or even customers) may demand to see test cases as a demonstration of good quality - more false indicators. (I wrote about such a time in my Tea Time with Testers article last April.)

    Let's look at the big picture and what everyone on the team contributes to Quality. There is no Test Management system that can provide all this info, although it should contribute to part of the picture (assuming the testing is done well and also contributes something of value).

    This is a synthesis problem. I have a thought from Russell Ackoff in my head that may express it better, but I currently lack the exact words.

    So, no, I am not against teams using some system/tool to help communicate and collaborate within their team. An ALM tool vendor that packages a Test Case Management tool as a "Quality" Management piece is doing it wrong. They are doing their clients and our industry a disservice.

    Saying that you will assure Quality through [only] Test Management is wrong.

  11. Thanks for the follow up Paul. I think we're pretty much in agreement.

  12. So we're all pretty much in agreement that testing and quality assurance are not the same thing, and that test management isn't counting tests run and defects raised.

  13. I believe it's all about balance. What Mohinder says about measurements being the illusion of control and knowledge is an interesting point. Measurements are part of the picture. Feelings, instinct and intuition are other parts of the picture that no ALM tool will capture. But to be truly knowledgeable and in control you have to take and analyse data from many different sources.

    I like the analogy of the car. You could be one of the best drivers in the world. Yet in different conditions, different roads and even different cars you're still going to look at the dashboard to check the instruments and confirm that you're driving within the speed limit. Yet conversely if you're always looking at the dashboard and monitoring your speed to make sure you're bang on the speed limit you could well be ignoring other obvious factors like the road being wet (which isn't measured on the dashboard).

    Ultimately as a test manager you want to be taking information from many sources. One of those sources may well be a test management tool. Yet you better not focus on that completely at the expense of indicators like instinct and intuition. Some of the best developers I've worked with talk about code not feeling right. And some of the best testers I've worked with have the feel for identifying bugs. All part of getting the right balance to ensure a successful project.


  15. A very interesting article on testing management. Thank you for posting.

  16. Perhaps. I believe that people and their mindset matter more than processes and tools. We can come up with the right processes and tools we need if we have the right people collaborating with the right mindset. You can build high quality things without a single documented process. And I'm not talking about a Cowboy/Hero culture either.

    The vast majority of current IT/Development cultures are broken. Test processes exist as a kludge/band-aid solution to the broken relationships and culture. If we fix the culture we shouldn't need those processes anymore and our tools will be different.