Quality Agile Metrics

I was asked recently what metrics I would collect to assess how well an agile team is improving. I paused for a moment to scan through 12 years of research, discussion, memories and experiences with metrics on various teams, projects and companies (mostly failed experiments). My answer was that I presently acknowledge only one metric as meaningful: Customer Satisfaction.

We discussed the topic further and I elaborated on my experiences. Regarding specific "quality" metrics, I explained that things like counting test cases and bug-fix rates are meaningless. I also referred to the book "Implementing Lean Software Development" by Mary and Tom Poppendieck (which I highly recommend, BTW), which warns against "local optimizations" because they will eventually sabotage optimization of the whole system. In other words, putting a metric in place to optimize the Testing function doesn't mean the whole [agile] development team's efficiency will improve.

Quality and value need a whole-team approach. Specific measurements and metrics often lead to gaming of the system and to a focus on improving the metrics rather than on delivering quality and value. If the [whole] team is measured on customer satisfaction, then that is what they will focus on. I have long since stopped measuring individual performance on a team.

I haven't stopped thinking about the question though, so I put it out on Twitter this morning:
"Aside from Customer Satisfaction, are there any other Quality metrics you'd recommend in an #agile environment?"


Here are the responses I received:

  1. Churn or team turnover. (Real case: Product delivered on time, customer happy, whole team left.)
  2. Escaped defects, inbound support calls/emails
  3. Value created - Reference: "The Lean Startup" by Eric Ries (I'm reading it right now)
  4. Number of contributors to each story - not because the exact count is meaningful but because it encourages collaboration & review.
  5. Profitability of the project is one. Are the goals (whatever they may be) of the project met?
  6. Lines of code changed during regression. That can expose some severe problems.
  7. Production defects in the 15 days after 'Go Live'.
  8. Cost of rework due to requirement changes.
  9. Code churn as a measure of "quality" (e.g. System Defect Density) - Research, Code example, Discussion, and Sample Stats. (See the sketch after this list.)
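
As referenced in item 9, here is a minimal sketch of what an escaped-defect density calculation might look like. It is purely illustrative: the module names, defect counts and line counts are invented sample data, and Python is used simply as a convenient notation.

```python
# Rough sketch: "escaped defect" density per module, in defects per KLOC.
# All module names, defect counts and line counts are invented sample data.

modules = {
    # module name: (escaped_defects, lines_of_code)
    "billing":   (4, 12_500),
    "checkout":  (9, 8_200),
    "reporting": (1, 3_100),
}

def defect_density(defects: int, loc: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / (loc / 1000)

# Print the modules with the highest defect density first.
for name, (defects, loc) in sorted(
        modules.items(), key=lambda item: defect_density(*item[1]), reverse=True):
    print(f"{name:>10}: {defect_density(defects, loc):5.2f} defects/KLOC")
```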

Hm, almost a Top 10 list. I am happy with the responses and think many of them are worth exploring further to see what insights they provide. A cautionary note with all metrics: beware the impact they have on the team. If behaviour or performance starts to change in a negative way, STOP immediately!

One of the things I talked about in the conversation was the importance of Retrospectives in allowing the team to own their improvement activities. If the team uses these opportunities, their improvements should be observable over several iterations. I think a Happiness Index reading taken during Retrospectives might be an interesting indicator of the overall effectiveness of the improvement strategies employed.
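
Purely as an illustration of the kind of trend I would want to see, here is a minimal sketch that averages invented 1-5 happiness scores gathered at each Retrospective and shows the change from one iteration to the next:

```python
# Sketch: average Happiness Index per Retrospective, with iteration-over-iteration trend.
# The 1-5 scores are invented sample data; one inner list per Retrospective.

from statistics import mean

retrospectives = [
    [3, 4, 3, 2, 3],  # iteration 1
    [3, 4, 4, 3, 3],  # iteration 2
    [4, 4, 4, 3, 4],  # iteration 3
]

previous = None
for iteration, scores in enumerate(retrospectives, start=1):
    average = mean(scores)
    trend = "" if previous is None else f" ({average - previous:+.2f} vs. previous)"
    print(f"Iteration {iteration}: average happiness {average:.2f}{trend}")
    previous = average
```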

What do you think? Anything else I should consider?

1 comment:

  1. Currently I'm thinking about the same thing.

    I am splitting it up into internal (team) and external (management) metrics.

    The goal is to improve. The internal metrics are not interesting to management but are needed for development. Indicators I use are test coverage and cyclomatic complexity.

    External: defects found after release, and an indicator based on the internal metrics.
