April 7, 2022

How to Measure Agile Maturity

[Image: agile maturity (image by de zaak.nl)]

When I coach, consult and teach classes, I often get questions about creating an agile maturity tool or defining agile maturity metrics. Many businesses use agile maturity metrics to better understand their current agile practices and to monitor their improvement. I understand the need for this kind of metric, but I have also noticed that the purpose and value of agile maturity metrics are often misunderstood. This article will help you find agile maturity metrics that bring value, and offers a free maturity tool to download.

Agile comes at a price

Going agile costs money. At first glance, it makes sense that we want to know the return on investment, and with that, the cost and possible cost reduction of going agile. It also seems sensible to monitor our progress.

ROI implies that we spend money (cost) and expect some kind of value in return. In fact, the cost part is not that interesting, and it is also complex to measure. As Taiichi Ohno once said: “Costs do not exist to be calculated. Costs exist to be reduced.” His reasoning is that calculating the cost of a certain initiative or department is itself a waste. Remember that an organisation is a complex system. It is an over-simplification to think we can easily allocate or calculate cost versus revenue pre- and post-agile. Reality is messy: people need time to learn, people leave, new people are hired, we start experimenting and allow for failure, our optimisation goals change, and so on. I don’t suggest we should ignore costs altogether; I think it is more valuable to look at cost at company level, not at the level of the individual initiatives we launch. In addition, I want to encourage you to focus on creating more value rather than on reducing costs. Cost is only expensive if the returned value is low.

From the perspective of investing in agile adoption, we might want to know when the adoption comes to an end, so we can lower or dissolve the funding for the agile transformation. Here’s some news: an agile transformation journey starts, but it never finishes. Introducing agile means introducing continuous improvement towards perfection. Agile adoption is not a project; it is a pivot, and it is successful if it never ends.

People also might want data on the progress of the agile programme. Measuring progress implies we know our end-state. With agile, the end-state is a mindset: a new way of doing what we do and how we do it. Progress toward such a goal is hard to capture in numbers. If there is any kind of metric to measure progress, it needs to monitor changes in behaviour and mindset.

Agile brings new behaviour and new roles. We need to create performance assessments for new roles and adapt existing ones to suit the new agile context. Pre-agile assessment tooling no longer applies. For example, in a pre-agile world it would make sense to monitor how well tasks are allocated to employees, or how timely timesheets are submitted. Such metrics no longer apply and may even hinder agile adoption.

Two Main Agile Metrics

Agility is a means to an end, not a goal. Measuring agility therefore boils down to two main questions:

  • Are we delivering more value? 
  • Are we improving the way we deliver value?

These questions are monitored using two metrics:

  • Customer satisfaction
  • Performance of the delivery process

If our product is any good, it will create a change in the behaviour of our customers. They will stop doing certain things because they start using our product to satisfy their needs. That is a measurable behaviour change. The data we collect this way is extremely valuable because it is objective and unforgivingly real. We measure the impact of our effort outside of the development system, ruling out any way to compromise the findings. Examples of such metrics are NPS, conversion rate, usage time, and so on.
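As an illustration (not part of the original article), here is a minimal Python sketch of how one of these metrics, Net Promoter Score, is calculated from raw survey scores. The data is hypothetical; the formula itself is the standard one: the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6).

```python
def net_promoter_score(scores):
    """Return the NPS (-100 to +100) for a list of 0-10 survey scores.

    Promoters score 9-10, detractors 0-6; passives (7-8) only dilute.
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical batch of 10 customer responses
responses = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10]
print(net_promoter_score(responses))  # 5 promoters, 2 detractors -> 30
```

The point of tracking such a number over time is not the absolute value but the trend: if the product keeps changing customer behaviour for the better, the score moves.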

Delivering successful products is essential, and it goes hand in hand with knowing how good we are at creating the product: our performance. I suggest resisting the urge to measure our performance as a cost. There are many useful metrics available, such as speed, quality and predictability, that monitor our performance. A word of caution is needed when deciding which metrics are valuable and which are not. For example, velocity is not suitable for comparing team performance. It can be a valuable metric at team level, intended for the team to monitor its own speed, but velocities do not add up to give you a number for your organisational speed.

Some suggestions for useful metrics: cycle time, release frequency, product index, innovation rate, etc. For a more extensive explanation of sensible metrics, have a look at the Evidence-Based Management guide. Or refer to an earlier blog post on metrics that can help you decide how and what you want to measure.
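To make one of these suggestions concrete, here is a small Python sketch (my own illustration, with made-up timestamps, not a tool from the article) of measuring cycle time: the elapsed time between starting and finishing a work item, averaged over recent items.

```python
from datetime import datetime
from statistics import mean

def cycle_time_days(started, finished):
    """Days between the start and finish dates (ISO format) of one work item."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(finished, fmt) - datetime.strptime(started, fmt)).days

# Hypothetical (started, finished) dates for three recent work items
items = [
    ("2022-03-01", "2022-03-04"),  # 3 days
    ("2022-03-02", "2022-03-09"),  # 7 days
    ("2022-03-07", "2022-03-09"),  # 2 days
]

avg = mean(cycle_time_days(s, f) for s, f in items)
print(avg)  # 4.0
```

Unlike velocity, cycle time is expressed in real units (days), so a downward trend is meaningful beyond a single team's context.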

Measuring how well we perform in delivering value to the customer also serves as a metric for organisational change. How? If it takes multiple sprints and 16 hand-offs to ship an integrated product, we can monitor how we are doing in trying to deliver that integrated product without hand-offs in a single sprint. If the number of hand-offs for a team goes down, its ability to deliver Done goes up, which is a measure of organisational improvement.

Measuring Agile Maturity

In addition to the two main metrics mentioned above, what would be a good reason to measure our level of agile maturity? What value does it bring to know how agile we are?

Knowing the agility level of people is important because it will show their ability to act on the results delivered by the two main agile metrics.

Empiricism is about inspection and adaptation. In other words, collecting data with a metric only makes sense if we do something with it. We need a system that acts on the data that the metrics provide. This is essential, and exactly the “thing” most organisations lack when they “go agile”. We need to have metrics in place that measure the right things (see above) and in addition to that we need an organisation that knows how to interpret the collected data and acts on it. 

To understand how agile we are, we need to collect information about behaviour. The best data for our purpose is qualitative. Why? We cannot express behaviour (change) very well in numbers. 

Manager: “What is the agile maturity of team A?”

Scrum Master: “42!”

And we all know numbers are less reliable than we’d suspect. 

Torture numbers, and they will confess anything

GREGG EASTERBROOK

We should measure without effort, without placing a burden on the people who do the work. This poses a challenge, because answering qualitative, open questions takes time and can be experienced as wasteful or hindering. My first suggestion to dampen this effect is not to overdo the frequency of measuring. We are measuring organisational change; there is no need for a daily update. Secondly, I suggest integrating data collection into the existing development process by running the questionnaires in Sprint Retrospectives every now and then.

Metrics must be in service of the people who do the work. The team members should perform the measurements themselves. If they cannot or do not want to, they are not ready. If people don’t see the value of measuring agility, Scrum Masters should bring the value of maturity metrics to the table.

Scrum Team members should cross-examine all Scrum roles. I also see value in cross-team reviews of the results to increase learning. Another good option is to have agile coaches help collect and interpret the results and drill down to find root causes.

When leaders wish to understand what level of agility their organisation has, I suggest they use the practice of “Going to the Gemba” or “Gemba Sprints”. 

To monitor agile maturity, people first answer the questions individually. Then the whole team reviews the answers with a peer. By doing this at regular intervals, a good understanding will emerge of what needs to be improved to achieve more agility.

The outcomes are always useful, but for best results, an experienced agile practitioner should moderate the review sessions.

Examples

In the Excel files included, you will find examples of questions I propose for measuring your (teams’) agility. Thus far I use three categories: questions for Developers, Scrum Masters and Product Owners. This concept can be improved by creating more questionnaires for the different perspectives that apply in your context. The questions I propose in the sheets are a starting point; I encourage you to improve them over time and tailor them to your needs.

Product owner self-assessment

Customer focus

Give a few examples of feedback you have received from stakeholders or customers.

When was the last time that end-users or customers attended the Sprint Review?

Give a description of the course (Agenda, highlights) of your last Sprint Review.

(… more in the download)

Developer self-assessment

Collaboration

How are working arrangements made in this team?

When was the last time there was a conflict/misunderstanding in this team? What was it about? Describe how you dealt with it and how you solved it.

When do you give each other feedback?

How many people usually work on a task at the same time?

(… more in the download)

Scrum master self-assessment

Scrum process

What is inspected in each of the Scrum events and adjusted if necessary?

For each event, give an example of adjustments that have recently been made.

How is it determined how often refinement is needed?

(… more in the download)

Downloads: 

Product Owner

Scrum Master

Developers