
Velocity alone doesn’t measure success

May 29, 2020

Neither does Throughput


Many scrum teams use a metric called velocity as their primary metric. Velocity is commonly defined as the number of story points you finish over time, usually measured by sprints.

Scrum teams assign story points to pieces of work based on their relative complexity, often using the Fibonacci sequence. A simple piece of work might be assigned 1 story point, while a very complex piece could be assigned 13 points or more. A team’s velocity then represents how much complexity reached their definition of done in a given sprint. Measured from sprint to sprint, it becomes a sort of gauge of success: if it stays the same or goes up, that’s great; if it goes down repeatedly, there’s a problem that needs to be investigated.
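As a concrete illustration (with made-up sprint data), velocity is simply the sum of story points that reached done in each sprint:

```python
# Hypothetical sprint data: story points of items that reached "done" per sprint.
sprints = {
    "Sprint 1": [1, 3, 5, 2, 8],
    "Sprint 2": [5, 3, 3, 1],
    "Sprint 3": [8, 2, 1, 1, 3, 5],
}

# Velocity = total story points completed in each sprint.
velocity = {name: sum(points) for name, points in sprints.items()}
print(velocity)  # {'Sprint 1': 19, 'Sprint 2': 12, 'Sprint 3': 20}
```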

Teams using Throughput, another rate metric that counts the number of individual work items delivered rather than their relative complexity, do the same. They measure throughput across time periods, often sprints, and use it to determine whether their success is increasing, holding steady, or waning.
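The contrast with velocity shows up clearly in code: throughput counts finished items instead of summing their points (again, with invented item IDs):

```python
# Hypothetical data: work items completed in each sprint.
completed = {
    "Sprint 1": ["HR-101", "HR-104", "HR-107"],
    "Sprint 2": ["HR-102", "HR-105", "HR-106", "HR-110"],
}

# Throughput counts finished items, ignoring their relative complexity.
throughput = {sprint: len(items) for sprint, items in completed.items()}
print(throughput)  # {'Sprint 1': 3, 'Sprint 2': 4}
```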

No single metric defines success

Unfortunately, success is more complicated than that, and it can rarely, if ever, be measured with a single metric. Most teams and organizations don’t just want to deliver things. They want to deliver good-quality things, deliver them quickly, and deliver them regularly. If you don’t see a distinction, let me share the story of a team I started coaching in 2019.

This HR team came to me with a (perceived) Scrum process in place. They were using 2-week sprints for planning and retrospectives. But they were still completely overwhelmed and unhappy. Though it could have been higher, their throughput at least looked fairly consistent.

Throughput Run Chart with a pretty consistent running average (via ActionableAgile for Jira)


To dig deeper, we looked at their Scrum board. The good news is that they understood their workflow. Their board had columns representing their entire workflow from planning to delivery. The bad news? It was chock full of work items. They were close to the end of their sprint yet many items were still in the sprint backlog. Despite the obvious fact that many of the items planned for the current sprint would not be finished by the beginning of their next sprint, they had already filled up the planning column for their upcoming sprint.

Looking from multiple angles gives you a well-rounded picture

In the end, it was clear that they suffered from a “too much WIP” (work-in-progress) problem. They started so much work that everything took much longer to finish than it would have needed if they had just started less work at one time.
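This relationship between WIP and how long work takes is often summarized by Little’s Law, which (for a stable process) ties average cycle time to average WIP and average throughput. A quick back-of-the-envelope sketch with illustrative numbers, not this team’s actual data:

```python
# Little's Law (stable process): avg cycle time = avg WIP / avg throughput.
# Illustrative numbers, not real team data:
avg_wip = 30              # items in progress at any given time
throughput_per_day = 1.0  # items finished per day

avg_cycle_time_days = avg_wip / throughput_per_day
print(avg_cycle_time_days)  # 30.0

# Halving WIP (with throughput held steady) halves average cycle time:
print((avg_wip / 2) / throughput_per_day)  # 15.0
```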

This was something we noticed almost immediately once we looked at a metric called Cycle Time: a measure of how long a piece of work takes to complete (you can define the start and end points for this metric yourself). Using a chart called a Cycle Time Scatterplot, we could see that 85% of their work took up to 35 days to complete, while the remaining 15% took even longer!

Cycle Time scatterplot with 85% line at 35 days (via ActionableAgile for Jira)


Now, remember their sprints were 14 days long. Houston, we have a problem! This team had unfortunately settled into a pattern of delivering a consistent number of work items — but they were very old work items. They hardly ever finished work items within the same sprint they were started in. By adding the Cycle Time metric to our arsenal we were able to get more insight into what was happening, find a problem, and start experimenting with solutions.
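To make the 85% line concrete, here is one way a cycle time percentile can be computed from start and finish dates. The item data is invented, and tools like ActionableAgile may calculate percentiles slightly differently; this sketch picks the smallest cycle time that at least 85% of items finished at or under:

```python
from datetime import date
import math

# Hypothetical (started, finished) dates for completed work items.
items = [
    (date(2019, 3, 1), date(2019, 3, 20)),
    (date(2019, 3, 4), date(2019, 4, 10)),
    (date(2019, 3, 5), date(2019, 3, 12)),
    (date(2019, 3, 8), date(2019, 4, 25)),
    (date(2019, 3, 11), date(2019, 3, 29)),
]

# Cycle time in days for each item, sorted ascending.
cycle_times = sorted((end - start).days for start, end in items)
print(cycle_times)  # [7, 18, 19, 37, 48]

# 85th percentile: at least 85% of items finished at or under this value.
idx = math.ceil(0.85 * len(cycle_times)) - 1
print(cycle_times[idx])  # 48
```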

In Summary

The moral of the story is that a single metric is misleading at best and immensely harmful at worst. If you optimize for one metric without keeping an eye out for unintended consequences, you can end up causing bigger problems than you had when you started.

When measuring team success, consider multiple aspects of success and establish a measure for each. Then look at them together, so that when one improves you can make sure it didn’t cause a negative impact elsewhere.


For the curious, here’s what came next for our HR team:

We identified three major things that would help this team improve their Cycle Time metric, and maybe even their Throughput metric too:

  1. Break work items down into smaller valuable pieces. Many of the items took so long because they simply had too many deliverables in them. Many process dysfunctions can be helped by breaking work down.
  2. Remove board columns that weren’t necessary. Every board column is a place to store more work-in-progress. There’s always a balance to be struck but it is ok to get rid of columns if they cause more pain than gain. This also meant we talked about not planning too early.
  3. Implement WIP limits. Using Velocity or Throughput as a planning tool already gives your sprint backlog a WIP limit, in that you cap the number of items you allow into a sprint. Within the sprint, however, also limiting the number of items you have in progress at one time often causes items to finish earlier and at a steadier pace throughout the sprint.
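As a rough sketch of what a WIP limit does inside a sprint, here is a toy pull policy (the item IDs and the limit of 2 are invented for illustration, not a specific tool’s behavior):

```python
# Minimal sketch of a WIP-limited pull policy within a sprint.
sprint_backlog = ["HR-201", "HR-202", "HR-203", "HR-204", "HR-205"]
in_progress = []
WIP_LIMIT = 2  # assumed limit on items in progress at one time

def pull_next():
    """Start the next backlog item only if it fits under the WIP limit."""
    if sprint_backlog and len(in_progress) < WIP_LIMIT:
        item = sprint_backlog.pop(0)
        in_progress.append(item)
        return item
    return None  # at the limit: finish something before starting more

print(pull_next())  # HR-201
print(pull_next())  # HR-202
print(pull_next())  # None -- blocked until an in-progress item finishes
```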

