Where to Measure Throughput in the Sprint Backlog

August 17, 2020

With the launch of the Kanban Guide for Scrum Teams in 2018, its four flow metrics have gained popularity in the Scrum community. The guide has given Scrum teams a new perspective on how to manage the flow of product backlog items (PBIs) through their sprint backlog all the way to a production environment.

One of those metrics, throughput, is defined as “the number of work items finished per unit of time”. Because this definition focuses on counting items, it doesn’t say where in your workflow throughput should be measured. I’ve found this can become a problem for Scrum teams who value and apply a Definition of ‘Done’.
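To make the definition concrete, here is a minimal sketch (my own illustration, not from either guide) that computes weekly throughput from the dates PBIs were finished; the sample dates are hypothetical:

```python
from collections import Counter
from datetime import date

# Hypothetical finish dates, one entry per completed PBI.
finished = [
    date(2020, 8, 3), date(2020, 8, 4), date(2020, 8, 4),
    date(2020, 8, 11), date(2020, 8, 12), date(2020, 8, 14),
]

# Throughput per ISO week: how many items were finished that week.
throughput = Counter(d.isocalendar()[1] for d in finished)

for week, count in sorted(throughput.items()):
    print(f"Week {week}: {count} PBIs finished")
```

The open question, of course, is which event counts as “finished”, and that is what the scenarios below explore.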

According to the Scrum Guide, the Definition of ‘Done’ “is used to assess when work is complete on the product Increment”. So should throughput only be measured when PBIs are in production and completed according to the Definition of ‘Done’? Or should it be measured when the development team hands the PBI off to another team or group?

Since neither guide offers guidance on this, I’d like to share three scenarios I’ve encountered in the past. In this post, I describe where I’ve measured the throughput of PBIs based on when they move outside the development team’s realm of accountability.

Close to production every time

In my first scenario, we have a Scrum team that ships done PBIs to a production environment almost daily. Their sprint backlog workflow looks something like this:

To better understand their sprint backlog workflow, here’s a short description of each column:

  • Ready: PBI is ready to be worked on during the sprint. It either supports the sprint goal or is deemed important by the development team.
  • In Development: PBI is coded.
  • In Review: Engineering tasks like code review through a pull request are done here. Code is merged into the main branch.
  • Pending deployment: A staging column where PBIs sit from a few minutes to a few hours. A CI/CD tool set might be in the picture to help automate this process.
  • Done: PBI is in the production environment.

In this situation, their Definition of Done will be fairly close to the one from the Scrum Guide. I would recommend they measure their throughput on the “Done” column.

Another team moves completed work into production

In the second scenario I’ve encountered, we have the same workflow as above, but with an important difference in the “Pending deployment” column. In this organization, another team (e.g., the infra team) is responsible for moving the Scrum Teams’ work into the production environment.

So when a PBI is moved to “Pending deployment”, it actually means another team will take it and move it to the organization’s production environment. Scrum Teams build standardized “packages” which are then moved into production at a scheduled date and time by the other team.

In this situation, I would recommend measuring throughput on PBIs coming out of the “In Review” column. Once PBIs are in “Pending deployment”, the Scrum Team has very little control over them. Excluding the “Pending deployment” column from the Sprint Backlog could also be an option, to keep the focus on the PBIs the Scrum Team controls. On the other hand, the Scrum Team might want to keep an eye on PBIs in “Pending deployment”, as they can inform their Product Owner on the status of PBIs being moved to production.
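One way to keep this decision explicit is to parameterize the column where throughput is counted. The sketch below is my own illustration, assuming a simple log of column transitions; counting entries into “Pending deployment” is equivalent to counting exits from “In Review”:

```python
from collections import Counter
from datetime import date

# Hypothetical transition log: (pbi_id, column entered, date).
transitions = [
    ("PBI-1", "In Review", date(2020, 8, 5)),
    ("PBI-1", "Pending deployment", date(2020, 8, 6)),
    ("PBI-1", "Done", date(2020, 8, 20)),
    ("PBI-2", "In Review", date(2020, 8, 10)),
    ("PBI-2", "Pending deployment", date(2020, 8, 12)),
]

def weekly_throughput(transitions, measured_at):
    """Throughput per ISO week, counted when a PBI enters `measured_at`.

    For the first scenario, use measured_at="Done"; for this one,
    "Pending deployment" marks the moment a PBI leaves "In Review".
    """
    return Counter(
        d.isocalendar()[1]
        for _, column, d in transitions
        if column == measured_at
    )

print(weekly_throughput(transitions, "Pending deployment"))
```

The same helper applies to the third scenario below: only the `measured_at` argument changes, which keeps the team’s choice of measurement point visible rather than buried in the tooling.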

More work downstream 

I’ve found my third scenario to be more common in large organizations. In this situation, more work happens on a PBI after it is coded by the Development Team; in other words, there is more work done on the PBIs downstream. Examples I’ve seen in the past are:

  • A quality assurance team tests the PBIs developed by the Scrum Team.
  • Acceptance/business tests done by end users and/or SMEs to approve PBIs.
  • The PBIs are part of a larger integration, which happens quarterly, for example.
  • Training is done on a whole set of PBIs before they are moved to the overall pool of users.

In this third scenario, I would again recommend measuring throughput on PBIs coming out of the “In Review” column. In my opinion, this is where the Sprint Backlog of the Development Team ends; after that point, their work is complete. Since their PBIs can sit for weeks waiting for downstream work before going into production, measuring the Scrum Team’s throughput on PBIs moved to production doesn’t make sense.

Conclusion

The goal of this article was to help readers who struggle with measuring throughput at the right place in their workflow. As we’ve seen in the scenarios above, I recommend measuring a Scrum Team’s throughput at the point where their work ends. This can be confusing when we apply the Definition of Done, as it invites the Scrum team to assess when their work is complete on the product increment, which can mislead people into always measuring throughput when PBIs are moved into production.

While the DevOps philosophy brings downstream work within the Scrum Team’s realm of action, I believe that because of the legacy systems in organizations, there will be cases where the Scrum Team has to let go of its PBIs before they reach production. In those situations, I find it is important to have a common understanding of where a Scrum Team’s throughput should be measured.

If you’re interested in learning more about Kanban in another context, visit KanbanGuide.org.
