
Quality Metrics - Relationships between Agile Velocity and # of incidents

Last post 05:42 pm December 2, 2019 by Daniel Wilhite
9 replies
01:23 pm November 29, 2019

Hi There,

We've been working in DevOps mode on a product for around 30 sprints, and we were asked to provide a sustainability quality metric for the Scrum team.

We measured the number of incidents and tested it against different variables to see whether we could find a correlation worth measuring.

We tested it against the number of features, the velocity of the sprint, and the velocity of the previous sprint. The velocity of the sprint had a moderate correlation with the number of incidents.
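For reference, the check was roughly of this shape. This is a minimal sketch with invented numbers, not our actual analysis, and it assumes numpy and scipy are available:

    # Minimal sketch of a velocity-vs-incidents correlation check.
    # The per-sprint figures below are invented for illustration.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(42)
    velocity = rng.integers(20, 60, size=30)   # story points completed per sprint
    incidents = (velocity * 0.1 + rng.normal(0, 1.5, 30)).clip(0).round()

    r, p = pearsonr(velocity, incidents)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")
    # |r| around 0.3-0.5 is commonly read as "moderate", but with only
    # 30 sprints the confidence interval around r is quite wide.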

Before we go further, I want to ask the community: have you heard of a KPI or a study that measures incidents for a product and their relationship with the Scrum team's volume of work?

 

Thanks


07:18 pm November 29, 2019

We measured the number of incidents and tested it against different variables to see whether we could find a correlation worth measuring.

Have incidents been correlated in any way to the Definition of Done? If not, why not?


12:44 am November 30, 2019

Has technical debt reduced over time?

Is there a correlation between the number of Done increments (going up) and the number of bugs (going down)?

There are a number of ways to measure this, but the goal should be to correlate the number of Done increments with quality, with usage, with customer feedback, with sales/subscriptions, and with return on investment. Keep your charts and reports simple. Don't try to oversell to leadership.

Always include the most important metric. The customer!


11:32 am November 30, 2019

One metric that we use at my company is the defect rate. We use it for bugs, but you could use it for incidents.

We cannot tie it to velocity, as our teams don't use story points; but even if they did have a velocity, we would still choose to base the rate on throughput. I won't explain why here, but the argument against using velocity for anything other than planning is well documented, and I recommend you look into it.



For us, the defect rate for a given time period is the number of reported bugs divided by the number of items deployed to production.

Given the nature of our product, we can expect some noise in the metric from legacy bugs and known issues, and obviously some bugs are not noticed immediately after deployment; but the metric does show trends and variations, which can drive useful inspection and adaptation.

The advantage of a rate is that, in general, we expect a proportional increase in defects as we get more done, and a rate takes account of that, whereas just measuring the absolute number of reported bugs would misleadingly suggest reduced quality.
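A tiny illustration of that point, with invented figures rather than our real data:

    # Invented figures showing why a rate can tell a different story
    # than an absolute bug count.
    periods = [
        # (items deployed to production, bugs reported)
        (10, 2),   # rate = 0.20
        (25, 3),   # rate = 0.12
    ]
    for deployed, bugs in periods:
        print(f"deployed={deployed:3d}  bugs={bugs}  rate={bugs / deployed:.2f}")
    # Absolute bugs rose (2 -> 3), which looks like worse quality,
    # but the rate fell (0.20 -> 0.12): fewer defects per deployed item.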

Of course absolute numbers do have an impact on the business too (e.g. time spent on customer support, or managing the backlog), so we measure them as well, but not as a quality metric.

 

After introducing the defect rate, we have seen it drop dramatically. Some of the reasons were obvious, such as it driving people to look for underlying trends and resolve root causes; but interestingly, it also led to useful discussions about what counts as a bug.

We set more sustainable expectations about what we support, and have eliminated the waste of fixing things that aren't important enough.

It also encouraged engineers and the support team to challenge bug reports that were based on a misunderstanding of the product. Because bugs were treated as business-as-usual, functionality was often being improved beyond its original design without being subject to the normal consideration and prioritization by the Product Owner.


03:14 pm November 30, 2019

How does the team define "Quality"? How does the organization define "Quality"?

Is the organization looking for a quality sustainability metric for the product or for the Scrum team?

Do you think that defects or incidents fully reflect the trajectory of quality, even if there is a correlation?


06:50 am December 2, 2019

Answer to Ian Mitchell: The correlation has been measured between the 'DONE' velocity for a sprint and the number of incidents that occurred in that sprint.


06:58 am December 2, 2019

Thanks, Simon Bayer, for the comments. I understand velocity is not a good choice for being part of a KPI; however, that metric had the highest correlation within our data set (30 sprints).

Can you point me to an article or other resource where I can assess the disadvantages of using velocity as a metric?

 

Hi Aditya, quality from the stakeholders' point of view is the sustainability of the application.

The correlation between the volume of work and the number of incidents is positive: the higher the velocity, the higher the number of incidents.


04:05 pm December 2, 2019

Can you point me to an article or other resource where I can assess the disadvantages of using velocity as a metric?

Sure! Here are a few:

https://www.agilelearninglabs.com/2013/08/should-management-use-velocit…

https://metrictale.com/2019/02/12/why-its-a-really-bad-idea-to-use-velo…

https://www.scrum.org/resources/blog/agile-metrics-velocity

To be clear, I'm not advising you to stop using velocity as a planning metric.

Although I personally believe there are better options for planning, my strong advice is not to use it as a performance metric.


04:28 pm December 2, 2019

Hi Ozgur, Thanks for the clarification. 

May I point out something from your answer that may help you in your efforts? 

You have mentioned that the definition for "Quality" in your scenario is the sustainability of the application and not the Scrum team. Does it not imply that the stakeholder wants your team to deliver on one or more of the following quality parameters?

  1. Perfection
  2. Consistency
  3. Eliminating waste
  4. Speed of delivery
  5. Compliance with policies and procedures
  6. Providing good, stable product
  7. Doing/delivering it right the first time
  8. Delighting or pleasing customers (by meeting or exceeding agreed expectations)
  9. Total customer service and satisfaction

Like Ian, I would like to draw your attention to the fact that velocity (a work-estimation technique of the Scrum team) does not really reflect the quality of the product, even if there appears to be a strong correlation. Also note that correlation metrics are unitless and carry an inherent risk of misleading you, since here they are functions of estimates.

At the end of the day, customers and stakeholders do not care much whether velocity has improved over time, but they do demand satisfaction on at least one of the nine parameters I mentioned above. As Ian and Mark rightly mentioned, the right metric should incorporate the DoD plotted against other parameters you think are relevant to delivering one (or perhaps more) of the nine criteria.

05:42 pm December 2, 2019

Given the nature of your work, have you considered measuring the time it takes to get from incident creation to resolution? I feel that is the one thing I would want to know from my DevOps team. You could also look at the number of times you have to touch the same code. That points towards a potential for defects, or a candidate for refactoring.
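Here is a rough sketch of the resolution-time idea, with invented timestamps and only the standard library:

    # Minimal sketch: time from incident creation to resolution.
    # The timestamps are invented for illustration.
    from datetime import datetime
    from statistics import mean, median

    incidents = [
        ("2019-11-01 09:00", "2019-11-01 17:30"),
        ("2019-11-04 10:15", "2019-11-06 12:00"),
        ("2019-11-12 08:00", "2019-11-12 09:45"),
    ]
    fmt = "%Y-%m-%d %H:%M"
    hours = [
        (datetime.strptime(done, fmt) - datetime.strptime(opened, fmt)).total_seconds() / 3600
        for opened, done in incidents
    ]
    print(f"mean resolution time:   {mean(hours):.1f} h")
    print(f"median resolution time: {median(hours):.1f} h")  # median resists outliers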

Don't try to make the data fit a metric or a metric fit the data. Find a metric that makes sense to report given the reason you want to surface it, and then determine which data points best communicate your desired message.

