
Dealing with "post-release" sprint capacity: best practices

Last post 09:21 pm November 22, 2017 by Raphy Abano
5 replies
07:42 pm October 5, 2017

Hello everyone! I just wanted to get people's thoughts on this subject.

So currently at my company, we're running into instances (the 2nd in my case since I joined) where, in the sprint after a major release, planned work tends to lose precedence to what comes in at runtime. Mind you, the runtime work consists of customer issues stemming from the release. And mind you as well, our total velocity the last time this happened was still pretty good, except the ratio of planned to unplanned work was something like 1:3.

We have sort of established that we accommodate these customer issues at runtime (especially if they are urgent/critical), as we don't want our product to be impacted by them. And so far I've seen that, as long as 80% of what we planned in a sprint is done-done, things are okay.

However, given this trend, I foresee the possibility that planned work gets compromised by unplanned work in the future (which I also feel shouldn't be the case if these issues don't recur in regression). Given that, should we forecast below our average velocity after a release to compensate, so that we can give way to customer issues while still delivering a minimum viable increment at the end of the sprint?
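For what it's worth, one way to make that reduced forecast concrete (a minimal sketch with made-up numbers - the function name, the 40-point velocity, and the use of the raw historical ratio as the buffer are all just illustrative, not a prescribed method):

```python
# Hypothetical post-release capacity forecast: reserve part of the team's
# average velocity for unplanned customer issues, sized from the ratio of
# planned vs. unplanned work observed in a past post-release sprint.

def post_release_forecast(avg_velocity, planned, unplanned):
    """Return (planned_capacity, buffer) in story points.

    planned/unplanned: points of each kind completed in a past
    post-release sprint (e.g. a 1:3 planned-to-unplanned ratio).
    """
    unplanned_share = unplanned / (planned + unplanned)
    buffer = round(avg_velocity * unplanned_share)
    return avg_velocity - buffer, buffer

# With an average velocity of 40 points and a past 1:3 ratio, this
# plans only 10 points and holds 30 in reserve for customer issues.
planned_capacity, buffer = post_release_forecast(40, 10, 30)
```

In practice you'd probably dampen the buffer rather than reserve the full historical share, especially as the release-caused issues taper off.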

I'm curious to hear what you guys have seen as best practices for this. Thanks!


08:18 pm October 5, 2017

How long on average do you go before releasing software and getting customer feedback?  Might a shorter feedback loop work better, where you get a smaller number of 'runtime' issues with each small incremental release?

How long are your sprints?  Ask the team if one-week sprints might work, allowing the 'runtime' issues to go into the backlog and get prioritized sooner for the next Sprint.  Can the Product Owner wait one week?

Have you talked as a team about the root cause of these 'runtime' issues?  By 'runtime' work, are you referring to defects?  If they are defects, are you looking to improve your Definition of Done at each retrospective, and starting to look at incorporating XP practices?

Just some ideas.  All the best,

Chris


08:28 pm October 5, 2017

Is all of the work - whether planned or unplanned, project or runtime - accounted for on the Product Backlog and managed and prioritized by the Product Owner?


10:32 pm October 5, 2017

Thank you, Chris and Ian, for your feedback. To answer your questions:

Is all of the work - whether planned or unplanned, project or runtime - accounted for on the Product Backlog and managed and prioritized by the Product Owner?

The PO does review the work in the Product Backlog (as well as unplanned work that comes in from our Internal Testing) and sets a Sprint Backlog Item (SBI) Ranking (usually an abstracted number).

When we look at our Sprint Backlog during the morning Scrum, we review whether anything new came in and assess its SBI rank against the planned work in the Sprint to decide whether to pick the item up immediately, shelve it for later, or let it slide to the next sprint. My teams are pretty good at negotiating these decisions with the PO.

For runtime work reported by our operations and customer support teams, we tend to pick it up immediately to provide a quick response to customers.
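The daily triage described above could be written down as a simple rule (a hypothetical sketch - the function name, parameters, and the convention that a lower rank means higher priority are made up for illustration):

```python
# Hypothetical sketch of the morning-Scrum triage: compare a new item's
# SBI rank (lower number = higher priority) against the lowest-priority
# planned item and the team's remaining capacity in points.

def triage(item_rank, lowest_planned_rank, remaining_points, item_points):
    """Decide what to do with a newly arrived Sprint Backlog item."""
    if item_rank < lowest_planned_rank and item_points <= remaining_points:
        return "pick up now"          # outranks planned work and still fits
    if item_points <= remaining_points:
        return "shelve for later"     # fits, but planned work comes first
    return "slide to next sprint"     # no capacity left this sprint
```

A critical customer issue would arrive with a rank that beats everything planned, so it gets picked up immediately; lower-priority arrivals wait.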

Recently, we decided to have our QA review the work from operations/support and compare it against our existing test cases - they act as a buffer for development.

How long on average do you go before releasing software and getting customer feedback?  Might a shorter feedback loop work better, where you get a smaller number of 'runtime' issues with each small incremental release?

This is something we're currently working on. The first time this large post-release workload happened, it came off a release cycle of about 3 months (a far better cycle than the previous one, I'm told). This latest instance was a bit better at 2 months, so we're getting shorter release cycles.

How long are your sprints? Ask the team if one-week sprints might work, allowing the 'runtime' issues to go into the backlog and get prioritized sooner for the next Sprint. Can the Product Owner wait one week?

We've been running 2-week sprints - we don't mind this cadence for handling "runtime" issues, and the PO understands that, by virtue of our SBI rank usage, some work won't reach the done state.

Have you talked as a team about the root cause of these 'runtime' issues? By 'runtime' work, are you referring to defects? If they are defects, are you looking to improve your Definition of Done at each retrospective, and starting to look at incorporating XP practices?

So the runtime work consists of work that spawned additional work (e.g. user stories that needed to be broken down, bugs encountered while working on user stories), findings from internal testing (either acceptance testing or testing a release), and, as mentioned above, what was reported by our operations and support teams.

We have talked about how we can improve our notion of being "done", such as:

  • QA intercepting issues reported by ops/support to see whether they have already been resolved (feeding into user/customer training) or whether there are gaps in the releases (gaps in our code); doing this would hopefully reduce externally-found issues and improve our test coverage in each release
  • In addition to that, we're measuring code coverage as well
  • For complex work, we've also used pair programming/peer review


05:46 am October 6, 2017

The PO does review the work in the Product Backlog (as well as unplanned work that comes in from our Internal Testing) and sets a Sprint Backlog Item Ranking (usually an abstracted number).

Shouldn’t a Product Owner be capturing and managing all remaining work on the Product Backlog, rather than trying to set priorities on a Development Team’s Sprint Backlog?


09:21 pm November 22, 2017

Shouldn’t a Product Owner be capturing and managing all remaining work on the Product Backlog, rather than trying to set priorities on a Development Team’s Sprint Backlog?

Just to update on this.

As of now, we've limited re-prioritization of the Sprint Backlog: "runtime" work is placed on the Product Backlog to be planned/prioritized in the next or a future sprint. And when I say limited re-prioritization, only if an issue is critical enough that the customer is in flames does the PO ask for it to be brought into the sprint.

