
We're failing to deliver the release on time and it's not being deployed

Last post 03:29 pm October 6, 2021 by Daniel Wilhite
9 replies
01:00 pm October 5, 2021

For the 4th release in a row we have not managed to cut the release by the end date; it has rolled on for weeks due to quality issues.

We have not deployed the last 3 releases to customer either.

What should a Scrum Master do?

 


01:35 pm October 5, 2021

The Scrum Master should get some more information about what's going on:

  • What is the relationship between release and your Sprint cadence? Are you supposed to be releasing during a Sprint or at the end of a Sprint or every couple of Sprints?
  • What types of quality issues are being found? Is work not getting to a Done state because of issues found? Is a stakeholder performing some kind of acceptance testing on the product and rejecting it?
  • What is being done at Sprint Retrospectives to find and address the underlying problems causing these quality issues?

04:24 pm October 5, 2021

Everything that @Thomas Owens said. This is not a Scrum Master's problem to solve, but it is one where the Scrum Master can play a pivotal part in facilitating the team's ability to solve it. You have a symptom of a problem: code is not being deployed to production. Now facilitate the fact finding to understand the root cause, then help the team determine their solution. This may be an iterative process, taking time to find the actual root cause. Use the Retrospectives and the Sprint Reviews to find it. It will most likely take time outside of those two events to figure it out. To put this into terms that the Developers can relate to, present it as "we have a bug in our process". The same types of techniques used to find a bug in software can apply to finding a bug in a process.

Remember that Scrum Masters do not solve problems, they enable teams to solve problems. 


05:30 pm October 5, 2021
  • What is the relationship between release and your Sprint cadence? Are you supposed to be releasing during a Sprint or at the end of a Sprint or every couple of Sprints?

4 sprints in a release, releases are 3 weeks. We find issues after the final sprint, and this drags on for weeks. Issues are not always associated with recent changes. Testing does happen as part of development as well.

  • What types of quality issues are being found? Is work not getting to a Done state because of issues found? Is a stakeholder performing some kind of acceptance testing on the product and rejecting it?

Solution issues are found, e.g. a feature is added, then an issue is found with a scenario that is later deemed essential, or in an area that was not previously thought to be affected.

  • What is being done at Sprint Retrospectives to find and address the underlying problems causing these quality issues?

These issues are not discussed in reviews and retrospectives. They appear outside of the release, after the teams have already moved on to the next release; the issues come into the sprint and are dealt with without RCA.


05:46 pm October 5, 2021

Thanks @Daniel.

We do not have responsibility for deployment, which is a problem; it sits with another group. It can take some time for the release to be deployed, and in the interim we are on the next release, having to return to the previous one to fix the issues.


05:58 pm October 5, 2021

For the 4th release in a row we have not managed to cut the release at the end date, it has rolled on for weeks due to quality issues.

We have not deployed the last 3 releases to customer either.

What should a Scrum Master do?

The work is not Done, there are quality issues. Yet the team rocks on regardless and without empirical feedback: each Sprint is fake. Go to the Developers and shine a light on their accountabilities.


07:35 am October 6, 2021

The work is not Done, there are quality issues. Yet the team rocks on regardless and without empirical feedback: each Sprint is fake. Go to the Developers and shine a light on their accountabilities.

 

I agree that it's not Done.

 

I know that it's not Agile and it's not Scrum!



Developers in the Scrum Teams are doing a great job, testing as much as they can with the help of the testers. However, when the integrated code gets exercised properly it seems to run into issues. We need to pull this integration testing into the Sprint.

 


12:33 pm October 6, 2021

The cadence described (4 Sprints between releases, 3-week releases) doesn't make much sense. Do you mean 3-week Sprints with 4 Sprints between releases (so a release is every 12 weeks)? Or that you have 4 Sprints, hand something off to a downstream team, they work on it for 3 weeks while the teams go on to other things? However, this doesn't really change anything else that I have to say.

You say that testing happens as part of development and that the Scrum Teams are doing a great job testing. This doesn't align with finding issues in the integrated code. What is being tested, if not integrated code? Perhaps it makes sense to test the changes associated with a Product Backlog Item independently to confirm that they are correct, but as soon as that is done, that set of changes should be integrated into the rest of the product and then the necessary testing can be done to make sure that the system is properly integrated. That is, do at least some level of integration testing after completing each Product Backlog Item. Since you're in the software space, use test automation to shift some of the load of this repetitive testing away from people, so they can focus on things like exploratory testing or usability testing that are difficult to automate. If you can write automated tests as part of each Product Backlog Item, you can integrate those tests to grow and maintain the test suite with every change.
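To make the distinction concrete, here is a minimal sketch (the modules and function names are hypothetical, invented for illustration) of an automated test that exercises two components together rather than each in isolation. This is the kind of test that, run after each Product Backlog Item is integrated, catches the "issue in an area we didn't think was affected" class of defect before the release is cut:

```python
# Hypothetical example: two small components and one integration-style test.
# Each component could pass its own unit tests, yet only a test that runs
# them together verifies the contract between them.

def apply_discount(price: float, percent: float) -> float:
    """Pricing component: return the price after a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def cart_total(prices: list[float], discount_percent: float = 0.0) -> float:
    """Cart component: total the prices, delegating discounting to pricing."""
    return round(sum(apply_discount(p, discount_percent) for p in prices), 2)

def test_cart_and_pricing_integrate() -> None:
    # Exercises both components together, as release-level testing would,
    # but cheaply enough to run inside every Sprint on every change.
    assert cart_total([10.00, 20.00], discount_percent=10) == 27.00

if __name__ == "__main__":
    test_cart_and_pricing_integrate()
    print("integration test passed")
```

Tests shaped like this can run in a CI pipeline on every merge, so the suite grows with each Product Backlog Item instead of waiting for a release-end testing phase.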

When you find issues outside of the Scrum Team that are critical enough to have to work on immediately, take the time to do an RCA on them. You don't need to necessarily use a lot of formal tools, and five whys or three-legged five whys would be plenty, with the involvement of the right people. Figure out what the team missed that caused the downstream problems and make sure the team changes how they work. There are underlying reasons why a scenario is found to be essential even though the team didn't realize it in planning or development. There are also reasons why integration is leading to defects. Get to the root cause and fix it.
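A five-whys record does not need tooling, but writing it down gives the Retrospective a concrete artifact to act on. A minimal sketch (the class, its methods, and the example chain are all hypothetical, not from this thread) of capturing such a chain:

```python
# Hypothetical sketch: a simple record of a five-whys chain for one defect,
# so the root cause and the chain that led to it survive past the incident.
from dataclasses import dataclass, field

@dataclass
class FiveWhys:
    problem: str
    whys: list[str] = field(default_factory=list)

    def ask(self, answer: str) -> "FiveWhys":
        """Append the answer to the next 'why?' and return self for chaining."""
        self.whys.append(answer)
        return self

    def root_cause(self) -> str:
        """The last answer in the chain, or a placeholder if none yet."""
        return self.whys[-1] if self.whys else "not yet analysed"

# Illustrative content only.
rca = (FiveWhys("Defect found after the release was cut")
       .ask("An essential scenario was never tested")
       .ask("The scenario was not in the acceptance criteria")
       .ask("Stakeholders were not asked about edge cases during refinement"))
print(rca.root_cause())
```

The point is less the data structure than the discipline: each answer becomes the next "why?", and the final answer is what the team changes in how they work.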


02:13 pm October 6, 2021

Thanks Thomas.



4 x 3 week sprints.



Yes, there does seem to be a difference between what is being tested during a sprint and what is being tested after a release. I am unsure what the difference is.



RCA, yes, I've done this before and have just met with the teams to discuss restarting this initiative. There is a good article here on the use of categories:

https://medium.com/propertyfinder-engineering/root-cause-analysis-as-a-…


03:29 pm October 6, 2021

You may want to see if the problems being found downstream are actual code defects or are caused by a different environment makeup. It is not uncommon for a Development or Test environment to be configured differently from the Production environment. In my QA days I always made sure we had a test environment configured the same as Production, but on a smaller scale.
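One quick way to chase that suspicion is to diff the two environments' configurations and see what actually differs. A minimal sketch (the settings and values are hypothetical, purely for illustration):

```python
# Hedged sketch: surface every setting that differs between two
# (hypothetical) environment configurations, including keys present
# in only one of them.
def config_diff(test: dict, prod: dict) -> dict:
    """Return {key: (test_value, prod_value)} for every differing key."""
    keys = test.keys() | prod.keys()
    return {k: (test.get(k), prod.get(k))
            for k in keys
            if test.get(k) != prod.get(k)}

# Illustrative values only.
test_env = {"db": "sqlite", "cache": "off", "workers": 2}
prod_env = {"db": "postgres", "cache": "on", "workers": 16}
print(config_diff(test_env, prod_env))
```

Each entry in the output is a candidate explanation for a defect that only appears downstream; a defect that disappears once the configurations match was an environment problem, not a code problem.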

Root Cause Analysis is a great way to start diagnosing how and why you have this situation.  And don't be surprised if you find a problem, address it, only to find that another problem has been uncovered. 

