
What could we do to fit everything (regression bug triage, bug fixes, UAT, Go/No Go, release) within a 2-week Sprint?

Last post 06:28 pm October 30, 2023 by Oleksandr Khorunzhykevych
6 replies
08:55 am May 2, 2023

Currently, we have a two-week Sprint cycle that starts on Thursday and ends on Wednesday. The last two days of the Sprint (Tuesday and Wednesday) are for regression testing and UAT (which is done by our Product team), and we work on the regression/UAT bugs in the next Sprint, which kind of ruins the next Sprint because we didn't plan for it; even if we keep a placeholder for it by assigning a few points, it's not working. Then we have a Go/No Go call for that release in the next Sprint, and if the issues found in regression/UAT are not fixed by then, it delays the release. I can not increase the Sprint duration or start regression a little earlier, because that would affect development (the Developers would only get 8 days to complete all the Sprint work). I'm kind of stuck here and don't know how to fit everything into those two weeks, so that we don't have to carry work from one Sprint into the next and can complete the Sprint with the release and everything (regression/UAT and their bug fixes) done within those two weeks.



* We do manual testing; it's not automated yet.


09:39 am May 2, 2023

Hi Kajal,

I can't give you a ready-made solution, but since I'm just starting a new job where the teams have similar issues, I thought I might share my view on it.

From a Scrum perspective, the team should be able to bring items to a Done state, which in your case seems to be after the UAT. If that is the case, I would assume the Product team is part of the Scrum Team; otherwise your team can't get things to Done on its own.

If they are on the team (or you can get them there), you can make sure testing is done as soon as a developer has finished their part of a story, and let them work together to bring the story to completion at any point during the Sprint.

If they stay independent, then I would say your Definition of Done should be altered to a point where you can deliver independently; in this case, that means before UAT.

What you get back from UAT will still haunt your next Sprints, but now the team can focus on how to deliver to UAT without getting things back, so make sure the quality is as high as you can get it before handing work over to UAT. After all, if you deliver directly to a customer, you will also have feedback to incorporate in your next Sprints, unless your team makes sure they deliver what is needed at a high enough quality.


09:44 am May 2, 2023

There are a few things that you can do, but none are easy.

First, I would prioritize automating your testing: minimally your regression testing, but developing automated tests for newly completed Product Backlog Items is also important. This will enable you to run the tests more easily and more often, maybe even as part of the Definition of Done for each PBI. If each change had a full set of automated tests, it would stop defects from entering the product.
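As a rough illustration of that idea (the function and its behaviors here are hypothetical, not from the original post), each completed PBI could leave behind a small automated regression test in plain Python, so the whole suite can be re-run on every commit instead of in a two-day manual window:

```python
# Hypothetical sketch: a tiny regression suite for one completed PBI.
# In practice, every finished Product Backlog Item would contribute
# tests like these, and the full suite would run on each commit.

def apply_discount(price: float, percent: float) -> float:
    """Return price after a percentage discount (illustrative business rule)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount_regression():
    # Each assertion pins down behavior a past Sprint delivered,
    # so a future change that breaks it fails fast.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(100.0, 0) == 100.0
    assert apply_discount(100.0, 100) == 0.0
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass  # invalid input must keep raising, not silently succeed
    else:
        raise AssertionError("expected ValueError for invalid percent")

if __name__ == "__main__":
    test_discount_regression()
    print("regression suite passed")
```

Run under a test runner such as pytest, a growing suite like this is what lets regression checking happen continually rather than on the last two days.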

I would also look at moving "UAT" out of the Sprint. Without knowing your product, the fact that the development organization is performing UAT is a warning sign: User Acceptance Testing is a validation activity that requires a deep understanding of the intended use of the system. It seems like the Product team doesn't trust the Developers and is likely redoing their testing. Perhaps find ways to incorporate the Product team's work into the automated testing so that there is no duplicated effort and less manual effort, letting the team take those days back for working toward the Sprint Goal.

The ultimate goal is to not need a Go/No Go call at all: at least once during the Sprint, the team releases an increment that is of sufficient quality to release. If the increment is being rejected due to quality issues, I'd suggest root cause analysis to find the problems preventing you from getting there.


10:05 am May 2, 2023

Why is it concentrated on the last two days of the sprint? Can you elaborate on that? 

Does the team have a definition of done? Acceptance testing should definitely be part of it.

Another thing to consider is to decouple the release cycle from the sprint cycle. Even if you only do it as a thought experiment, it can help you to uncover and resolve problems.


10:14 am May 2, 2023

We do manual testing, It's not automated yet.

There's your problem. Manual testing is laborious and error prone, and you're deferring significant risk around having a Done increment until the end of the Sprint. It's a mini waterfall project, and these are the consequences.

Why don't the Developers reduce the amount of work they pull in each Sprint, so appropriate automation can be put in place, and regression testing of an integrated increment occurs continually?


04:34 pm May 2, 2023

This can be an acceptable practice.

last two days of the sprint (Tues, and Wed) are for regression testing and UAT (which is done by our Product team) and we work on those regression / UAT bugs in the next sprint

The problem is this part of the explanation:

which kind of ruin the next sprint because we didn't plan for it,

It sounds like you are planning your next Sprint before you complete the current one. Since the regression and UAT testing are done on the last two days of the Sprint, you should know about any issues/defects found by the time you reach the Sprint Review. Those discoveries can be part of the discussion in the Review that affects the Product Backlog. Then, after the Sprint Retrospective closes out the current Sprint, your next Sprint Planning can take all of those defects/issues into account for the next Sprint.

Remember that there is nothing in the Scrum Guide that says you have to release every Sprint. It only states that at least one usable increment be created each Sprint. The word "release" only appears twice (once as "release" and once as "releasing"), and neither is in reference to how or when something is released to Production. (This is my opinion, and I know that others have different opinions.)

I will echo everyone else's suggestion to automate your testing if you can. Remember that a large part of testing can and should be done closest to the code that executes the functionality, so investment in unit, system, and integration tests will help a great deal. Not having this capability is technical debt, and paying it down is work that will improve the product, so there should be items for it in the Product Backlog. Regression testing can then occur every time the code is submitted for build or deployment.

I have 25+ years in software quality assurance, as a manual tester and a coder of automation. I started doing software testing before it was possible to automate anything through the UI, and I have worked with organizations to completely remove their regression cycles with these practices.

I still believe that the UI can benefit from human eyes, so manual testing is something I advocate at that level. However, that testing is only for the UI: did all the information show up in a readable format? Can all of the buttons be accessed and clicked at the appropriate times? There is no reason to manually test that if I enter "XYZ" and submit, it returns "ZYX", or that appropriate error messages are returned for data validation, because automated unit, system, and integration tests already validate all of that.
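The "XYZ returns ZYX" point can be sketched in a few lines (the `submit` function and its reversal behavior are assumptions for illustration, not from the post): the business logic behind the button gets pinned down by unit tests, leaving only the UI itself for human eyes.

```python
# Hypothetical sketch: the logic behind the submit button is covered by
# automated unit tests, so manual testers never need to re-check it.

def submit(value: str) -> str:
    """Business logic behind the submit button (assumed for illustration)."""
    if not value.isalpha():
        raise ValueError("input must contain letters only")
    return value[::-1]  # illustrative documented behavior: reverse the input

def test_submit():
    # "Enter XYZ, get ZYX" is verified here, not by a human.
    assert submit("XYZ") == "ZYX"
    # The data-validation error message is pinned down too.
    try:
        submit("123")
    except ValueError as e:
        assert "letters only" in str(e)
    else:
        raise AssertionError("expected ValueError for non-alphabetic input")

if __name__ == "__main__":
    test_submit()
    print("unit tests passed")
```

With checks like these in place, manual effort can be reserved for questions no assertion answers well, such as whether the screen is readable and the buttons reachable.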

 


10:17 pm October 28, 2023

Manual testing is laborious, error prone, and you're deferring significant risk around having a Done increment until the end of the Sprint.

Let me disagree with this. While implementing wide automation and a "proper" CI/CD pipeline definitely has its benefits, it doesn't guarantee success. In my personal experience, there are plenty of cases where automation can make a situation even worse. For example, a complex embedded system may have insufficient RAM/flash memory to hold all the code/APIs necessary for wide automated testing (of course, there are workarounds like simulators/emulators or special debugging hardware, but then it's a question for the team/project whether that is affordable within the budget). The second example is large and complex systems. Do not forget that automation doesn't mean you only ever move forward; it also means you have to maintain that automation and/or CI/CD. You have to keep software, OS versions, drivers, libraries, etc. up to date. At the same time, you need to support the "emergent architecture" paradigm, and sometimes small but necessary changes in architecture may lead to huge changes in automation.

From the Scrum Guide 2017:

The purpose of each Sprint is to deliver Increments of potentially releasable functionality that adhere to the Scrum Team’s current definition of “Done.”

I would like to highlight "potentially releasable".

From the Scrum Guide 2020:

Work cannot be considered part of an Increment unless it meets the Definition of Done.

If it is not an organizational standard, the Scrum Team must create a Definition of Done appropriate for the product.

Essentially, this means that even if a very complex system requires regression testing, UAT, performance testing, vulnerability testing, passing a release checklist, and so on, as long as the whole Scrum Team follows a Definition of Done that says nothing about passing those tests, an Increment is born.

 

Mainly, I would say that automation and DevOps practices are nice to have, but every team/project has to decide whether they are applicable in their own case or not.

The word "release" only appears twice (once as "release" and once as "releasing").  But neither of them are in reference to how or when something is released to Production.

I fully agree with this statement. The team should create a usable increment by adhering to the DoD. Beyond that, it is mostly a question for the team/project/product/company/external factors/etc. Consider FDA certification: the team can perform all the necessary actions on their side, but they cannot release/deploy the increment unless the FDA approves it. The same goes for company policies and agreements with end users, e.g., a company may require running performance/regression tests, passing release verification, or an audit by a third-party company (or any other activity) before release. As long as that requirement isn't written into the DoD and this is acceptable to all sides, the Increment is born.

 

When some party requires extra steps and wants to include them in the DoD, the topic has to be widely discussed. E.g., if the PO wants to run a full regression on every increment because the defect rate in the client environment is increasing, the whole team should discuss this, create a plan for how to achieve the goal, and set proper expectations. Only then can it become part of the DoD.

