Final Sprint and Testing
A client of mine uses Scrum and iterative sprints to a certain extent, but the team still falls into the pattern of doing all of our work (identifying what we think is ready to go) and then allocating a final sprint for any regression bugs or final changes that come up from the client. This is especially the case with updates to an existing product.
The final sprint doesn't have new user stories in it, but we can't do a release because of these bugs.
Is this just part of the process (I don't think so), or where are we going wrong?
This isn't a case of not defining done: we hold a bug triage meeting to determine whether these are things worth fixing (i.e. worth the risk) before the final release, which is what forces us to create another sprint.
To give more detail, imagine a process of four sprints plus a final sprint for upgrading the product:
Sprint 0: Fixing previously identified issues (2 weeks)
Sprint 1: Major User Stories / new bugs (2 weeks)
Sprint 2: Major User Stories / new bugs (2 weeks)
Sprint 3: Minor User Stories / new bugs (2 weeks)
Sprint Final: Bugs / new regression bugs, etc.
Sprint Final always seems to drag on. Should we simply be saying "tough, Sprint Final becomes Sprint 4, 5, 6 until the client decides to go"?
If so, the client runs the risk of losing members of the development team due to inactivity.
Thoughts? (I couldn't find a post describing a similar issue.)
> This isn't a case of not defining done
So you are saying that you have an adequate definition of done? If that is true, then it is a case of not following the definition of done. However, I would also recommend questioning the existing definition of done and looking for ways to improve it.
"Stabilization sprints" are a clear smell which show you that agile practices are not being performed in the team.
You need to do a root cause analysis to identify where in the process the bugs emerge and how they can be found as soon as possible, at the latest before the end of the sprint.
This kind of analysis is usually done in the retrospective.
The advantage is that you can give the customer a better forecast of when a feature will be finished, and a release can happen in any sprint, not only after five sprints.
> The final sprint doesn't have new user stories in it but we can't do a release because of these bugs.
Each sprint increment must be potentially releasable. If you can't achieve that because of bugs, then your implementation of Scrum is broken. As Ludwig says, you really need to critique your Definition of Done, since it effectively asserts the quality of your sprint deliverables.
The approach you have taken is sometimes referred to as a pre-release "hardening sprint". Up until that sprint, technical debt is allowed to accumulate along with waste such as defects. Although very common in industry, in agile terms it is an anti-pattern.
Thanks for the great responses.
I agree that the definition of "Done" needs to be critiqued, especially with regard to building up the user stories that define the work. That would be the number one problem. Each sprint's output has been "potentially" releasable (based on an internal definition of done) except when the final client was involved, as opposed to the Product Owner, who acted as the "voice" of the client.
But let me correct my earlier statement: it's not that we COULDN'T do a release; it's that the politics involved would not let a release go without certain items.
Your responses have been very helpful on this. This is an organization where change is difficult to achieve, so being able to bring this kind of feedback into the retrospective will be valuable. In particular, it helps to show the organization that the assigned Product Owner was not able to act as a good representative of the client, and that this leads to the inherent problems in the delivery.