Is it a new Bug, a PBI, or something else (i.e. an "Issue")?
What is the best practice for handling unforeseen issues that come up during testing? I'm referring to what may be debated as either a bug or a late-breaking discovery of an existing, yet undiscussed, requirement.
Must support installation on SQL Server.
The above requirement is satisfied according to the definition of done in the current sprint. However, testing reveals that non-default SQL Server instances (i.e. instances named something other than MACHINENAME\MSSQLSERVER) must be supported as well. This is not a "new" requirement, per se, but it is new relative to the definition of done for the current sprint.
So, I think it’s a bug if it’s a piece of delivered value that’s defective based on the acceptance criteria, or a new PBI if it represents some new/additional value to be delivered. My vote is for the latter in this case.
There are some suggestions out there on how to handle this case, including creating a third type of backlog item. I don't know if I agree with what this guy says here under "New Definitions": http://www.axisagile.com.au/blog/quality-testing/bah-scrum-bug-how-to-m…
The best way to handle this is to improve the collaboration between the PO and the Dev Team on understanding the requirements, and to improve the DoD and the PBIs accordingly and continuously: the DoD in the Sprint Retrospective, the PBIs in Product Backlog refinement.
For the specific case, I would say if the acceptance criteria of the PBI are not met, it's not done. If the acceptance criteria are not explicit or not specific enough, and the story fulfills the definition of done, the PBI is done and the "new" requirement should become a new PBI.
This should be seen neither as a bug in the product, nor as a new requirement, but rather as a defect in the process. It shows that rework is happening in some cases. That rework is waste, and the team should seek to improve the situation no later than the next Sprint Retrospective. A test-first approach can help avoid the late discovery of pertinent acceptance criteria.
I don't see how TDD would solve this problem, given that the requirement was unknown (or at least not called out explicitly) ahead of time.
I do agree that it's a defect in the process. My team is just hung up on what to call this thing - a bug or a PBI.
(aside, no such thing as a "Best Practice" in a complex domain like software dev -- everything is contextual)
Mark, has your team tried going through this flowchart to see what their answer was?
Thanks Charles!!! That's EXACTLY what I was after!
Happy to be of help, but if you get a minute it would be great to get feedback on what your team thought of the chart and/or their issue. I suspect you might hit the 10 minute rule on previously understood requirements... OR... it might be quite clear that no one realized that was a requirement. This is part of complexity -- sometimes you should have known, and it means you need to improve your process... but sometimes you can't uncover a requirement until you peel off the layer of the onion above it, and you couldn't have predicted it. All possible scenarios in a "complex" domain.
Anyway, happy to be of help, and best of luck!
I will likely not get you an answer today, but I will make sure I bring this up in our retrospective this week. I have a laundry list of process-related items like this :)
As an aside, I wonder if part of the "is it a bug?" calculus is the perception outside of the team (i.e. exec mgmt) on the bug metrics. That is, if we keep calling things "bugs" that really may be incompletely understood requirements, we'll get undeserved "bug demerits".
That can definitely affect it, but more important than anything is that each "takeaway lesson learned" will be different depending on the root cause of the issue.
So, for instance, the PO is responsible for describing requirements to the level needed, but most POs (hopefully the PO is from the biz side) would have no way in heck of predicting or knowing the requirement you uncovered. Sounds like your team may also not have been able to predict it...
But the net net of it is that you should classify it, then retrospect on it to see if there is a way to do things better in the future. Sometimes, usually rarely, the answer is "we couldn't have known it until we tripped upon it", or "yes, we could have known it if we did 30 hours of analysis on the problem, but it was actually more efficient to do 2 hours of analysis and then just give it a whirl, then take the extra 10 hours to implement the "discovered" requirement - thus saving 18 hours of effort!" So, actually, the way we did it was fine!
Many other times, if we're honest... the answer is "we can do better." Just remember... that's NOT always the answer in a complex domain. (Google the "Cynefin" framework for more about what I mean wrt "complex domain")
I just watched this video on the subject: https://www.youtube.com/watch?v=N7oz366X0-8
What a wonderful way to explain (albeit a little "complicated") Emergent Practice, as it applies to Scrum. I totally see the truth in it.
Thank you for this valuable resource. Where did you find it?
Cynefin is a well known model in the Scrum trainer community, and lesser known but still known within the Agile enthusiast community. We use it sometimes to describe that Scrum really thrives in the complex space (where s/w dev lives), and how you might not have as much success with Scrum in the complicated space (where Network support might live -- Kanban might be better for net support, IT help desk, etc). Manufacturing might fall into the obvious(formerly known as simple) or complicated space.
One of the main problems with traditional PM/PMI techniques is that they assume s/w dev is in the obvious or complicated space, which is why those techniques are 3X less successful than Agile/Scrum in s/w dev. Pick the right process for the right kind of work, kind of thing.
> I don't see how TDD would solve this problem,
> given that the requirement was unknown (or
> at least not called out explicitly) ahead of time.
Think about how testing caused the requirement to be discovered. If the test is identified first, rework and waste may be avoided.
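To make the test-first point concrete, here is a hypothetical sketch of this very case (the function and its rules are invented for illustration). Writing the acceptance test first forces the team to enumerate the server addresses they claim to support, which is exactly where the named-instance gap would have surfaced before implementation rather than during late testing:

```python
import re

# Hypothetical product code: a validator that, written code-first with only the
# default instance in mind, might have accepted "MACHINENAME" but not "MACHINENAME\INSTANCE".
def is_supported_server(address: str) -> bool:
    # Accept a bare machine name, or machine name plus an optional \INSTANCE suffix.
    return re.fullmatch(r"[A-Za-z0-9_-]+(\\[A-Za-z0-9_-]+)?", address) is not None

# The tests, written FIRST, make the supported environments an explicit, reviewable list:
assert is_supported_server("MACHINENAME")              # default instance
assert is_supported_server("MACHINENAME\\SQLEXPRESS")  # named instance -- the late discovery
assert not is_supported_server("")                     # empty address rejected
```

The value isn't in the regex; it's that enumerating the test cases up front is a cheap way to flush out the "oh, and named instances too" conversation while the PBI is still being refined.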
Love the chart!
One note, though: in your chart, it looks like a "Requirements Bug" is not really a bug, but a missing feature.
Would you consider a "Design Bug" in the same manner?
For example, "Software doesn't work under _____ High Contrast Color Schemes". The actual software works fine but it is missing support for this type of environment.
One could be nit-picky and go back to the PO and say "well, you never asked for this" but realistically, isn't this something that a member of the "Team" should have brought up during Sprint Planning?
So we did our retro on this past sprint. Here's what was captured by the SM:
> Unplanned work (bugs) is not following the process and is causing sprint deliverables to be at risk. We need to make sure that new / unexpected work is going through the same process as planned work. The team will have a daily triage session to review and edit (if necessary) in preparation for a future sprint. Product Owner will determine severity and priority.
The SM referred the team to the "official" flowchart, which is probably copy/pasted from some Microsoft resource:
I don't think this state diagram captures anything useful in terms of deciding whether or not to commit NOW. That's really the essence of the whole problem.
While your flowchart is useful, and provides substantially more value, I think the SM was hesitant to offer up an alternative to the "official" how-to-triage-bugs procedure documented internally. All this procedure documents is what fields in TFS to fill in :(
Ultimately, the fix-it-now vs. fix-it-later decision lies with the PO, with guidance from the team on effort, severity, etc.
> One could be nit-picky and go back to the PO and say "well, you never asked for this" but realistically, isn't this something that a member of the "Team" should have brought up during Sprint Planning?
The PO is responsible for all decisions wrt requirements. If no requirement is ever created (by whomever) to make your system work on a particular platform/environment, then the requirement does not exist, and thus, if there is no requirement, there is no bug. If no one (including the Dev Team) thought to add it as a requirement, then it's simply a "missed feature". I would hope that a good Scrum team is collaborating well enough that they might bring this up, but pointing blame at the PO or Dev Team is not the important part here. What's important is "Do we have a good shared understanding of what platforms/environments we support?" If this issue brings that point to light, then let's get to fixing that problem! Let's make a list of all platforms/environments we support!
Also, there are likely to be lots of "technical thingies" that a business side PO would never think of, and this is why the PO + Dev Team collaboration is so important. This is also why it is perfectly fine for the Dev Team to come up with requirements. In my classes I sometimes say that the 'PO has the "last say" on requirements, but that doesn't mean they have the "only say."'
As an aside, this kind of requirement is something I consider to be a "non functional requirement", which is perfectly well suited as PBI and/or on the DoD. See here for more:
> I don't think this state diagram captures anything useful in terms of deciding whether or not to commit NOW. That's really the essence of the whole problem.
Agreed. It's simply a diagram that explains how a tool works, and the diagram is useful for *that* purpose only.
> All this procedure documents is what fields in TFS to fill in :(
Again, not exactly helpful from a Scrum self organization perspective. It's more of a "here's how you fill out the red tape form -- don't forget your TPS cover sheet!" (movie reference)
> Ultimately, the fix-it-now vs. fix-it-later decision lies with the PO, with guidance from the team on effort, severity, etc.
Completely agree, so long as we use Scrum's mechanisms to decrease sprint scope when new unplanned work comes in. The PO should also be communicating to all interested stakeholders every time one PBI is given higher precedence than another mid-sprint. (Otherwise certain stakeholders will try to game the system by only bringing stuff in mid sprint) The SM and PO should be making it plainly transparent every time unplanned work causes a sprint scope change -- in other words, make the tradeoff highly visible to all stakeholders.
And lastly, like the chart says, we should constantly retrospect on ALL of those occurrences to find root causes and try to fix those root causes.
> And lastly, like the chart says, we should constantly retrospect on ALL of
> those occurrences to find root causes and try to fix those root causes.
Indeed. It is important to triage defects when they arise and to deal with them expediently, but they are still rework and waste.
Thanks Charles - I figured you would say something like that (not in a negative sense). It's really critical to ensure all groups are working together in the scrum meetings.
So do I have it right that an item like the one discussed gets raised in daily scrum and then put into discussion for the next sprint backlog where the PO can decide which way to go? And as noted, retrospect it so similar items can be identified earlier.
If it was truly a "missed feature", then I would think it could get raised as soon as it is found. Also, the PO might or might not be at the Daily Scrum. I would expect the Dev Team to raise it to the PO as a new PBI using some sort of typical communication mechanism like verbally, email (CC the whole team), etc. No need to wait for a Daily Scrum...
See these patterns for a little more detail on what I'm saying here:
"The After Party"
"Anti-Pattern: Save All Obstacles For The Daily Scrum"
My response above was for "Andrew" ...