Is estimating a Product Backlog Item mandatory in Scrum? What if the team wants to explore no-estimates? Would that mean breaking Scrum rules?
Would it mean breaking Scrum rules?
No, it won't, as long as empirical data guides your future Sprints.
Out of curiosity, when you say estimates, are you referring to the use of story points?
Even the assessment during Sprint Planning of whether an item is small enough to fit in a Sprint is a form of estimate.
What if the team wants to explore no-estimates?
Can you clarify what you understand the "no estimates" approach to mean, and how predictability would be achieved?
I mean no estimates of any kind, not just story points. The team is working on niche, cutting-edge technology and is facing a great deal of technical and solution uncertainty. The team runs Sprints with a mix of architectural spikes and stories. However, due to the evolving nature of the technology and the solution, the estimates for stories end up way off. After a few Sprints, the team now feels there is not much value in estimating the stories: it takes time while not really helping with predictability or productivity.
The Scrum Guide does state that:
Product Backlog items have the attributes of a description, order, estimate, and value.
However, there are no details as to what form the estimate takes. Teams can use methods such as ideal time, clock time, story points, t-shirt sizes, or something else entirely. The guide does go on to say that the team adds estimates to Product Backlog Items during refinement.
The description of Product Backlog refinement does contain one more piece of useful information:
Product Backlog items that will occupy the Development Team for the upcoming Sprint are refined so that any one item can reasonably be "Done" within the Sprint time-box.
A single Product Backlog Item can be finished, per the Definition of Done, within a Sprint.
How does a team determine if a Product Backlog Item is expected to be finished within a Sprint? By estimating it.
There's nothing that says that the attribute of an estimate cannot be as simple as "yes, this item is likely to fit within a Sprint".
On the matter of #NoEstimates, it's not about not estimating. It's about reducing or eliminating the waste in the estimation process. In my experience, teams spend a lot of time debating whether something is a Medium or a Large, 4 hours or 7 hours, 5 points or 8 points. I believe, and I suspect many of the #NoEstimates people would as well, that these discussions are simply unnecessary. That level of detail is often not helpful and doesn't add value to the downstream process. It would be more beneficial to spend that time on things like understanding the requested work, decomposing the work into smaller and thinner slices, or even getting on with doing the work.
So, no. I'd say that #NoEstimates is compatible with Scrum, as long as refinement produces Product Backlog Items that can fit into a Sprint. Perhaps your team has a different metric - maybe Product Backlog Items are things that can be done in 4 hours, 1 day, or some other unit of time. That's not only within the bounds of Scrum but also fine within #NoEstimates, as you are focusing on understanding and decomposing the work rather than providing an estimate of it.
the team now feels there is not much value in estimating the stories: it takes time while not really helping with predictability or productivity
How do they propose to forecast how much work they can take on in a Sprint, so a Sprint Goal can be met? Would they forecast a raw count of the number of items they hope to complete (i.e. a forecast of throughput), for example?
I guess, looking objectively at the Scrum Guide, there is an expectation of some kind of estimate for each PBI in a Sprint.
Agreed, it's up to the team to decide the kind, whether it is days, story points, t-shirt sizes or something else. The primary reason for an estimate is to help with predictability. In our case, estimation of any kind is not helping with predictability.
Throughput is definitely one possible option. Using it, the team can simply estimate the total number of PBIs they can take into a Sprint, without attaching an estimate to each individual PBI. This helps with Sprint Planning, and the team can spend more time on the actual detail of the work than on estimation discussions. However, predictability beyond the current Sprint could be a challenge using this option.
Perhaps Sprint throughput divided by the number of PBIs remaining in the Product Backlog is one way of measuring predictability. However, this assumes all PBIs are of equal size, which may or may not be true. But for the time being, this may be better than nothing. Thank you all for your kind responses.
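The naive forecast described above can be sketched in a few lines. This is a minimal illustration with made-up numbers, not real team data, and it inherits the stated assumption that every PBI is roughly the same size:

```python
# Naive Sprint forecast: divide remaining backlog items by the average
# number of items the team completes per Sprint. Assumes roughly
# equal-sized PBIs and a stable throughput, which may not hold.

def sprints_remaining(backlog_size: int, throughput_per_sprint: float) -> float:
    """Forecast how many Sprints are left, given a raw item count."""
    if throughput_per_sprint <= 0:
        raise ValueError("throughput must be positive")
    return backlog_size / throughput_per_sprint

# e.g. 42 PBIs left, team finishes about 6 per Sprint
print(sprints_remaining(42, 6))  # -> 7.0
```

The same ratio inverted (throughput over remaining items) gives the fraction of the backlog burned down per Sprint, which is the measure suggested above; both views carry the same information.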
"However this way assumes all PBI to be of equal size -"
Or you can take the hypothesis that each future PBI will be split into, let's say, between 3 and 5 PBIs before being considered "ready" by the Dev Team.
If you can solve the discrepancy between the size of items for the current sprint and the future (and Olivier has pointed you in the direction of a possible solution — another method might be to calculate based on how many items you had prior to splitting), then you might want to look into the Monte Carlo method.
It embraces the fact that throughput is somewhat random, and so instead of just taking a simple average, it simulates the likelihood of a certain throughput over coming sprints, weeks, months or years, based on what has happened in the past. e.g. if on average one in 4 sprints tends to result in a really low throughput, the calculation considers how likely it is to get unlucky and encounter several sprints like this.
It doesn't require all items to be an equal size, it just requires that your historic data is representative of the future. e.g. a stable team and workflow, and a consistent policy for how items are broken into smaller ones.
With a really stable workflow, you might even want to use daily throughput as an input, and anticipate the likely throughput for a sprint of perhaps 10 working days.
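The Monte Carlo idea described above can be sketched as follows. This is a hedged illustration, assuming each historic Sprint's throughput is equally likely to recur; the throughput figures are invented for the example:

```python
import random

# Illustrative historic data: items Done in each of 8 past Sprints.
# Note one in four Sprints had a really low throughput, as in the
# example above; the simulation naturally accounts for runs of bad luck.
historic_throughput = [6, 7, 2, 8, 6, 7, 3, 9]

def simulate(sprints_ahead: int, trials: int = 10_000) -> list:
    """Simulate total items completed over the next `sprints_ahead` Sprints
    by repeatedly resampling from past Sprint throughputs."""
    totals = []
    for _ in range(trials):
        totals.append(sum(random.choice(historic_throughput)
                          for _ in range(sprints_ahead)))
    return sorted(totals)

totals = simulate(sprints_ahead=3)
# The 15th percentile is a forecast we would expect to meet or beat
# about 85% of the time: "at least this many items in 3 Sprints".
print(totals[int(len(totals) * 0.15)])
```

Instead of a single average, this yields a distribution, so you can quote forecasts at a chosen confidence level. As noted above, it only works if the historic data is representative of the future: a stable team, a stable workflow, and a consistent splitting policy.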
"...and a consistent policy for how items are broken into smaller ones."
Please could you clarify what you meant by a consistent policy? Our team and workflow are steady.
Just so I am clear on the splitting aspect: is it splitting a PBI into tasks, or splitting a PBI into the smallest possible function?
I'm talking about splitting one PBI into multiple PBIs. For this level of predictability, it doesn't matter how the Development Team manages the work for each PBI, as long as they don't change something that can affect throughput.
One policy for splitting PBIs could be, "we will break it down into a smaller PBI if we believe it will take more than a week from development work starting, to it being Done".
An alternative one could be, "we will break down each PBI in the way that we believe will maximize feedback opportunities within the Sprint".
To be clear, when I talk about a stable workflow, I mean things like the amount of WIP and the cycle time are reasonably consistent from one sprint to the next.