Estimates at Feature and Story level - How to avoid double counting for burn-ups?

Last post 03:34 pm June 10, 2021
by Daniel Wilhite
4 replies
11:12 am June 9, 2021

Using the iceberg principle I aim to have a Product Backlog defined at a typical Epic>Feature>Story>Task type hierarchy. But what should happen with estimation at each level and avoiding duplication or double counting?

Epics are containers for one or more features, and a release backlog would comprise features defined and estimated to a level of completeness based on what is known at a given point in time. Features higher up the backlog are broken down into stories as part of progressive refinement, in relation to how soon they will be committed to in forthcoming sprints. It would be wasteful to decompose features into underlying stories for the entire release backlog: that goes against JIT principles, and features lower down may be subject to change.

My question is around what happens to those feature estimates, once the feature is broken down into stories? Example: the feature is estimated at 21 story points. It is decomposed into 3 stories, say of 8, 8 and 5 points respectively. Fine. Sometimes the sum of the parts might also be more, or less. I assume we would still retain the feature in the backlog, for clarity and visibility and maintaining the relationship to the relevant Epic, along with the new and now underlying stories (and by further extension, the tasks related to the stories). But should we now zero out the feature estimate, else we have a duplication of the 21 points? The implication is the effect on release burn-up charts. If the burn-up chart is configured to tally everything within the release backlog as the overall scope, then the related feature will need to be zeroed out, so as not to be double-counted. Alternatively, if the burn-up chart were configured to only tally feature level estimates as scope, then those story level estimates would then not be picked up (and they might have proved to be different to the original feature estimate).
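To make the double-counting arithmetic concrete, here is a minimal sketch (hypothetical item names and structure, not from any particular tool) of a scope tally that counts an item's estimate only when it has not yet been decomposed, so a feature's original estimate is superseded by its stories:

```python
# Hypothetical illustration: tally release scope by counting an item's
# estimate only at the lowest refined level (leaf items win).
from dataclasses import dataclass, field

@dataclass
class Item:
    name: str
    points: int = 0
    children: list = field(default_factory=list)

def scope(item):
    # If the item has been decomposed, its own estimate is superseded
    # by the sum of its children; otherwise use its own estimate.
    if item.children:
        return sum(scope(c) for c in item.children)
    return item.points

feature = Item("Feature A", points=21, children=[
    Item("Story 1", 8), Item("Story 2", 8), Item("Story 3", 5)])

print(scope(feature))  # 21 (8 + 8 + 5); the feature's own 21 is not also counted
```

In this case the stories happen to sum to the original 21, but if they summed to, say, 26, the tally would reflect the refined number rather than the stale feature estimate.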

What's the best approach to this? I want to promote good practice around feature/story splitting, as well as having a meaningful and accurate release burn-up!

09:26 pm June 9, 2021

What purpose would a so-called "release backlog" serve? Isn't a Product Backlog enough for Product planning?

If a hierarchical decomposition causes the issues you describe, the Product Owner is free to choose some other organizing principle. For example, a team might forecast the conversations (user stories) that ought to happen Sprint by Sprint, and then adapt that forecast based on lessons learned. Hierarchies are a traditional construct but they may not lend themselves well to emergence.

07:11 am June 10, 2021

Sure. I think of a Product Backlog as composed of stories to be completed across three distinct timelines: this iteration, the forthcoming release, and future releases.

It is only ever one Product Backlog, but I want to be able to report on that Product Backlog for the release burnup (scope versus velocity versus time, to predict landing zones). I want to avoid, in the most effective way possible, any double counting of estimates for those backlog items which are targeted for the forthcoming release, and hopefully while also retaining the relationships between Epics>Features>Stories.

I use Mike Cohn’s diagram below to explain the notional segmentation of the single Product Backlog.

Iceberg principle for a Product Backlog

07:40 am June 10, 2021

You could put points only on the lowest-level item available. So if an epic is not refined yet, the points sit on the epic. As soon as the epic is refined into features, you put the points at feature level and remove them from the epic. And if you then refine one of the features into stories, you put the points on the stories and remove them from the feature.

But be careful with using velocity and estimated points to report landing zones: there is a risk that stakeholders will see this as a commitment rather than a forecast for their epics.

03:34 pm June 10, 2021

I'm not a fan of burn up/down charts but some teams I've worked with liked them so I've helped them set them up. One thing that is often missed when defining a burn chart is that you only burn the work.  In your case, the Epic does not define the work. It defines a need.  The work is represented by the stories that are created to represent the work being done in the Sprints.  In my opinion any estimate placed on an Epic is worthless when you start to break it down into workable stories.

It has also been my experience that the epic estimates are usually extremely inaccurate.  As work is done, new information is discovered. If you are using the same scale for your epics and stories, they will never add up. It is much harder to put any kind of accurate estimate on something that is not well known.  

I also suggest that you be very careful making decisions based upon estimates. An estimate is a guess made with the information known at the time you make it. There are better ways to forecast using data based upon actual performance, such as the Kanban measures of lead time, cycle time, throughput, and work in progress. I have had very good success using the Actionable Agile Metrics for Predictability approach described by Daniel S. Vacanti (book: https://actionableagile.com/resources/publications/). His books are great sources of information for forecasting future deliverables. Read Actionable Agile Metrics for Predictability first. Navigate to the main site of that URL for the tools that have been created to provide the insights he describes in his book. If you happen to use Jira or Azure DevOps, there is an add-on available for those tools that will use their data to produce the charts.
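As a rough illustration of throughput-based forecasting (a simplified sketch, not the method from the book, with made-up throughput numbers), you can Monte Carlo sample historical weekly throughput to estimate when a given number of remaining items might be done:

```python
# Sketch of a throughput-based Monte Carlo forecast: repeatedly simulate
# finishing the backlog by drawing random weeks from historical throughput.
import random

random.seed(42)  # fixed seed so the sketch is repeatable

weekly_throughput = [3, 5, 2, 4, 6, 3, 4]  # hypothetical items finished per week
backlog_size = 30                          # hypothetical items remaining

def weeks_to_finish(throughput, remaining):
    weeks = 0
    while remaining > 0:
        remaining -= random.choice(throughput)  # sample one simulated week
        weeks += 1
    return weeks

trials = sorted(weeks_to_finish(weekly_throughput, backlog_size)
                for _ in range(10_000))
p50 = trials[len(trials) // 2]
p85 = trials[int(len(trials) * 0.85)]
print(f"50% of simulations finish within {p50} weeks, 85% within {p85} weeks")
```

The point of a forecast like this is that it is expressed as a probability range rather than a single date, which helps with the "commitment vs forecast" problem mentioned earlier in the thread.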