
This week we had a great Q&A on Forecasting and Release Planning. During the "Ask a PST" session, we answered many questions. We received so many that we could not answer all of them. This blog post tries to remedy that and answers most of the remaining ones from my personal perspective.
Questions answered during the session
The session was recorded and you can either listen to it or read through the transcript here. The following questions were answered during the live session (text formatting is my own):
- Why estimate at all if we work in a complex environment with high levels of uncertainty?
- If I have many teams that make up a release: I can see the estimates for my features for the release, and I know how much time we have left, but how do I know it’s on track using a chart? I don’t know that a Burndown/Burnup gives the full picture.
- Given that we are getting high level estimates, with multiple levels of abstraction removed from the code/feature on the ground, what would be the best way to set the tone with the leadership that these are estimates that are subject to change and show the changes and the impact to timelines and cost using change management?
- What are best practices for forecasting and release planning of topics that have interdependencies to other teams? Meaning, the other teams' input is a pre-requisite to move on.
- What are the factors that we should consider to predict estimated release date? A ball park for the stakeholders. What data from ADO or Jira can I use to come up with that?
- Can you speak more on the Release Planning portion of the overall topic? What are some general approaches to planning a release vs. planning a Sprint? Some teams release continuously throughout a sprint, and some wait a few sprints before having a release.
- What is your experience with using the monte carlo simulation for forecasting, if you like, what are the crucial points in implementing it?
- It seems like a burndown chart only works for a set scope of work, such as implementing a specific large feature. Is that correct? We are adding new features, running our platform, and fixing bugs. Does that change anything about estimation and burndown charts and delivery dates?
- Hi - how can a new team forecast? Being a new team, they won’t have any empirical velocity data, so they cannot use Monte Carlo Simulation, etc.?
- My team is fairly new to the agile methodology. Most of the time we end up extending our Sprint because stories get blocked on a system issue. Is this a good practice? What can we do differently to have a release after every two-week Sprint?
- In the case of multiple teams being involved in a project, using a single backlog to drive the progress, the challenge with story point sizing is that every team sizes stories differently. What are the recommendations on aligning sizing so that we can actually convert the estimates to dates in a way that works for all teams?
- One of the most common challenges we have with forecasting is that some percentage of the team almost always ends up supporting the previous batch of work past the start date of the next chunk of work. This isn’t reflected in the velocity, as "misses" from the previous release get added in as real tracked work, but less of that velocity is actually being applied to work that management sees as "started" and is expecting us to deliver based on the previous estimates. Any advice for managing expectations, preventing this, or getting a clearer picture of the impact earlier on?
- How do you deal with aligning the DoD and its forecasting implications?
- Any advice on estimating ‘Key Results’ for a Quarterly OKR? Typically Scrum teams might have only 2 sprint’s worth of refined & sized ‘ready’ backlog.
- You already mentioned rapidly, but can you please explain further: is Scrum and fixed time boxes still relevant (or not) in a DevOps-CI/CD context? What is the best work method to use when delivery is supposed to be continuous?
Questions that went unanswered
It was impossible to answer every single question during the live session. You know, timeboxing is a thing for these sessions as well. Your question should not be left hanging in the air, though. Therefore, I am happy to answer most of them in this blog post. Please note that some are missing: if I did not understand the question, or if it referred to something outside of Scrum or to a specific tool (e.g. JIRA), I did not include it here.
Question (Q) 1: What is your suggestion for doing simple forecasting for an upcoming PI Planning with 6 Scrum teams?
Answer (A): Do not "normalize" estimations across teams. Instead, create a joint roadmap with one lane for each team. Let each team - regardless of their estimation method - fill their lane for the next "PI" (or quarter, or Product Goal, or Release, or whatever). Then, look for dependencies. By doing this, you do not have to worry about specific estimation methods and you can start quickly. You can fine-tune this approach later.
Q 2: How to count in work that is not yet known, especially over a longer period of time?
A: In general, you cannot "count in" work you don’t know yet. You can research the past, though. My advice would be to measure the unknown work that suddenly appeared in past Sprints and keep a free buffer in your future plan. If fewer interruptions happen, pull in more work during the Sprint. If more happen, adjust your buffer.
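As a minimal sketch of this idea, with entirely hypothetical numbers: record how much unplanned work appeared in each past Sprint, average it, and reserve that much capacity as a buffer:

```python
# Sketch: size a Sprint buffer from past unplanned work.
# All numbers are hypothetical; units could be points, hours, or items.
unplanned_per_sprint = [5, 2, 8, 3, 6]   # unplanned work that appeared mid-Sprint
capacity = 30                            # what a Sprint can hold in total

avg_unplanned = sum(unplanned_per_sprint) / len(unplanned_per_sprint)
buffer = round(avg_unplanned)            # keep this much capacity free
plannable = capacity - buffer

print(f"Reserve {buffer} as buffer; plan {plannable} of {capacity}.")
# -> Reserve 5 as buffer; plan 25 of 30.
```

If interruptions trend up or down, the rolling average adjusts the buffer automatically over time.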
Q 3: What do you do if these 200 hours are completely underestimated and how can you avoid these estimates for the entire project?
A: This question relates to a special form of "Burndown Chart", called an "Agile Cone of Uncertainty", shown during the live session. Rephrased more generally, it would probably be: "What can I do if I largely underestimated the scope of the project/release/whatever at the beginning and created timelines etc. on that basis?"
Well, mistakes happen. The more experienced your team is and the more of your environment (technology, scope, people, etc.) is known, the better your estimates will be. You cannot expect "good" estimates with a new team, new technology, or unknown scope. If you want to avoid bad estimates in the first place, get rid of uncertainty. That is probably unlikely, though, since we are in the complex domain and therefore use Scrum. So you might not be able to prevent bad estimates. Instead, readjust your estimates as soon as you know more. In addition, try to measure the amount by which you are off and factor this into future estimates. Maybe your team is at least consistent in mis-estimating.
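That last idea can be sketched in a few lines. With made-up estimate/actual pairs, derive a correction factor from history and apply it to new estimates:

```python
# Sketch: measure how far off past estimates were, then adjust future ones.
# The (estimated, actual) pairs below are hypothetical.
past = [(8, 13), (5, 6), (13, 21), (3, 5)]

# Average ratio of actual to estimated effort across history.
factor = sum(actual / est for est, actual in past) / len(past)

new_estimate = 20
adjusted = new_estimate * factor
print(f"Correction factor {factor:.2f}: an estimate of 20 becomes ~{adjusted:.0f}")
```

If your team is "consistent in mis-estimating", this factor will be stable Sprint over Sprint, which is exactly what makes it usable.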
Q 4: I am working with quite a new Scrum Team, and we do complexity estimation with Fibonacci numbers.
The developers tend to estimate most stories the same, so it's often a 3 or a 5. Although we split the stories into very small pieces, their estimations somehow don't differ. How can I teach them to understand estimation better and to have a "comparison" of stories? We try a lot to teach them, but don't really succeed.
A: Have you asked them why they do this? Is it possible that they are afraid of estimating anything different than threes and fives, because they have had bad experiences with other numbers? Let’s assume this is not the case. Then you might benefit from a good practice I use quite often: When estimating in relative sizes, I put up a Flipchart and write the estimation categories underneath each other, e.g. 1, 2, 3, 5, 8, 13, 21, 40, 100. After each completed Sprint, I ask the Developers: "Which estimates of the PBIs we delivered were so good we should include them into our estimation reference?"
The ones named by the Developers then go onto the Flipchart into the lane with the corresponding number. When estimating new PBIs, we always compare them to the reference (=Flipchart) and not to the other stuff in the estimation session. This usually improves estimation greatly over time.
Q 5: Agile is about having a more fixed schedule and resources (with scope being flexible). But when coming up to the release date, there will always be unexpected work that will come up (defects, newly discovered requirements, etc.). What are some ways to take this into account when estimating the work?
A: If unexpected stuff "always" comes up, measure the amount and plan for it by including a time buffer. You might want to consider solving the underlying impediments though.
Q 6: In practice, what tools do you see the most success with for release planning? My organization is struggling with release planning. We have a variety of waterfall and agile teams working on ONE product and we are trying to find a single solution for everything which has been challenging.
A: Something I learned over time is that if it does not work with paper on the wall, no tool on earth will do the trick for you. Therefore, start with paper on the wall and iterate from there. I would recommend starting with one lane per team, stacked under each other, with each team planning the future as well as they can. This roadmap does not care whether a lane was filled in a waterfall way or by using Scrum or other Agile practices.
Q 7: I love that you are talking about Value and this may be a bit off topic, but I’ve never worked for a company that was able to define value clearly so we can measure it. Any suggestions on how to get there or perhaps any books or blogs you could refer. Again, this is off topic, so understand if you don’t have that available.
Q 8: Which books do you recommend on the topic of "Value"? Which approach do you recommend the most?
A: Value is probably THE most important topic for every company, agile or not. The "Grandfather of Agile", Tom Gilb, has published several books on the topic, my favorite being "Competitive Engineering". You can download it for free here. It is a bit difficult to read, but worth the time. To get a quick start in your organization, ask a couple of questions:
1) What does the customer want?
2) What does the customer want this FOR, so what does he/she get out of the product?
The result very often is something like "saving time" or "access" or something else, but never a specific feature. The customer always wants to use the feature FOR something, not in itself. The magic in value is the quantification. For example, if a customer’s primary value (with your product) is "security", this cannot be quantified, because "security" is just a word. If you start asking
3) How do you notice that security is there?, and
4) How do you notice that security is missing?,
they might come up with things like "hackers need more time to breach my system". This is measurable. It allows you, for every feature in your backlog, to estimate the impact on "time to breach the system for an average hacker" and thus quantify value. In my experience, this can be done for every customer and every product. With some practice, it’s actually easy. In my PSPO classes, I always ask the students to do this for "love". Yes, love can be quantified…
Q 9: How do you guide a team that is struggling to balance planned work for Development and QA in a Sprint? We have a 3:1 ratio of designated Devs/QAs. Often Dev finishes their development work way before the end of the Sprint and is asking for more work. We can't bring more work into the Sprint because the QA won't be able to complete it. Here are the options I have suggested. 1. Devs can help with QA tasks when they finish. 2. The team should assess if there are Spikes in the backlog that we can bring into the Sprint that will not require QA tasks. 3. Devs can begin working the next priority backlog items without bringing them into the current Sprint. Thoughts on these suggestions or other suggestions?
A: Scrum does not define sub-roles within a team; a role such as "QA" does not exist. Therefore, Developers would usually not build an internal waterfall process by pushing work from programmers to QA engineers. What you want to achieve is that all Developers (Scrum calls everybody who contributes a Developer, including QA engineers) work hand-in-hand, as a unit. They will have to find their own way to solve your specific situation. Maybe some of my experiences can be valuable for you: Are you using Test-Driven Development? This is a complete game changer for QA. Is quality everybody’s problem, or is it "outsourced" to QA? If it is everybody’s problem, the QA engineers will educate and train the programmers so that they can handle the standard QA tasks themselves, leaving only the difficult stuff to the QA engineers. What I would not do, though, is put even more waterfall into the process: programmers working on items in this Sprint (no matter what), QA engineers taking care of them the next Sprint, and programmers fixing the issues found in an even later Sprint. Please don’t do that.
Q 10: What do you say to management when they come to you and say: I need an estimate (in hours) for the introduction of a new product and I want to see whether we can introduce it at the end of 2025 with the current resources or whether we need to add new engineers? Then I need to know how many engineers I need to hire. One peculiarity is that we can only participate in a public tender once we have covered the scope that is required. So the scope is very fixed right from the start. Any solution?
A: A difficult situation to be in: complex development while the tender assumes it’s a simple domain. I would view, analyze and estimate the full scope. Then, I would look into past data and get an idea of estimation error margins. This would help me update my estimates. If the numbers then show a high risk of not hitting the target date, I would consider adding more teams. But beware: Adding people is tricky business. You are not automatically faster, because adding people increases complexity. You also will have to up-train people, which takes time. If this leads to a single-team environment morphing into a multi-team environment, problems multiply. You should factor these risks into your estimates and pricing.
One last remark: Not every tender is worth participating in.
Q 11: We have some support tickets that we need to bring into the sprint. These are sized so that a person can work on that ticket for the duration of the sprint. This is messing up my metrics, because they stay open the whole sprint. How can I work around this or show progress on items like "support" tickets?
A: If the support tickets describe the solution of a specific problem (e.g. fixing a bug), you can partition them into (sub-)tasks. For example, the developer will need to do analysis, coding, testing, documentation, etc. These can all be separate tasks that improve transparency during the Sprint. You might also want to consider showing two separate data points per Sprint: Velocity for finished Product Backlog items (without support stuff) and amount of support work done.
However, did you ask yourself the question why "your metrics" are so important to you and if they are of equal importance to your team? If you find metrics that are valuable for the people producing them, they will make sure to organize around them. You could consider using this as a topic for your next retrospective.
Q 12: How do you handle uncertainties or risks during forecasting?
A: I refer to past data or the best risk estimates we have and add them to the estimates. Alternatively, I come up with three separate estimates: best case, worst case and best guess.
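One common way to fold such a best case, worst case, and best guess into a single figure is the classic PERT weighting. This is not specific to Scrum, just a rough sketch with hypothetical values:

```python
# Sketch: three-point (PERT) estimate with hypothetical values, in days.
best, guess, worst = 10, 15, 30

expected = (best + 4 * guess + worst) / 6   # weighted mean, best guess dominates
std_dev = (worst - best) / 6                # rough measure of spread

print(f"Expected ~{expected:.1f} days, +/- {std_dev:.1f}")
```

Reporting the spread alongside the expected value keeps the uncertainty visible to stakeholders instead of hiding it in a single number.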
Q 13: What about using Monte Carlo as a Probabilistic Forecast (not an absolute), and with right-sizing of sliced stories, though? E.g. "there is an 85% chance we’ll finish by X date."
A: It’s a good approach. One thought, though: how can you "right-size" your stories? Usually, this requires a high level of knowledge and a medium or low level of uncertainty. If you have these, why don’t you just estimate with another method? Or better: can’t you just same-size everything and rely on counting instead of estimating?
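For readers wondering what such a probabilistic forecast looks like mechanically, here is a minimal Monte Carlo sketch with hypothetical throughput data: resample past weekly throughput until the remaining items are done, repeat many times, and read off a percentile:

```python
# Sketch of a Monte Carlo throughput forecast: "when will N same-sized
# items be done?" Throughput history and item count are hypothetical.
import random

random.seed(42)                              # reproducible runs
throughput_history = [3, 5, 2, 4, 6, 3, 4]   # items finished per past week
remaining_items = 30
runs = 10_000

weeks_needed = []
for _ in range(runs):
    done, weeks = 0, 0
    while done < remaining_items:
        done += random.choice(throughput_history)  # resample a past week
        weeks += 1
    weeks_needed.append(weeks)

weeks_needed.sort()
p85 = weeks_needed[int(0.85 * runs)]   # 85th-percentile completion time
print(f"85% chance of finishing within {p85} weeks")
```

Note that this only works because the items are treated as same-sized, which is exactly the counting-instead-of-estimating point above.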
Q 14: I would like to dig into these topics more. Do you have any recommendations for resources on estimation, burndown charts, planning, etc.?
A: There are a bunch of great ones in here.
Q 15: I am a software professional with 9+ years of experience, now transitioning into a Scrum Master role. Though I haven’t held the formal title, I’ve supported Agile teams and participated in ceremonies like sprint planning and reviews. I recently earned my PSM I certification, and I’m seeking guidance on how to highlight my relevant experience effectively, especially during interviews or on my resume.
A: Here is a podcast that may be helpful.
Q 16: If you use one backlog for all teams, how are the teams organised, and how do you prioritise work for each team? Do they all pick from the one backlog?
A: Ideally, they are all cross-functional and pull work into teams independently. The real world often is less fortunate though. Usually, I include members of all teams early during refinement, ask for dependencies, and color PBIs which are team-specific in the respective team color. The order of the Product Backlog then is primarily following value, dependencies, risk and costs. Sometimes we add the additional criterion of "enough work for a specific team", but this is usually a "smell" of teams not being cross-functional.
Some final thoughts
Forecasting and release planning is a difficult topic. On the one hand, organizations across the world and the leaders within them want to know when something will be delivered. At the same time, uncertainties render almost every estimate "wrong", no matter the method used. When approaching this topic, please use common sense. Don't cling to a specific method like a zealot. Use your brain instead.
You should also try to understand the people you are dealing with, both in development and management. Try to walk in their shoes and understand what their needs are. If you succeed in walking in their shoes for a mile or two, it will be far easier to have meaningful conversations with them. You might then even be able to give a different type of estimate, or deliberately imprecise ones, as long as your stakeholders' needs are still fulfilled.