
Forecasting in a VUCA world

May 30, 2023
[Feature image: a road in the forest leading into fog]

Answering the question, “when will it be done?” in the VUCA (volatile, uncertain, complex, ambiguous) environments that Agile was designed for has always been problematic. After all, the future is unknown. And yet, we are still expected to provide an answer. Many techniques and fads have emerged from the agile community to help teams provide some sort of answer, despite all of the uncertainty and unpredictability.

Agile forecasting approaches

In the early days of Agile, before Agile even had a name, story points emerged from the Extreme Programming community as a way to assign work items a relative size of effort. They were a tool for a team to plan how much work to bring into an iteration. Story points were all the rage when I first got into Agile, but they quickly came to be used in ways they were never intended for. The term velocity came to describe the amount of work completed in a unit of time, and story points and velocity became a way to forecast and set expectations, to compare teams against each other, to track estimates against actuals, and to pressure teams into delivering more and more. This misuse has led to story points falling out of favour, with even their inventor expressing regret at their creation, although it should be noted that this is more to do with their misuse than with them not being fit for their original purpose (see Story Points Revisited by Ron Jeffries for details).

Using story points and velocity to answer the question “when will it be done?” is certainly better than estimating in absolute time up front, but their use for these purposes still led to frustration and unmet expectations as teams uncovered unknown complexity or unplanned work. The difficulty of managing expectations led to a separate movement that advocated for teams to stop estimating altogether, with fierce debates ensuing on social media.

More recently, I have seen what are referred to as flow metrics and probabilistic forecasting become more mainstream, with many people in the Scrum community adopting them as a replacement for story points and velocity for forecasting. These techniques have been around in Kanban circles for many years. Flow metrics include cycle time (the time it takes from starting work to completing it) and throughput (the number of work items finished per unit of time). Probabilistic forecasts are commonly created with a Monte Carlo simulation, arriving at a time duration and the probability of a batch of work being completed within it. I won’t go into a detailed explanation of how Monte Carlo works; Troy Magennis, for example, has already provided an excellent explanation.
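Still, the core of the idea fits in a few lines of Python. The sketch below is a minimal illustration, not a production tool, and the throughput history and backlog size are invented for the example:

```python
import random

# Hypothetical historical data: work items completed per week.
weekly_throughput = [3, 5, 2, 6, 4, 3, 7, 1, 4, 5]

backlog_size = 30      # items still to complete (invented for the example)
simulations = 10_000   # number of simulated futures

weeks_needed = []
for _ in range(simulations):
    remaining, weeks = backlog_size, 0
    while remaining > 0:
        # Sample a random week from history and "complete" that many items.
        remaining -= random.choice(weekly_throughput)
        weeks += 1
    weeks_needed.append(weeks)

# The 85th percentile: in 85% of the simulated futures the backlog
# was finished within this many weeks.
weeks_needed.sort()
p85 = weeks_needed[int(0.85 * simulations)]
print(f"85% of simulations finished within {p85} weeks")
```

Each simulated future is just a resampling of the team’s own history, which is why no per-item estimation is required.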

These flow-based forecasts appear to many to be the solution to answering the “when will it be done?” question. The fact that they contain a time duration makes them more accessible than the abstraction of story points and velocity, and the probability element communicates the uncertainty involved. To add to the allure, the forecasts for upcoming work are produced from historical data, meaning that teams do not have to spend any time estimating at all.

Are probabilistic forecasts the solution?

Before throwing story points in the trash and fully embracing probabilistic forecasting, it is only pragmatic to apply some further scrutiny.

One of the first concerns that comes up with probabilistic forecasting is the question of whether all of the work items need to be the same size. There are several factors that influence the time it takes to complete work in VUCA environments. These include, but are not limited to:

  • Interruptions
  • Context switching
  • The amount of other work in progress
  • Changes in prioritisation
  • Delays
  • Dependencies
  • Blockers and impediments

These and other factors are part of the instability and unpredictability of the environment. Unless the relative sizes of work items differ by orders of magnitude, size has relatively little impact on the time work takes compared to all of these other factors.

For example, picture a team that starts two work items at the same time. One of the work items has been sized as 13 story points and the other as 1 story point. In a stable, predictable environment, it could be expected that the first item takes about 13 times longer than the second one. However, the reality in complex environments is different. If the 13-pointer is regarded as higher priority, the team may swarm on it while the other work item is left waiting. Or the 1-pointer may be blocked, waiting for a clarifying question to be answered or for a dependency outside the team. The “big” item may actually be completed faster than the “small” item.

Probabilistic forecasting proponents do not see this system instability as a problem. Because a Monte Carlo simulation utilises historic data, the instabilities are baked into the data. Contrast this with story points, where teams are still asked to estimate the relative effort of a work item, and environmental factors are rarely, if ever, considered.

This leads to the question of how much historical data is needed for a reliable probabilistic forecast. Despite my best efforts, I have been unable to ascertain a clear answer, by which I mean an unambiguous mathematical proof. In a way, this is reassuring, because every context is different. In simple terms, however, we can safely say that the more system instability there is, the more data you need.

If I am to set an expectation that there is an 85% probability that I will deliver a set of work items by a particular date, then to be technically and mathematically accurate, the historic data that I draw upon must contain enough data points to reflect the variation of the system while the work is being done. This might mean hundreds, if not thousands, of historic iterations and completed items. In my experience, teams and organisations are lucky to have tens of data points to draw upon. And if they do have a lot of data, then much of it is likely no longer relevant; the old data comes from a time when the environment was different from what it is in the present and will be in the future. People in teams come and go, teams move onto different features and products, tools and policies change, and so on. These changes themselves are reflected in the data, but there can be no certainty that the future will be like the past. Giving a probability of delivery by a date with a probabilistic forecast therefore has a danger of being misleading.
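To get a feel for how fragile a percentile forecast is with only a handful of data points, here is a rough sketch (all the numbers are invented). It reruns the Monte Carlo forecast from earlier, each time leaving out a single observation from a four-week history:

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

def p85_forecast(history, backlog=30, runs=5_000):
    """85th-percentile weeks-to-complete from a Monte Carlo run."""
    results = []
    for _ in range(runs):
        remaining, weeks = backlog, 0
        while remaining > 0:
            remaining -= random.choice(history)
            weeks += 1
        results.append(weeks)
    results.sort()
    return results[int(0.85 * runs)]

# With only four data points, dropping a single observation
# moves the "85% confident" forecast considerably.
small_history = [1, 3, 5, 8]
for i in range(len(small_history)):
    subset = small_history[:i] + small_history[i + 1:]
    print(f"history {subset} -> ~{p85_forecast(subset)} weeks at 85%")
```

A forecast that swings by weeks depending on which single data point happens to be in the history is exactly the kind of false precision to be wary of.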

A bigger worry than the mathematical accuracy has to be the people element. Like story points before them, flow metrics and probabilistic forecasting are tools, and every tool can be misused. Those that abused story points are capable of doing the same with probabilistic forecasting. Picture a stakeholder putting pressure on a team to deliver according to a forecast that was based on a backlog of work that has since grown, or where the original forecast was made with only 3-4 historic data points available, or where the forecast was made back when the team had no operational support or technical debt to contend with. Imagine comparisons being made between teams on their cycle time: some teams might measure cycle time for units of value, while other teams include subtasks, making their cycle time appear “faster”. Or different teams measure cycle time from different start and end points in their workflow.
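To make that last point concrete, here is a contrived example (the dates are invented) of how the same work item yields very different numbers depending on where each team starts the clock:

```python
from datetime import date

# One work item, with invented dates for its journey through the workflow.
added_to_backlog = date(2023, 1, 2)
work_started = date(2023, 1, 20)
done = date(2023, 1, 25)

# Team A starts the clock when the item enters the backlog.
team_a_cycle_time = (done - added_to_backlog).days  # 23 days

# Team B starts the clock only when work actually begins.
team_b_cycle_time = (done - work_started).days      # 5 days

print(team_a_cycle_time, team_b_cycle_time)  # same item: 23 vs 5
```

Neither team is lying; they are simply measuring different things, which is why comparing the raw numbers across teams is meaningless.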

A question of predictability

Don’t get me wrong, I believe that probabilistic forecasting is currently the best tool we have for making forecasts. I had been using the approach for years before it recently became more in vogue. However, it is not the silver bullet that many in the community believe it to be, and in my opinion, trying to provide accurate forecasts is not a problem we should be trying to solve anyway.

I have recently come to believe that if we want to consider ourselves “agile”, we should be moving the conversation on. Whether forecasting based on absolute estimates in days, relative estimations based on story points, or probabilistic forecasts based on historic data, we are still falling into the trap of trying to create an illusion of predictability in an unpredictable world.

I am not dismissing the need for expectation management, nor am I advocating for the stance of refusing to provide estimates. The question, “when will it be done?” is a fair one for stakeholders to ask, and it should be answered honestly, but in most circumstances, the honest answer is, “we don’t know”. Most environments that I have come across are inherently unstable: unstable teams, lots of unplanned work, changing requirements, changing priorities and so on. When the nature of probabilistic forecasts and the assumptions underlying them are not understood, there is just as much chance of them being misused as forecasts based on story points and velocity.

The questions we should really be asking

There is value in probabilistic forecasts, and in forecasts based on velocity and story points too. However, instead of focussing on the forecast as the end result, the real value can be in using it as a starting point for a conversation. Taking inspiration from Troy Magennis again, this means using forecasts to turn the conversation around: from trying to predict the future to centring the narrative on things more aligned to agile thinking, namely risk and value.

With a forecast in hand, discussions can be prompted by asking these sorts of questions:

  • Where in our environment are the risks that mean that the forecast might be wrong? And what can be done to mitigate them?
  • What are the sources of delay in the system?
  • What blockers have we had in the past, what are the chances of them happening again and the impact on our forecast if they do?
  • For this forecast to have a chance of coming true, what needs to be in place before we can start, and when do we need to start?
  • If it comes to it, what would need to be compromised in order to deliver according to the forecast? 
  • If the forecast is wrong and we can only deliver a limited amount of value, what is the next most valuable thing we should be doing?
  • If a stakeholder is asking about when a particular work item will be done, why is it important to them? Is it in the right place in the planned sequence of work? 

From an agile point of view, some kind of value, whether for a user or a customer, should be delivered from the first increment, or the increment should at least provide some value in the form of learning for the team. This value is usually only a hypothesis, and no one can be completely sure that it is valuable until it is actually delivered. Users or customers may not actually value what is released. If that happens, what would it mean for the rest of the planned work? There may be a need to rethink, and if so, all the carefully crafted plans about the future go up in smoke and the work to provide accurate forecasts was pointless. In that sense, any medium to long term forecasting in Agile is somewhat of an oxymoron.

Conclusion

The need for forecasts about the future is understandable, and such requests should not be ignored or wished away. There are various tools for forecasting. All tools can be misused, and no forecasting can guarantee 100% accuracy about the future. However, forecasts can provide a good starting point for a conversation about the inherent uncertainty in the environment, and what can be done to mitigate anything that adds to the risk of the forecast being wrong.

Change the question from “when will it be done?” to “what should we start next?”, and shift effort from forecasting to actually doing the work or removing obstacles and the causes of unpredictability. This helps to get things done as smoothly and as fast as possible. And the sooner this can happen, the sooner the direction of travel can be validated or changed if needed.

Feature image by Katie Moum on Unsplash

