
Scrum Forum

Weighted Shortest Job First

Last post 06:06 am July 1, 2016
by Nader K. Rad
6 replies
10:38 am July 9, 2013

Hi all,

Weighted Shortest Job First (WSJF) is a technique for backlog prioritization recommended by Dean Leffingwell. The calculation involves measures for User Value, Time Value, Risk Reduction / Opportunity Enablement (RROE) Value, and Job Size:

WSJF = (User Value + Time Value + RROE Value) / Job Size

User Value, Time Value, and RROE Value are meant to be numbers from 1 to 10.
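
To make the arithmetic concrete, here is a minimal Python sketch; the item names and all the scores below are invented for illustration:

```python
# Minimal WSJF sketch -- item names and scores are invented examples
backlog = [
    # (name, user_value, time_value, rroe_value, job_size)
    ("Feature A", 8, 6, 2, 5),
    ("Feature B", 4, 9, 5, 3),
    ("Feature C", 7, 3, 8, 13),
]

def wsjf(user_value, time_value, rroe_value, job_size):
    # WSJF = (User Value + Time Value + RROE Value) / Job Size
    return (user_value + time_value + rroe_value) / job_size

# Order the backlog with the highest WSJF score first
for item in sorted(backlog, key=lambda i: wsjf(*i[1:]), reverse=True):
    print(f"{item[0]}: WSJF = {wsjf(*item[1:]):.2f}")
```

Feature B floats to the top (WSJF = 6.00) because of its small Job Size, even though Feature A scores higher on User Value.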

I'm looking for some worked examples of this in practice. In particular, I would like to see how values from 1 to 10 are assigned to the variables in the numerator, and how Job Size is measured. Is a relative sizing approach taken, or are the values absolute?

Thanks

06:57 pm October 3, 2013

Ian,

Check out this article. It will probably give you what you are looking for:

http://scaledagileframework.com/wsjf/

07:08 pm February 12, 2014

Ian,
Check this article out and let me know what you think:
http://agilebuddah.blogspot.com/2014/02/5-steps-to-using-weighted-short…

10:34 am February 13, 2014

Hi Brad

I suspect that using WSJF to order a Product Backlog may invite vanity metrics. The question is, how are the variables to be populated in an unbiased and credible manner?

The article you linked to (thanks!) has the advantage of at least making things more democratic, in that planning poker is used to agree the values that certain attributes will hold. Yet clearly that could still result in "prioritization by persuasion". In Scrum, forecasts are applied by the Development Team to the smallest batch possible - the Sprint Backlog - in order to contain such estimation risk.

I reckon WSJF should be contrasted with more objective methods of backlog management, such as innovation accounting, whereby Product Backlog order is informed by actionable metrics derived from A/B tests and MVP releases.

05:44 pm May 27, 2016

As a feedback-driven approach to exploratory development, Scrum builds in an assumption that we will use small increments to prove or disprove hypotheses and make adjustments based on those results.

Values assigned for User Value, Time Value, RROE Value, and Job Size should be re-assessed based on real metrics. They should also be adjusted when strategic drivers change, or if something happens that changes the time-sensitivity of certain Product Backlog Items.

In other words, give these values a best-before date. Every Product Backlog Item has a half-life: if an item has been on the Product Backlog for an appreciable amount of time, or is constantly prioritised down, it should be reconsidered in its entirety.

03:56 am May 31, 2016

> In other words, give these values a best-before date

Yes, that would be sensible. The value measure given to a PBI does not suddenly disappear if that measure is unaddressed in a validated learning cycle. Rather, it is subject to gradual decay.
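
As a rough sketch of that decay (in Python; the exponential form and the half-life of six Sprints are entirely hypothetical choices):

```python
def decayed_value(initial_value, sprints_since_assessed, half_life=6):
    # Exponential decay: the value score halves every `half_life` Sprints
    # unless the estimate is re-validated against real metrics
    return initial_value * 0.5 ** (sprints_since_assessed / half_life)

print(decayed_value(8, 0))   # 8.0 -- freshly assessed
print(decayed_value(8, 6))   # 4.0 -- one half-life later
print(decayed_value(8, 12))  # 2.0 -- perhaps time to reconsider the item entirely
```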

06:06 am July 1, 2016

Hi Ian,
I’m not an expert in SAFe, but based on the literature on value management and the structure of that formula, using a relative “Job Size” should be fine; something like story points.

Generally, “value” is simply proportional to the benefit-to-cost ratio. In cases that do not involve procurement, where costs come only from human resources with more or less the same salary levels, we can safely replace cost with “job size”. If we assume that roughly the same number of people works on each item, we can even replace “job size” with duration.
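
As a quick sanity check of that substitution (the numbers are invented; the constant salary rate and team size are exactly the assumptions stated above):

```python
RATE, PEOPLE = 1000, 5  # assumed constant across all items

# name: (benefits, duration) -- invented figures
items = {"A": (40, 2), "B": (90, 6), "C": (50, 3)}

by_cost = sorted(items, key=lambda n: items[n][0] / (PEOPLE * items[n][1] * RATE), reverse=True)
by_duration = sorted(items, key=lambda n: items[n][0] / items[n][1], reverse=True)

print(by_cost == by_duration)  # True -- the constant factor cancels out of the ranking
```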

There are a few things about the formula that I’m not comfortable with:

1. Risk is only considered in the numerator (benefits), while each risk has the potential to affect both the benefits and the cost; e.g. a risk that may cause you to spend three times as much effort on the item.

2. There’s a risk of double-counting the benefits across those three elements: User-Business Value and Time Criticality are not independent of each other.

3. There's an issue with the suggested way of normalizing in SAFe: assigning 1 to the item with the lowest value in each column effectively changes the weight of that criterion. For example, if the items in our project are generally time-critical, the spread of Time Criticality scores shrinks, and that criterion's influence in the formula is automatically reduced, which is not right. The underlying problem is that relative values survive any linear normalization as long as we only multiply and divide them, while this formula adds them up (see the sketch below).
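
Here is the sketch mentioned in point 3 (invented numbers): dividing each column by its minimum preserves the item ordering when the scores are multiplied, but can change it when they are added.

```python
# Invented absolute scores: name -> (user_value, time_criticality)
items = {"A": (2, 40), "B": (9, 42), "C": (5, 50)}
names = list(items)

def rescale(col):
    # SAFe-style relative scoring: the lowest value in a column becomes 1
    lo = min(col)
    return [v / lo for v in col]

uv, tc = zip(*items.values())
uv_r, tc_r = rescale(uv), rescale(tc)

def order(scores):
    return sorted(names, key=lambda n: scores[n], reverse=True)

add_raw      = {n: u + t for n, u, t in zip(names, uv, tc)}
add_rescaled = {n: u + t for n, u, t in zip(names, uv_r, tc_r)}
mul_raw      = {n: u * t for n, u, t in zip(names, uv, tc)}
mul_rescaled = {n: u * t for n, u, t in zip(names, uv_r, tc_r)}

print(order(add_raw), order(add_rescaled))  # ['C', 'B', 'A'] vs ['B', 'C', 'A'] -- distorted
print(order(mul_raw), order(mul_rescaled))  # ['B', 'C', 'A'] vs ['B', 'C', 'A'] -- preserved
```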

Anyway, I still have a bigger problem with this formula. When it’s difficult to measure something, people usually break the calculation down into smaller pieces and try again. Sometimes that’s not the right approach, and we’re only fooling ourselves. In the case of measuring value for items in a project, it’s probably enough to assign a relative, intuitive amount to “benefits” and divide it by the size to get the “value”. Some people are not comfortable with that, because it’s subjective. However, the more detailed formula is still subjective, it leaves more room for bias, and, worst of all, it creates the illusion of precision, which is always dangerous.