
Walking Through a Definition of Ready

August 24, 2017

“The secret of success is to be ready when your opportunity comes” - Benjamin Disraeli

 

A glance back at “Done”

A few weeks ago we looked at the Definition of Done, which describes the conditions which must be satisfied before a team’s deliverables can be considered fit for release. That’s the acid test of what “Done” ought to mean. Can a team’s output actually be deployed into production and used immediately, or is any work still outstanding? We saw that a team’s Definition of Done will often fall short of this essential standard, and “technical debt” will be incurred as a result. This debt reflects the fact that certain things still need doing, no matter how small or trivial they might be held to be. Additional work will need to be carried out before the increment under development is truly usable. Perhaps there might be further tweaks to be done, or optimizations, or tests, or integration work with the wider product. Any such technical debt will need to be tracked, managed, and “paid off” by completing the outstanding work so the increment is finally brought up to snuff.

In other words, an increment is not truly complete if it is not of immediate release quality, since more work must be done before the value invested in it can be leveraged. Hence a Definition of Done must be articulated clearly, no matter how shoddy it might be. Only then can shortcomings in the standard for release be acknowledged and remedied. A Definition of Done is instrumental to achieving transparency, and of course any “deficit for release” should be made equally plain. A team which runs with a deficit for release cannot be said to be operating in an agile manner, since no increment will be released and inspected and adapted, and this must be recognized. Any technical debt incurred as a result of that deficit may then be tracked, so the nature and extent of it can be understood. This will give stakeholders an improved picture of how much work genuinely remains to be completed, and of the gaps which lie between the current operating model and robust agile practice.

“Done” can be at multiple levels

The Scrum Guide tells us that there can be multiple levels of “Done”. The Definition of Done sensu stricto must of course pertain to the increment, since that is the artifact which is ultimately subject to release. However, this does not imply that “Done” must be an atomic and indivisible measure. There’s nothing to stop the evidencing of “Done” from being built up gradually as work is performed, and there can be distinct advantages in doing so.

In a car factory for example, quality is likely to be inspected on an ongoing basis as work is completed. The testing of components as they are brought together is not likely to be deferred until the car finally rolls off the assembly line. The risk of finding problems so late in the day would clearly be high, and the opportunity to provide meaningful remedy at such a late stage would be limited. Extensive re-work might be necessary by that point. Testing at multiple discrete points on the assembly line, each time significant value is added, reduces the magnitude of such risk. Defects or other problems can be detected and resolved as close as possible to the time and place of the work being carried out. The opportunity for complex rework to accumulate, and for waste to be compounded, is thereby reduced.

A software development team can also use multiple levels of “Done” in order to inspect and adapt work on an ongoing basis, and thereby assure quality in the timeliest possible manner. For example, if a user story is being implemented then it is reasonable to expect that all of its acceptance criteria ought to be satisfied. Sometimes this particular level is referred to as “Story Done”. Before that work can be committed to a repository though, it might have to be peer reviewed, documented, or expected to satisfy a particular type or degree of test coverage. Such criteria would represent an additional level of “Done” to be satisfied before work-in-progress can be committed and integrated. The wider “Definition of Done”, which relates to the completed increment, would comprise these and any other levels of “Done”. However, it can be seen that the Definition of Done is not asserted atomically just prior to release, but rather at each point where effort has been invested in development and value is being added. Any problems can be swiftly uncovered and corrected, and action taken to prevent their recurrence. Rework and waste are minimized. In effect the DoD consists of multiple discrete checks and testable assertions, each of which is applied as closely as possible to the time and place of the relevant activities being carried out.
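To make the idea of discrete, testable assertions a little more concrete, here is a minimal sketch in Python. The level names, fields, and coverage threshold are illustrative assumptions of mine, not anything prescribed by the Scrum Guide; the point is only that each level of “Done” can be expressed as a check which is evaluated where the corresponding work happens.

```python
# Hypothetical sketch: "Done" expressed as a series of discrete, testable checks,
# each applied at the point in the workflow where the value is added.
# The level names, fields, and thresholds are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class WorkItem:
    acceptance_criteria_met: bool = False    # relates to "Story Done"
    peer_reviewed: bool = False
    test_coverage: float = 0.0               # fraction covered, 0.0 to 1.0
    integrated: bool = False
    release_notes_written: bool = False


def story_done(item: WorkItem) -> bool:
    # First level: all acceptance criteria of the user story are satisfied.
    return item.acceptance_criteria_met


def ready_to_commit(item: WorkItem) -> bool:
    # Next level: conditions to meet before work-in-progress is committed and integrated.
    return story_done(item) and item.peer_reviewed and item.test_coverage >= 0.8


def increment_done(item: WorkItem) -> bool:
    # The wider Definition of Done for the increment builds on the lower levels.
    return ready_to_commit(item) and item.integrated and item.release_notes_written
```

Because each predicate can be evaluated as soon as the corresponding work finishes, a failure surfaces close to its cause rather than just before release.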

Towards a “Definition of Ready”

Now, since a level of “Done” may be applied to each station in a workflow, it is reasonable to surmise that this includes the transitioning of work into the Sprint Backlog itself. In other words, before work can be planned into a Sprint, the relevant items on the Product Backlog must be “Done” in terms of being sufficiently well described and understood. The Development Team must grasp enough of its scope to be able to plan it into a Sprint, and to frame some kind of commitment regarding its implementation so a Sprint Goal can be met.

In practice, this standard is often referred to as a “Definition of Ready”. During Product Backlog refinement, detail, order, and estimates will be added or improved until the work on the backlog meets this condition. In effect Product Backlog refinement helps to de-risk Sprint Planning. By observing a Definition of Ready, the chances are reduced of a Sprint starting where Development Team members immediately shake their heads at Product Backlog items they do not sufficiently understand.

Many teams nevertheless struggle to implement a Definition of Ready. This is partly because refinement is hard. It takes discipline to reserve 10% or so of a team’s time during a Sprint - as the Scrum Guide recommends - so that the Product Backlog is adequately prepared. Some teams will cut this agile hygiene short, or otherwise try to wing it. The result is likely to be a shoddy Sprint Planning session in which the body of work selected is not sufficiently well understood, and the team’s ability to frame a suitable commitment is glossed over or faked. Yet even quite mature agile teams can fail to master “Ready”.

How much is too much?

Experienced developers are usually aware that a user story is meant to represent an ongoing and evolving conversation with stakeholders, and not a fixed specification. How then, they sometimes wonder, can a level of “Done” be applied to the refinement of such a backlog item? How much envisioning and architecting, and analysis and design, to say nothing of exploratory spike investigation, may reasonably be performed? How much is too much, and when does all of this activity start to encroach onto actual development work? Might a “Definition of Ready” potentially be an anti-pattern? Isn’t there a danger of the Sprint boundary becoming rather mushy, if backlog items are worked on both before and during a “Sprint”? How do we stop the Sprint construct from fading into irrelevance? If we are to stop things from going to the dogs, how can team members tell when “enough” refinement has been done, and any further evolution must constitute development effort?

To understand the answer, we need to bear in mind that Product Backlog refinement ought to de-risk Sprint Planning. Enough refinement will therefore have been done when a team can plan the relevant items into their Sprint Backlog as part of an achievable forecast of work. The acid test of that condition can be brutally simple. If work has been refined to the point that a team can estimate it, and it is thought small enough to be planned into a Sprint without being broken down further, then that might well be enough. Whether getting to that point requires analysis or design, or both, or even some coding in the form of an exploratory spike, is completely irrelevant. Enough must be done to make the scope of the item comprehensible in terms of its probable size. Any more refinement than that is waste, while any less will not be enough.

This condition - having Product Backlog Items which are sized and sufficiently granular - can represent a bare-bones Definition of Ready. It may be adequate for a team to pick up and run with, and it is certainly a condition which they should keep in the back of their minds. After all, if this criterion doesn’t hold, should they really frame a commitment which involves that work?

A team can up their game by asserting further conditions which must be met before work is considered ready for planning. It is reasonable, for example, to expect acceptance criteria to be articulated for a backlog item such as a user story. Without such criteria developers may not really understand the scope of the work or how it will be tested and validated. In fact, it can be hard for developers to truly estimate work at all unless the acceptance criteria are defined. That’s where the meat often is. They may also reasonably expect clear value to be associated with the item, and for this to be expressed in a succinct way which makes it quite evident. The standard user story format of “As a <type of user>, I want <some goal> so that <some reason>” is helpful and a team may reasonably expect something along these lines. They may further expect each item to be actionable in its own right and free of dependencies. Additionally, they may expect enough flex in the item to allow for experimentation in finding the best way to implement it.

Example Definition of Ready

These considerations are often summarized as the “INVEST criteria”, and they provide us with a useful Definition of Ready which can be applied to Product Backlog Items. By actively participating in Product Backlog refinement, a good Development Team will collaborate with the Product Owner in making sure that a standard such as this is observed.

I (Independent). The PBI should be self-contained and it should be possible to bring it into progress without a dependency upon another PBI or an external resource.

N (Negotiable). A good PBI should leave room for discussion regarding its optimal implementation.

V (Valuable). The value a PBI delivers to stakeholders should be clear.

E (Estimable). It should be possible to estimate the size of a PBI relative to other PBIs.

S (Small). PBIs should be small enough to estimate with reasonable accuracy and to plan into a time-box such as a Sprint.

T (Testable). Each PBI should have clear acceptance criteria which allow its satisfaction to be tested.
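As a rough illustration only, a team that tracks its backlog in a tool could encode a Definition of Ready like this as a simple checklist. The sketch below assumes a hypothetical BacklogItem structure and an arbitrary size threshold of my own choosing; it is not part of Scrum or of any particular tool.

```python
# Hypothetical sketch: an INVEST-style Definition of Ready as testable checks.
# The BacklogItem fields and the size threshold are illustrative assumptions,
# not part of Scrum or of any particular tool.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class BacklogItem:
    title: str
    value_statement: str = ""                  # e.g. "As a <user>, I want <goal> so that <reason>"
    acceptance_criteria: List[str] = field(default_factory=list)
    estimate: Optional[int] = None             # relative size, e.g. story points
    dependencies: List[str] = field(default_factory=list)
    negotiable: bool = True                    # room left to discuss the implementation


def readiness_gaps(item: BacklogItem, max_size: int = 8) -> List[str]:
    """Return the INVEST conditions this item does not yet satisfy."""
    gaps = []
    if item.dependencies:
        gaps.append("Independent: unresolved dependencies")
    if not item.negotiable:
        gaps.append("Negotiable: implementation is over-specified")
    if not item.value_statement:
        gaps.append("Valuable: no clear value statement")
    if item.estimate is None:
        gaps.append("Estimable: no relative size agreed")
    elif item.estimate > max_size:
        gaps.append("Small: too large to plan into a Sprint without splitting")
    if not item.acceptance_criteria:
        gaps.append("Testable: no acceptance criteria")
    return gaps
```

An empty result would suggest the item is ready to be considered at Sprint Planning; anything returned points to refinement still to be done.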

