
Measuring Success, Measuring Value

January 8, 2014

The aim to deliver valuable software is a great core principle of the agile movement. The difficulty, however, is that ‘value’ in itself is hardly quantifiable. Yet I believe it is imperative to think in terms of value in software development, and therefore to overcome some of the fluffiness attached to the word. If we don’t find actionable ways to deal with ‘value’, it may remain meaningless; another buzzword, another reason for people to think it’s time to move on to the next hype. Fortunately, quantifying value is not only difficult, it is also unnecessary, maybe even undesirable. It is better to measure whether we are delivering value effectively.

What we thought defined success

For many years we were tricked into believing that software development can be considered a success if we meet three criteria: deliver (1) all promised requirements, (2) within the planned time, (3) for the allocated budget. This is reflected in the famous iron triangle of software development.

Iron triangle

That in turn tricked us into believing that, in order to be successful, we had to exhaustively analyze, detail and describe all requirements upfront, and get formal approval and sign-off on them before the actual programming could begin. The underlying motivation is to secure the first element of what we were told defines success: the requirements.

The information gathered beforehand then becomes the input to careful and exhaustive calculations for the delivery part, often padded with some ‘contingency’. The underlying motivation is to set and fix the other two elements that we were told make up success: time and budget.

The result is poured into a plan, often complemented by risk mitigation strategies and other exception procedures. When this plan and all its components and clauses are approved and signed off, implementation can finally start. And then, obviously, it is just a matter of following the plan to be successful. Deviations from the plan, which seemingly happen sometimes, are unlikely to cause any real problems, because that is what ‘change request’ procedures are for (meetings, documents, approvals, and sign-offs).

Unfortunately, this was (and in many places still is) believed to be the only way to ensure the ‘success’ of software development.

Yet things aren’t that easy. The installation of the extra safety nets mentioned above is already an indication that, despite our firmness, this approach doesn’t really give us the certainty it claims to. And despite the safety nets, the detailed preparations and the seemingly perfect plans, many projects are simply cancelled (29%, according to the 2011 Chaos Report by the Standish Group) or still end up failing to meet the widely accepted success criteria of scope+time+budget (57%).

Some projects (14%), however, do deliver according to the three predictions. But that ‘success’ doesn’t take into account that quality was often lowered to unacceptable levels in order to live up to the success criteria; that is why it’s good to add quality as a fourth variable to the iron triangle (see figure). Success doesn’t take into account that the elapsed time might be according to plan yet, business-wise, extremely slow. Note that it is not unusual for projects to deliver software no sooner than 12-24 months after the start! Success doesn’t take into account that by the time the software actually becomes available for use, nobody may see much ‘value’ in the features anymore, either in the features themselves or in the functional way they were implemented: more innovative ideas have emerged, or better technological ways to implement them have been discovered. This connects to the finding that 64% of features produced in this way are rarely or never used.

The three factors they told us define ‘success’ fail to do exactly that. Even when they are met, they only reflect how successful the delivery of the software to the market was, not how the software is received or appreciated in the marketplace, let alone whether the approach helps us address changes or new requests from the marketplace. The three factors fail to show how valuable the software is; they only say that a version of the software got out the door under certain predicted conditions.

What we say determines success

Agile overcomes the fallacy of traditional, upfront predictions and descriptions with collaboration, communication, incremental progress and continuous improvement. All risks taken are limited in time through time-boxed iterations. Time-boxing allows adaptation and adjustment within the process.

agile value delivery

The agile movement stresses the importance of active business involvement as part of cross-functional development work. The agile movement stresses the need for frequent releases to the marketplace, to learn what is actually appreciated. There is no better risk mitigation than regular feedback from actual users.

Scrum, as the leading framework for agile software development, is designed upon the foundation that software development is too complex for a predictive approach. Scrum implements empiricism in software development, thriving on frequent inspections to decide on the best possible adaptations. Inspections, however, are pointless if the inspected information is not fully known (opaque instead of transparent), or if the inspection is not done against known and agreed standards and definitions.

The target of the adaptations is not only the software being produced; it is also the tools, the engineering practices, the relationships, the process, and the three elements they told us define success, i.e. budget, scope and time. Scrum does not care about the abused notion of ‘project’. The concept of a ‘project’ generally only serves the idea that success is possible if software is delivered on time, within budget, and with all the predicted features. Scrum focuses on the software product (having a role and an artefact for its owner and its backlog) and on incrementally sustaining that product. The foundational belief is that the success of software development is defined by the capability to satisfy and impress the consumers of the features, at a reasonable price, at a cost that has an effective return, and with creation happening in a way that is sustainable for all the people in the ecosystem of the product.

Yes, we can…measure

The success of an organization, in terms of survival, prosperity and leadership, increasingly depends on its ‘agility’, i.e. its overall capability to deliver value in a context of constant change, evolution, innovation, improvement and re-invention.

The difficulty in establishing the value of software or software requirements is that it depends on context, time, technology, market and business domain. Nor can it be reduced to a single parameter. Predicting value doesn’t make much sense either, since value can only be validated in the marketplace.

But if value defines success, how can we then measure whether we are successful?

Scrum.org has developed the Agility Path framework to help organisations increase their agility and move toward a value-driven existence.


Agility Path Circle

Agility Path does not focus on the elusive goal of calculating and predicting value, or similar magic. With Agility Path, the outcome of an organization’s work is measured with metrics that help it think about the way that outcome is achieved. Organizations get a clear indication of whether they are delivering value or not. If the metrics suggest they are not, there are many areas and domains to inspect further, and adaptations to make by installing, removing or improving practices, tools and processes. This is how the Scrum framework is used to manage the change process: the transformation an organisation requires to increase its agility.

The metrics provide the transparency needed to identify the areas where adaptation has the most impact. Think in terms of business metrics like employee satisfaction, customer satisfaction, revenue, and investment in agile. Think in terms of delivery capability metrics like cycle time, release frequency, and defects.

All metrics are consolidated into an overall Agility Index for the organization. This Agility Index is an indication of how effective the organization is in delivering value. An Agility Index is bound to a time, place and context. The exact figure is of no importance; the evolution is. The underlying set of metrics is of importance, and the insights gained from those metrics, as a source of improvement, are of importance.
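The consolidation idea can be sketched in a few lines of code. To be clear, the metric names, normalization ranges and weights below are illustrative assumptions, not Scrum.org’s actual Agility Path formula; the point is only that many heterogeneous metrics can collapse into one comparable figure per measurement period, so that the evolution, rather than the absolute number, becomes the thing to inspect.

```python
# Hypothetical sketch: consolidating normalized metrics into a single
# index per measurement period, so the trend can be inspected over time.
# Metric names, ranges and weights are assumptions for illustration,
# not the actual Agility Path formula.

def agility_index(metrics, ranges, weights):
    """Normalize each metric to 0..1 against its expected range,
    then combine the results as a weighted average."""
    total_weight = sum(weights.values())
    index = 0.0
    for name, value in metrics.items():
        lo, hi = ranges[name]
        normalized = (value - lo) / (hi - lo)
        normalized = max(0.0, min(1.0, normalized))  # clamp to 0..1
        index += weights[name] * normalized
    return index / total_weight

ranges = {
    "employee_satisfaction": (0, 10),  # survey score, higher is better
    "release_frequency":     (0, 12),  # releases per quarter
    "cycle_time":            (90, 1),  # days; inverted so lower is better
}
weights = {"employee_satisfaction": 1, "release_frequency": 2, "cycle_time": 2}

# Two consecutive measurement periods for the same organization.
q1 = {"employee_satisfaction": 6.0, "release_frequency": 1, "cycle_time": 60}
q2 = {"employee_satisfaction": 7.0, "release_frequency": 3, "cycle_time": 40}

trend = [agility_index(q, ranges, weights) for q in (q1, q2)]
improving = trend[-1] > trend[0]  # the evolution matters, not the figure
```

Note the inverted range for cycle time: because a lower cycle time is better, normalizing against a descending range maps improvement onto a rising score, keeping all metrics pointed in the same direction before they are combined.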

No magic, just hard work

There is no magical formula to calculate value. Even when trying to calculate value to steer development priorities, it remains a prediction, an assumption that will be validated (or contradicted) only when you actually go to market. You can never be sure.

The best we can do is capture the results of our actions with metrics, and take those measurements regularly. We detect trends and patterns (and prefer those over a naked, singular figure). We relate them back to how we work, change how we work, and measure the change in outcome of that assumed improvement. We do that endlessly and relentlessly. We continuously improve by learning and adapting. We get the most out of what we do.

This flexibility primarily helps organizations respond better and faster to change; it implements openness to change instead of ignoring or blocking it. Once the thinking gets ingrained and new ways of working are installed, organisations discover that there may be different options for responding, thereby embracing change as a source of information. And, finally, organisations start innovating, getting ahead and causing change (for others to follow). Thus an agile approach harnesses change for the customer’s competitive advantage, and for the organization’s agility.
React, explore, lead

