
Making AI Work: What Scrum Gets Right and Organizations Get Wrong

April 28, 2026

The empirical principles at the heart of Scrum are precisely what AI-driven delivery requires. Whether your organization actually supports them — and whether the framework itself is ready — is another matter.

I have spent more than thirty years working in enterprise software delivery. Over that time, I’ve seen a great many waves of change: service-oriented architecture, cloud migration, DevOps, the platform era. Each brought meaningful capability improvements, but also a fresh set of organizational habits that slowed things down.

The arrival of AI feels different in pace, but familiar in pattern. Developers write code faster, documentation that took hours is produced in minutes, and certain specialist tasks can be delegated to an AI assistant. These are real gains.

But when I look at the overall picture, at whether organizations are actually accelerating their digital transformation in the ways they hoped for and invested in, I am not convinced the progress is as clear as the task-level gains suggest. For the last few years I have been trying to figure out why, by looking at how AI is reshaping product delivery, where it is succeeding in helping enterprises transform, and where it is creating new complexity that organizations are not yet equipped to handle. That inquiry became my new book, Making AI Work for Britain, published this week. The argument at its center will be familiar to anyone who has worked seriously with Scrum: the failures are rarely technical. They are almost always organizational.

Empiricism is precisely what AI demands

AI systems are probabilistic. Their outputs vary. The same prompt, in a different context, with a different user, can produce a different result. Quality is not a fixed specification to be met; it is a distribution to be managed. If that sounds like a problem requiring transparency, inspection, and adaptation, that is because it is. The empirical heart of Scrum is not a philosophical preference. For AI-driven development, it is a practical necessity.
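To make "a distribution to be managed" concrete, here is a minimal sketch of what sampling that distribution might look like. Everything in it is illustrative: `generate` and `evaluate` are hypothetical stand-ins for whatever model call and acceptance check a team actually uses, and the 0.8 pass threshold is an arbitrary assumption.

```python
import random
import statistics

def generate(prompt: str) -> str:
    """Stand-in for a real model call; outputs vary from run to run."""
    return prompt + " -> " + random.choice(["strong answer", "weak answer"])

def evaluate(output: str) -> float:
    """Stand-in for the team's acceptance check (rubric, tests, review)."""
    return 1.0 if "strong" in output else 0.4

def quality_distribution(prompt: str, n: int = 20) -> dict:
    """Run the same prompt n times and summarize the score distribution."""
    scores = [evaluate(generate(prompt)) for _ in range(n)]
    return {
        "pass_rate": sum(s >= 0.8 for s in scores) / n,  # assumed threshold
        "mean": round(statistics.mean(scores), 2),
        "stdev": round(statistics.stdev(scores), 2),
    }

print(quality_distribution("Summarize the release notes"))
```

The numbers are beside the point. What matters is that inspection happens over repeated runs, and adaptation means changing the prompt, the guardrails, or the threshold and sampling again.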

And yet many organizations are running AI projects on assumptions that contradict everything Scrum stands for. They are treating AI adoption as a program to be delivered, with a fixed scope, a defined timeline, and a success metric set before anyone understands what the technology can actually do. The tools are new. The mistakes are old.

The application of the framework will need to adapt, too

But the problems do not just lie at the feet of organizations. Although the foundations of Scrum are sound (empiricism, iteration, cross-functional teams, and continuous improvement), the specific practices through which those principles are expressed are under pressure to evolve, and our collective experience of how to do that is still immature.

AI is entering product delivery in several distinct ways: as a tool supporting Scrum events themselves, as a core part of the development workflow through code generation, as agents taking on roles that resemble team membership, and as the product being built — a system whose behavior is probabilistic and whose risks cannot be fully specified in advance.

Each of these contexts raises different questions for how Scrum is practiced. How will AI affect backlogs and resource management? What does the “Definition of Done” mean when output quality is probabilistic? How does a Scrum Master facilitate a team that includes AI agents? We do not yet have settled answers to questions such as these. What we do have are the right principles to work through them if we are willing to apply those principles honestly to Scrum itself, not just to the products we build with it.
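By way of illustration only, since these questions remain open: one shape a probabilistic "Definition of Done" might take is a gate on the observed pass rate over repeated evaluation runs, rather than an assertion about any single output. This is a hypothetical sketch; `run_acceptance_check`, the 95% threshold, and the 50-run sample are all assumptions a team would have to set for itself.

```python
import random

def run_acceptance_check() -> bool:
    """Stand-in for one scripted run of the Sprint's acceptance tests;
    a real check would exercise the AI feature end to end."""
    return random.random() < 0.97  # pretend the feature passes ~97% of runs

def meets_definition_of_done(runs: list[bool],
                             required_pass_rate: float = 0.95) -> bool:
    """'Done' as a statistical claim: the observed pass rate over
    repeated runs must clear a team-agreed threshold."""
    return sum(runs) / len(runs) >= required_pass_rate

results = [run_acceptance_check() for _ in range(50)]
print("Meets Definition of Done:", meets_definition_of_done(results))
```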

Where the framework meets the wall

Those are questions for the team to work through. But there is a harder set of questions that sits at the boundary between the Sprint and the organization around it, where AI delivery most often fails. You can run a perfect Sprint and still fall short, not because the team got it wrong, but because the organization hasn't done the work that only it can do. Consider what AI-enabled delivery actually requires from the wider organization: a clear and defensible position on what an acceptable AI output looks like; a governance structure that can handle accountability for probabilistic decisions; a product strategy that distinguishes between AI features that genuinely serve users and those added because the technology was available. None of that comes from the Sprint. All of it shapes what the Sprint can achieve.

In conversations with a variety of agile practitioners, a consistent theme emerged. Teams report that they can experiment with AI tools at the task level, but they hit a wall when they try to embed AI meaningfully into their product. The wall is not technical. It is organizational. Too often leadership hasn't decided what it wants from AI. Governance hasn't caught up with what AI makes possible, or what it makes risky. The organization is asking Scrum Teams to run fast through terrain that no one has mapped.

Three things that need to change

There are three shifts I believe are necessary to move forward.

First, organizations need to become more realistic about what they want from AI, rather than simply consuming whatever AI vendors offer. This is the difference between being a smart buyer and being a passive recipient.

Second, Product Owners need AI literacy that currently sits only with developers. Value judgements about AI features are product decisions, not engineering decisions. If the Product Owner cannot evaluate the trade-offs involved, those decisions get made by the wrong people with the wrong incentives.

Third, retrospectives need to expand their scope. Asking whether the Sprint went well is necessary but not sufficient. The harder question is whether the organization is giving the team what it needs to work responsibly and effectively with AI. In my experience, teams already know the answer. They rarely get the opportunity to say it.

 


My work in Making AI Work for Britain offers more details on what I believe needs to change institutionally, drawing on the lessons of the Government Digital Service, on UK public and private sector experience, and on thirty years of watching organizations succeed and fail at exactly this kind of change. If the arguments in this article resonate, the book is where the full case is made.

Scrum was built on the principle that complex problems require empirical thinking and honest feedback loops. AI doesn't change that principle. What AI is exposing is not a weakness in Scrum's foundations but a set of pressures the framework must now absorb. Meeting those pressures demands the same empirical honesty from practitioners and organizations that Scrum has always demanded of the products they build.

More details and a free open-access download of “Making AI Work for Britain” are available at FutureOfAI.uk.
 

