Adapting Agile for the Age of AI: A Conversation with Dr. Alan Brown
In this episode of the Scrum.org Community Podcast, host Dave West is joined by Dr. Alan Brown—professor, executive advisor, and expert in AI and digital technologies—to explore how Agile teams can adapt and thrive in a rapidly evolving AI landscape.
Dr. Brown discusses the current state of AI adoption across industries and the need for thoughtful integration of AI into knowledge work. From model and prompt engineering to context and governance, he outlines the four key engineering disciplines that Scrum teams must understand. The conversation also highlights five critical pressure points organizations face—complexity, change management, alignment, value delivery, and people—and how Scrum can help teams respond to these challenges.
Rather than replacing Agile, Dr. Brown argues, AI augments it—opening up new opportunities for delivering value, improving decision-making, and accelerating learning.
Listen now to learn how to harness AI responsibly while staying true to Agile principles.
Transcript
Lindsay Velecina 0:00
Welcome to the Scrum.org Community Podcast, a podcast from the home of Scrum. In this podcast we feature Professional Scrum Trainers and other Scrum practitioners sharing their stories and experiences to help you learn from the experience of others. We hope you enjoy this episode.
Dave West 0:19
Hello and welcome to the Scrum.org Community Podcast. I'm your host, Dave West, CEO here at Scrum.org. In today's podcast we're super lucky to be talking to Dr. Alan Brown. Dr. Brown is professor of digital economy at Exeter University, but most of his time he spends helping organizations: he's an executive advisor on technology, and particularly, at the moment, on the impact of AI and digital technologies in large organizations and how large organizations can adopt these ideas. I've actually known Alan for well over 20 years. I worked with him at Rational Software in the early 2000s, and during that time I was always impressed by how Alan takes complex, confusing, sometimes contradictory ideas and shapes them into a simple and elegant solution. He's a really good describer of ideas and a thought leader. So I'm really pleased that he's on the podcast, and with his involvement in AI over the last eighteen months or so, I think he can share some amazing insights with you all. So welcome to the podcast, Alan.
Dr. Alan Brown 1:37
Thank you so much, Dave, and thank you for that very generous introduction.
Dave West 1:43
It's all true. Well, it'll all be true next time we're in a bar, anyway, so first round's on you. No, but I actually do want to share with our audience some of this knowledge that you and I have been talking about over the last, I guess, two years really, which is around AI and its adoption. So let's start with the really big question: what is the state of AI adoption, commercial AI adoption, really, at the moment?
Dr. Alan Brown 2:13
Well, as always, that's a trickier question than it sounds. The simple questions are often the most difficult. And the reason it's tricky is that the state of AI is really quite confused and confusing, because of the differences in how people think about what AI is, where they came from in terms of their history and background, what their current view is of the systems that they're delivering and managing, and how they think about the disruptive nature of today's technologies, whether they think they're evolutionary or revolutionary. So with that caveat, I think what you see in most organizations is a mixture. You see, of course, some state-of-the-art prototypes, MVPs, pilots, whatever phraseology they're using, where they're doing some quite sophisticated things depending on the domain. If it's financial services, they're doing some quite advanced things in wealth management and predictive analysis of where the pound is going to be in the next six months, or whatever those things are. If it's in health, then we know they're doing incredible things: looking at diagnosis, assisting us in understanding images, what they mean, how to diagnose disease, what future plans and care plans might look like, and so on for every domain. But as well as those advanced, I would say fairly narrow, solutions, we're seeing quite a big shift in the broad base of organizations, in how they generally operate, to have a bit more intelligence in the way in which they're making decisions. They're using data in a little more of an evidential way, thinking of it as a base for making decisions, as opposed to hoping to make up for the gaps in the data that exists; and also in the skills and the communication of people who are now more sophisticated in the way in which they do analytics, how they look at regression in data, how they begin to use data in more sophisticated algorithms. That might be as simple as some operational things on the back end of the systems, so they're able to predict machine downtime or look at the robustness of their infrastructure. It might be to do with customer service: how they look at what kinds of customer service requests are coming in, how they take some of the common requests and allow those to be automated in a more effective way than perhaps we've done in the past. It could be to do with things like HR, contracts, management, all those heavy-duty, middle-of-the-road kinds of things that keep the organization going. There's a lot going on about how we look at the skills base, how we upskill people, how we recruit, those sorts of things, and they're using more intelligent technologies in order to do that. So all of these are interesting, but it depends a little bit on whether you're taking a very narrow slice of AI, where you're looking at some very sophisticated algorithms supported by advanced compute power and some data streams that are fairly robust in their quality, or whether you're taking a much broader view. So I hate for it to be, you know, a big "it depends" answer as we start off, but it depends, because this is such a big, sweeping wave of change that what you see right now depends on where you came from and where you sit today.
Dave West 5:45
That's interesting. So let's narrow it down a little bit. Before we talk about Scrum teams, let's talk about knowledge workers, because I use AI every day, significantly. Like, we've just been building a new contract for a new product, doing some contract work, and obviously we have a GC, a general counsel. What I use AI for is to improve the interactions with my GC, because she's on an hourly rate: reducing the FUD and getting her to focus her time where she needs to. So I take a general contract, I get the AI to review it, highlight the issues and the questions I've got to ask about the contract, and then I fill that in accordingly. That gives me a much more complete artifact to give to my GC, my general counsel, and she then focuses on the areas where I need her intelligence. So as a knowledge worker, and I consider myself to be knowledge working in that way, right, I'm using information to work, to use Drucker's kind of definition, I'm assuming that all knowledge workers are taking advantage of that. But then I talk to people in companies and hear "we're not allowed to do that inside", you know, a large pharmaceutical company, a large bank, a large whatever. Is that what you're seeing as well, that this application to knowledge work is kind of lumpy?
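To make the workflow Dave describes concrete, here is a minimal sketch of that pre-review step, assuming the OpenAI Python SDK; the model name, prompt wording, and pre_review helper are invented for illustration, not anything the speakers actually use.

```python
# A hedged sketch of Dave's contract pre-review workflow. Assumes the
# OpenAI Python SDK (pip install openai); prompt and model are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REVIEW_PROMPT = """You are helping a non-lawyer prepare a contract for
review by general counsel. For the contract below:
1. Flag clauses that look unusual, risky, or ambiguous.
2. For each flag, list the specific question counsel should be asked.

Contract:
{contract}"""

def pre_review(contract_text: str) -> str:
    """Return flagged clauses plus the questions to take to counsel."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model works
        messages=[{"role": "user",
                   "content": REVIEW_PROMPT.format(contract=contract_text)}],
    )
    return response.choices[0].message.content
```

The point of the sketch is the division of labor: the output is a starting point for the human review, not legal advice, so counsel's hourly time goes to the clauses that genuinely need her judgment.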
Dr. Alan Brown 7:34
Yeah, so let's start more broadly with that knowledge work. You're obviously specifically focusing on generative AI and some of those tools, yes?
Dave West 7:43
That's exactly what I'm doing, because it's available, and the interface allows me to generally access this incredible compute power. And obviously the internet is full of really good contracts, it turns out, or really bad ones, depending on your point of view.
Dr. Alan Brown 8:02
So those sorts of tools, those very broad generative AI tools that allow you to consume a lot of information and then do things with it: it's what I call the four S's right now. We're using them for searching more and more; they're replacing searches with a more intelligent form of search. So the classic "is Google going to be replaced by Claude or Gemini" is a big conversation right now because of what is possible. The second is, they of course can do summarization very well. So large, complex things can be summarized relatively straightforwardly with different criteria, so that you can get different views of very complex documents. The third is that they can create suggestions, particularly for idea creation, ideation, where you want to say, you know, how should I critique this document? How would a junior programmer look at this? How would an experienced tester look at this, depending on what the content is? And the final S is snippets: it gives us some material that we can reuse. And this, of course, has caused perhaps the most controversy: what material does it reuse, and do we claim ownership or IP of that, and how much of that is valid to use? If it was our prompt but they generated the information, does that make it our material? Where did it come from? Those four areas, the suggestions, searches, summaries and snippets: like you, I'm doing those all the time, and almost everybody I know is doing them all the time, whether they are allowed to do it at work or not. And this is where the second big question comes up. If you take a general, large-language-model-based gen AI tool like ChatGPT or Claude or something, and you type something in and you get a plausible answer, even if the answer is correct in some form, however you define correct, most organizations say: but you still can't use it, because we don't know where it came from. We don't know the liability of that answer. We don't know how that answer is being affected by the data the tool was trained on. We don't understand so much about the IP issues around it. We don't know where that data was stored or where it was transmitted, or the sovereignty over the data. You can give a dozen reasons why, even if what comes out of it is useful, we still feel under pressure. And the classic example, I'm working a lot with government people, for example, and the classic example, which I'll call hypothetical just to be safe, is summarizing case notes for somebody who's a job applicant or requires social services of some form. Case notes can be very complicated. This is a complicated family, they've been dealing with the government for many months, we've got lots of information from different interactions, and they're coming for an interview in ten minutes, and I'm the caseworker, and I haven't read the case notes. What should I do? Do I throw it through a ChatGPT or a Claude and get a summary of the things I should ask, or the red flags or issues I should look at? Or should I just not use those tools and talk to them as a human being, but without knowing their case in any detail? And you could argue both are a bad situation. "Go away and read the case notes and prepare properly": that's not going to happen. So we're left in these very difficult situations, where perhaps the organization says no, no, do not put that through ChatGPT, you don't know what you're getting; on the other hand, will you get a better result than not doing it? We're left in very compromised situations.
So most organizations, and obviously I work partly in an education organization, I'm working for some governments, I'm working for some large corporations, all of them have their sensitivities, and they are providing some sort of guardrails or guidance or governance around these sorts of questions. But I have to say it's being broken left and right, because we're human beings and we're trying to get on with the job at hand, and sometimes people are going to make decisions that go against policy. And that's part of the challenge. Education is the obvious example, where students, of course, are using AI and ChatGPT all the time. And some instructors and lecturers and so on are saying, yes, please do, and then challenge us to show us the best way to use it. And others are saying, oh my god, don't do that, you're completely destroying any educational and pedagogical consistency in what we do.
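The "summaries" S is the easiest of the four to picture in code. A minimal sketch of summarizing the same document under different criteria, again assuming the OpenAI Python SDK; the personas and model name are illustrative.

```python
# A sketch of one of Dr. Brown's four S's: summarization "with different
# criteria, so that you can get different views of very complex documents."
from openai import OpenAI

client = OpenAI()

def summarize_as(document: str, persona: str) -> str:
    """Summarize the same document through a stated lens (persona)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[
            {"role": "system",
             "content": f"Summarize the user's document as {persona} would, "
                        "flagging what that reader would care about most."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content

# Different criteria, different views of the same document:
#   summarize_as(doc, "a junior programmer")
#   summarize_as(doc, "an experienced tester")
```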
Dave West 12:51
Can I just lean in on that caseworker example? So imagine that situation. You're like, oh my gosh, these people are coming in, and I've got this huge, I'm picturing this huge brown folder, you know, with Sellotape, with elastic bands around it, and Post-it notes stuck to it, or the digital equivalent. Before ChatGPT and technology like that, what you would do is say, "Hey, Bob, do you know anything about Mrs. So-and-so?" And Bob would say, "Yeah, I actually do," and give you his view. And then you would go, "Okay, so what do you think? Okay, I'll focus on those three things." That is what we're doing; it's just not called Bob anymore, it's called ChatGPT. So do we not consider, I've just had this very interesting argument with our lawyer about this. We obviously serve content from our trainer community, and our terms of use, if you write a blog or whatever, are very clear, but because they were written before ChatGPT and the like, they're not explicit. They say: this is your stuff, you're liable for it, it's a classic digital license. And she's like, well, you've got to put it in explicitly, saying you can't use AI on it. I'm like, well, hang on a minute, everybody uses Word-type AI kind of stuff now. I mean, Grammarly. I'm dyslexic, Alan; Grammarly has changed my life. It makes me a better writer, but that's AI.
Dr. Alan Brown 14:44
I think this is the kind of conundrum we're in right now. So let me take a similar example to yours. In the education space, it's common practice, and has been for many years, that large parts of published papers are written by interns and PhD students and junior researchers. The senior researcher would say to a junior researcher: there's a big issue around this particular thing, I want you to go away and research it for the next week, write me three pages and bring a summary. And then when it's come back three days later, they'd say, oh, that was great, now rewrite it again, I want a new version. And after four versions of that, they say, right, now it's a good summary. Now, is there any difference if I use ChatGPT? Those sorts of scenarios are exposing, I think, a lot of challenges and weaknesses. So, without being too controversial: the example you used, where you went and said, "Hey, Bob, do you know anything about this?" or "Hey, Sue, do you know about Bob?" or whatever, immediately you've got to deal with both implicit and explicit bias. "Yeah, I do know about Bob. He swore at me last time I met him, so he's a bad guy, obviously, and you need to not let him have his loan." Or, "he comes from a different community than me, so I look at that community in a different way than I would my own." Or that person's older than me, or younger than me, or more handsome than
Dave West 16:14
me, or or a Liverpool fan, or a man united fan, or they, whatever, or whatever.
Dr. Alan Brown 16:19
Yeah, maybe all of those things. One of the things I wrote recently was about "don't blame AI". A lot of these things were always there, and AI has forced them to the surface, exactly, and that's been a huge issue. So again, just as an example, the publishing industry, of course, is in a little bit of chaos. In my humble view, a lot of that was caused by themselves: the way they've been approaching knowledge over the last few years, and the way they've been gatekeeping knowledge. Now that's been broken apart, they're calling foul. Some of it I think they're right to, and some of it, I think they're being a little bit, you know, the pot calling the kettle black. So I think we've got to be very careful here when we start to look at AI and some of the issues and implications, and that's what's causing a lot of the confusion right now. Things that we could have done in the past, but that were just too difficult from a financial point of view, or in gathering the data, or in bringing everybody together and running the analysis quickly enough, we can now do in milliseconds or seconds or minutes, and therefore we're able to do some things that really we never quite got to the bottom of: some things to do with copyright, some things to do with what happens if I summarize somebody else's work, what happens if I use data to automate decision making, those sorts of things.
Dave West 17:49
And isn't that the most interesting thing? So, a great example. I was looking at a particular document, and I did one of the S's, by the way, I'm going to use those, I think that was really useful; you see, I always learn something. I summarized it, basically. And then as I read the summary, I thought to myself, this seems a little biased. And I don't really think about that very often, even though I really should, and I'm exposed to it continuously. So I said, okay, let's make that bias explicit. And then I said, okay, summarize that with a different bias. And then: summarize that with a different bias, then compare and show me the differences. It was actually super, super interesting, because I'd never have time to do that, I'd never even think about doing that. You know, I was just drinking me tea, having me cheese roll, and I thought, I'll experiment with this. Can I make it more explicit, and can I get that perspective?
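Dave's experiment reads as a small prompt chain: several summaries with explicit, contrasting biases, then a comparison pass. A hedged sketch under the same assumptions as above; the bias labels are examples, not a fixed taxonomy.

```python
# A sketch of the bias experiment: summarize with stated biases, then diff.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # illustrative

def ask(prompt: str) -> str:
    """One round-trip to the chat model."""
    response = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

def bias_compare(document: str, biases: list[str]) -> str:
    """Produce one summary per stated bias, then compare the results."""
    summaries = {
        b: ask(f"Summarize the following document with an explicitly {b} "
               f"bias. Make the bias visible, not subtle.\n\n{document}")
        for b in biases
    }
    labeled = "\n\n".join(f"--- {b} summary ---\n{s}"
                          for b, s in summaries.items())
    return ask("Compare these summaries of the same document and show "
               "exactly where the stated biases changed emphasis, framing, "
               "or omissions:\n\n" + labeled)

# e.g. bias_compare(doc, ["optimistic", "skeptical"])
```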
Dr. Alan Brown 19:00
Well, I think we see that quite a lot. This is where I think there are four big techniques that developers and some of their development managers are beginning to see as essential. The first one is to do with model management. What are those models? How do we understand the models themselves? You know, the differences between Claude and Grok and ChatGPT: sometimes I use one, sometimes I use another, and people begin to see that they're different in how they work, how they operate, and then how you manage them in a distributed way, where they sit, where the data resides, what sovereignty means over the data. So there's a whole model management issue, and then, particularly if you're going to create your own models and bring them to your own desktop or into your own organization, how does that work, and so on. So there's a model management skill, and that affects the results and the answers that you get: if you can constrain or control all the ingested information in some way, you'll get different answers, of course. The second one is about prompt engineering. So model management, or model engineering, and then prompt engineering. And as you know, prompt engineering has a huge impact depending on how you ask the question. I think one of the things we're beginning to see over the last 12 to 18 months is: how do we ask better questions, and how do the questions affect the answers? And again, we always knew that was the case. But going to my intern example, if you've got a room full of really smart interns, they're young, they're energetic, they don't sleep very much, and they want to please you, so they'll do what they think you want. If you say, "do a critique of this; by the way, I think it's a great paper, it suggests some wonderful things," I'm going to do a very positive critique, because I want to please you. If you say, "do a critique of this; by the way, there's some really iffy things going on in this paper, and I'm not quite sure how you'd actually practically implement it," then I'm going to give you a fairly negative review. So the context becomes really key. So we've got model engineering, prompt engineering, and then the third one is context engineering. How do we understand that context? How do we describe it? How do we make it more explicit: where the data came from, what biases are in it, how it was influenced, how we would evolve that over time? People are starting to talk a lot about context engineering and what that means. And then the final one, the fourth big idea, is about governance, and, just to be consistent, governance engineering. How do we look at the governance of those systems so that, over time, they can be challenged when they need to be, they can be disrupted if we think there are problems, they can be tuned, they can be managed in some way, so that we can say: don't answer things like this, and make sure that if you answer these questions, you don't reveal these things. There are all sorts of governance issues around these.
And I'm finding those four skills, if you want to call them that, are becoming much more critical, and some people are good at some of them. Particularly for developers, because, you know, I'm sure we'll get into talking more at the development level: getting down to what that model management means, how you use those models effectively, how you define the storage mechanisms, the performance, the distribution of that data, the way it's being replicated, how it fits in with existing infrastructure. That becomes an incredibly powerful aspect of the architecting of large-scale distributed systems that people like you and I didn't get involved in in any substantial way, and nowadays it's becoming much more critical. And then, you know, the other three areas of engineering too.
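The four disciplines are easier to see side by side in code. The following is a deliberately small sketch of how they might meet in one call path; the Context dataclass, the blocked-topic list, and the policy wording are invented for illustration, and real model, context, and governance management involve far more than one function.

```python
# A hedged sketch: model, prompt, context, and governance engineering in
# one call path. All names and policies here are illustrative.
from dataclasses import dataclass
from openai import OpenAI

client = OpenAI()

@dataclass
class Context:
    """Context engineering: what the model sees, and where it came from."""
    content: str       # the material the model may draw on
    source: str        # provenance: who produced this data
    known_biases: str  # what we already know is skewed in it

# Governance engineering: topics this assistant must not answer on.
BLOCKED_TOPICS = ["individual salaries", "unreleased financials"]

def answer(question: str, model: str, ctx: Context) -> str:
    # Governance: refuse before the model is ever called.
    if any(topic in question.lower() for topic in BLOCKED_TOPICS):
        return "Refused by policy: this topic is out of scope."
    # Prompt engineering: the framing shapes the answer that comes back.
    system = (
        "Answer only from the supplied context; say 'not in context' if "
        f"the answer is not there.\nSource: {ctx.source}\n"
        f"Known biases in this material: {ctx.known_biases}"
    )
    # Model engineering: the model is a per-task parameter, not a constant.
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": f"{ctx.content}\n\nQ: {question}"},
        ],
    )
    return response.choices[0].message.content
```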
Dave West 22:50
I think that's super interesting, that you call those things out. So I've got an admission. When I was an analyst and then a research director at Forrester Research, it came to my attention that my biggest skill was that I could use Google better than my clients. Because you'd get these advisories, and you'd go from one to the other, I could do 10 or 12 in a day, and there was just no way I'd have any prep time, because I'd literally be thrown from one thing to another, talking about, I don't know, how do I introduce Agile, to what's a good user story, to what tool should I buy, to whatever. And I learned very quickly that I was just really good at forming Google searches to get the right information, so I could then talk for 30 minutes in an intelligent way, and the customer would go, "that was a great inquiry," and give me a five or whatever, and that's all I needed for my bonus, right? I was just good at that. And the reason why I was good at that was because I had a lot of context. So it's funny that you bring up not just prompt engineering, because I'm relating to it: context is everything, and I did have context. Now, what I find with the AI tools is that context engineering is a key skill that we need to bring, which brings us, actually, to start talking about Scrum teams. Because, you know, I could talk for days to you about this stuff, Alan; actually, we have talked for days about it, so that's probably not good, and I'm not sure if the world's any better because of it. But anyway, a lot of the people listening to this podcast are in a Scrum team. So what is the impact for them, from your perspective? Now, I know you spend most of your time talking to executives, and obviously the Scrum teams are ultimately being paid and working on programs that are funded by executives, so I'm really interested, considering that executive perspective, in what you think the impact is. We've obviously heard of 6,000 people laid off at Microsoft; Gallup says we've got a 12% increase in the use of AI in the last three months, and the number of use cases has doubled from two and a half to five. I don't know what that really means, but we're seeing this start to impact Scrum teams, at least in terms of the numbers of people in them. So what do you think the impact is going to be for Scrum teams?
Dr. Alan Brown 25:35
If you don't mind, let me take two different shots at that. The first one is more from the managing of Scrum teams and the organization around those Scrum teams, and then we'll take one from inside the Scrum team. So from the outside of the Scrum team: I've spent quite a bit of time over the last couple of months thinking about that, based on conversations you and I have had and other things. You know, for some of us this is not our first rodeo; we've been around this block a few times, and I was trying to think about what we have learned from this, for those people who are now developing this next generation of systems and solutions in Scrum teams and other forms. And I think there are five big ideas that I've taken away. The first one is that complexity and technical debt kill progress. They always have, and it's beginning to show its head even more right now. If we can deal with complexity and technical debt in a way that allows us to move forward quickly, we gain huge advantage. This has been an Agile and Scrum message for a long time, with different ways of dealing with it, but it's been right at the forefront. In fact, you could argue, from the days of RUP and other things, that Agile was a huge reaction to us not placing that as, if not the highest priority, a particular priority. So that's the first: how do we keep complexity and technical debt down? The second, which is related to that for me, and again will resonate from the Agile perspective: all management is change management. I think we lost our way some time ago in thinking that you manage from one stable state to another. Of course you don't; you manage within a constantly changing environment, and the more you recognize that going in, and build skills and approaches for constant change management, the better you are. And I still don't think many organizations have quite gotten there yet, that all management is change management. The third is that, and I say this with all due respect, organizational alignment is incredibly fragile. It's great having Scrum teams being super effective and development moving quickly in certain parts of the organization, but if you can't align that with other pieces of the organization, the energy that's created in that alignment and delivery of new systems gets lost very quickly. And that causes huge tension; it causes people to leave; it causes incredible culture problems. We saw that consistently over the last few years. So trying to understand that organizational alignment, and recognizing that you're not immune from it, whether you're in AI, whether you're in a Scrum team, wherever you are, is really critical. The fourth one is a really key one for Agile that I think leads into where we are today with AI adoption, which is: stay focused on value delivery. In the AI world we're getting lost, just as we did in the large-scale software development world, in this idea of delivering features that sit outside the idea of value delivery. And that's causing huge headaches: we build because it's a neat technology, because it's new, because whatever, rather than asking where's the value, who's getting the value, how is that value distributed, how do we measure that value, how do we ensure that value consistently moves forward? Again, for those in Agile worlds and managing Agile teams, it's a very recognizable theme.
It's still not widely understood what value means and how you deliver value, particularly in an AI world. And the final one, which again for many is obvious: people break before the technology. We see this time and time again, and we've seen it in most organizations. When things start to collapse, you sense it in the people first, before you can measure it in the technology, in the code, in the systems. For me, those five things from the outside of a Scrum team are really critical, and they're reflected, I think, directly inside the Scrum teams. So your comments about, well, with all of this AI, won't code be generated and everybody's going to lose their job and so on: those five things have got almost nothing to do with AI automating away your job. Of course we're going to use more AI to generate some bits of code in the way I described, you know, the four S's. That's as true for code development as it is for developing a marketing plan or you talking to your general counsel. So of course we're going to see changes inside development. But those things I described are, I think, the critical pressure points. From the outside, if I was a manager of teams delivering systems, that's where I'd focus; and if I was inside, that's where I'd focus. Different perspectives, but I would view those as the key pressure points for me.
Dave West 31:02
Wow, that's awesome. As I said earlier, your ability to synthesize something incredibly messy and create a summary: you should be an AI tool, certainly, which is awesome. The problem with those five things, though, Alan, is that we've been trying to sort them out for at least the 30 years you and I have been actively trying to improve the world, maybe now approaching 40 years, but we don't like to say that because it makes us feel really, really old. So let's just say 30. You know, the complexity example and the technical debt: the reality is that organizations want to be agile, but they have so many systems that add no value. You know, we don't like to talk about Tesla now for lots of reasons, but ultimately one of the greatest things Tesla ever did was deciding what was core and what was context in their products. And they didn't invest any money in the context, hence the reason why people moan about their knobs and stuff, they're like, this is so cheap. But core and context, right? Most organizations, you say, well, why don't you buy a CRM? "Oh, we can't buy a CRM. We're very unique, we're a snowflake, because of the hundreds of years of invested differential." You know, management is change management; change management is hard.
Dr. Alan Brown 32:39
So the point I'm making, though, is not that we now have the answers to these. The point is that these key challenges are not going away with AI. They're not going to be solved by some neat algorithm that will suddenly automate away the need for human intuition and experience, and a blend of practical reality with a dose of "this has got to be done, so you need some discipline here, just get on with it." We know these are the issues, and we know large parts of what the challenges are. It's a bit like people's health issues or weight issues: we kind of know why people put on weight, and we kind of know why people are unfit; the challenge is putting that into the context of the situations they find themselves in, with human behavior, with the temptations all around. I think in a software development and delivery situation we see very similar things, and the context evolves and allows us to push in certain areas that we were never able to before. So the fact that we now have some drugs for helping people to lose weight is very helpful, perhaps, but it just doesn't take away some of the key issues about people being concerned about what they actually eat, where it comes from, the effect it has on their body, how they decide to live their lives in order to be effective and be able to exercise efficiently. And I think it's similar with software. We're able to generate large parts of systems now, automatically. But that doesn't mean we understand where that fits into the change management process. That doesn't mean we understand the relationship of that to technical debt, and how we evolve systems quickly over time that are deployed across thousands and thousands of situations where people are relying on that software every day. It doesn't mean that we understand the value aspects of why each feature is delivered, and the cost to an organization of fielding that new feature might overwhelm the value that they receive from it. All of these things we've seen in the last 30 years; you and I have worked in this, and I'm assuming before then too. So my point is: focusing on those challenges, asking those questions, building skills in those areas, in the context of the systems, the technologies, the infrastructure, the languages that are being used today, will keep you in good stead. You won't be automated away. You won't be driven out of the organization for a lack of contribution. You'll actually be one of the people who says: this is how these new ideas lead to some substantial difference in the way in which we compete, build systems, create some stability and responsibility in our systems, and deliver better value to our clients.
Dave West 35:22
I think that is the key. One of the biggest reasons why I do Scrum, and the reason why I love Scrum, isn't actually about delivering value, though I love the fact we do; it's about the empowerment of teams. I have seen so many organizations where you've got these amazing people doing work but not delivering value, not having impact. They know how to fix everything; they know how to reduce complexity and decrease debt. If you give them the right transparency and you empower them to do it, I think they can do it. And it all harks back to my mum working on the cigarette counter at Sainsbury's. They delivered the cigarettes at four o'clock on a Friday. That was the stupidest time to deliver the cigarettes, right? Because, one, you've got the National Lottery, you've got people getting their pay, and they've got thousands of pounds' worth of cigarettes arriving. And she knew how to fix it: just change the time of the delivery. She knew, but she wasn't allowed to do it, and they lost cigarettes to theft. So, the one thing about those five things: I think AI could give us the ability to make them more transparent, you know, the fact that I now have tools that can effectively summarize things, can provide me snippets, can help me make choices by presenting information in different ways, by making things very visible for me.
Dr. Alan Brown 37:09
Well, I think what we've traditionally done, from the management point of view, is manage by looking in the rear-view mirror. We've said next quarter is going to look a lot like last quarter or last year, or what happens in this region will look like that region, or this new product will have a lot of the features of the last product and a similar architecture. So a lot of what we did was to say there's a certain consistency in what we're doing, and we can deal with the incremental changes; and the data that we had, typically based on past experience, would indicate what would come next. The question that arises is: so what happens if the future looks nothing like the past? What happens if what's in front of you isn't what's behind you? It's like, you know, if you're driving a car and the road in front of you is very similar to the road behind you, you can more or less just look in the rear-view mirror and you'll know what to do. But what happens if it isn't? I think a lot of what we're learning is that things are changing so fast we can't rely on the next bit of road looking like the last bit of road, and that means we need some new techniques. We need data that's coming much more quickly from what's in front of us. We need decision making to be very much faster. We need to not be inhibited by what we've done in the past, so that we're learning new things and new ways of dealing with what's coming at us, rather than just relying on the things we used to use in the past. All of these, I think, are ideas we've sort of understood; the urgency and the critical nature of them is right in front of us. You know, it's Bezos's "day one": think of every day as day one, there is no day two. If you think of it that way, you work in a different way, and in larger, more established organizations that's hard. That's really hard.
Dave West 39:07
It is interesting, that analogy about driving in new terrain. You have to build vehicles that are more able to deal with failure, meaning when you go off the road, you need to have that decision loop in place so that you can effectively make quicker decisions; you may have to slow down to go further. You know, that analogy, that metaphor, analogy, I never know the difference really, is so powerful, Alan, because that's exactly what Agile is, but the organizations they operate in aren't. So you've got these teams that are designed for this, and AI just accelerates their ability to do it, but you've got an organization around them that can't manage it. Like the one thing that you said: organizational alignment is fragile. That is a key problem with Agile teams, but it's actually something they're designed to fix, and they're not allowed to, because the epic they're working on is here, and you've got this system around them.
Dr. Alan Brown 40:17
Well, I think in the organizations I've worked in where I've felt like we're moving really well, we're getting things done, we're delivering new solutions, the organization's joined up, that alignment has been created, sometimes by force of will, sometimes just by happenstance. But whatever it is, it's created a sort of integration and a connection between the different parts of the organization where you really feel like you're surfing a wave moving forward. And then, for some reason, it starts to collapse. It could be a personnel change; it could be a different situation where the client situation is different; it could be a financial problem that has to be dealt with. But something starts to collapse that wave, and it collapses on top of you, and you have to start to rebuild again. I think most organizations are like that. It's so hard to maintain, particularly as the organization grows and scales. And that's why one of my big interests is: what does AI at scale really mean? What does it mean to deliver some of these solutions in these complex, embedded environments where we've got existing solutions, existing infrastructure, existing skills, and we've got a large sales team trying to make a quota? All of those things are forcing certain ways of behaving that have to be addressed, either directly or indirectly, and, as you and I have spent a lot of our time doing, trying to overcome some of those challenges and barriers: sometimes bullying our way through, sometimes going round, sometimes ignoring them and just going on regardless. You've got to find a way.
Dave West 41:56
You have to find a way. So we're coming to the end of our podcast, Alan; as I said, I could talk to you all night. We've got Scrum teams, middle managers, Scrum Masters, Product Owners, and some executives listening to this podcast. What would be the last thing that they should hear from you? What would be the one thing that they should take away?
Dr. Alan Brown 42:22
Well, I think for most of us, we need to view AI as an opportunity to take some of the things we've been learning about, iterative, evolutionary development, moving at pace, delivering systems that change rapidly, creating value for organizations, and celebrate that they are right at the heart of this AI change. They're not being subsumed by it. They're not being replaced by it. They're actually being augmented by it, and that will be a really positive, powerful way forward for us as individuals and as teams.
Dave West 43:01
Wow. Mic drop. Dr. Alan Brown, thank you for that, and thank you for your time. I really appreciate you coming on your first day back from holiday; it seems there's a lot of energy there that you managed to save up on your holiday, which is awesome. So I really appreciate you taking the time to talk to our audience today. (My pleasure. Thanks, Dave.) And thank you for listening to today's Scrum.org Community Podcast. We heard Dr. Alan Brown talk about AI and its impact on organizations and teams. He talked about the four S's, I'll be using those. He talked about the importance of these engineering disciplines of model management, prompt management, context management and governance management. He then left us with the fact that ultimately AI doesn't change the challenges that we have, and he gave us those five things: complexity and technical debt, all management is change management, organizational alignment is very fragile (oh my gosh, that's so true), people break a long time before the technology, and stay focused on the value. Those five things, I mean, awesome; they should be five very large chapters in a very large book, I think. What an awesome talk. And if you liked what you heard, please subscribe, share with friends, and of course come back and listen to some more. I'm lucky enough to have a variety of guests talking about everything in the area of Professional Scrum, product thinking, and of course Agile. Thanks, everybody. Scrum on.
Transcribed by https://otter.ai