
AI in Scrum: Value, Validation, and the Human Factor (Q&A - Part 1)

January 15, 2026

In this Q&A episode of the Scrum.org Community Podcast, Eric Naiburg, COO of Scrum.org, is joined by Darrell Fernandes, Executive Advisor at Scrum.org, to explore how AI is showing up in Scrum Teams today—and what it really takes to make it valuable.

Drawing from questions raised during a recent webinar: Managing Your AI Teammate: Turning AI from Experiment to Strategic Partner, they discuss practical ways teams are using AI as a research assistant, DevOps helper, and development aid. They emphasize why Scrum’s iterative mindset is critical for working with AI, especially given how quickly models, capabilities, and limitations evolve.

The conversation tackles common misconceptions about AI replacing people, the importance of validating AI outputs, and why teams should consider writing a “job description” for AI to clearly define expectations, measures of success, and accountability. Eric and Darrell also explore how AI may automate some work while creating entirely new roles and opportunities for professionals.

This is Part 1 of an ongoing conversation focused on helping Scrum Teams thoughtfully integrate AI while staying grounded in empiricism, collaboration, and value delivery.


Key Learnings

  • Why there is no single model for integrating AI into Scrum—and why experimentation matters
  • How Scrum’s inspect-and-adapt mindset applies directly to AI usage
  • Practical examples of AI as a research assistant, DevOps helper, and development tool
  • Why teams must validate AI outputs to manage bias, accuracy, and compliance
  • How defining a job description for AI helps measure effectiveness and value
  • Why AI is better viewed as a teammate or tool, not a replacement for people
  • How AI may eliminate some tasks while creating new roles and opportunities

Links

Webinar - Managing Your AI Teammate: Turning AI from Experiment to Strategic Partner

Whitepaper - The AI Teammate Framework: A Four-Step Framework for Product Teams





Transcript

moderator: 0:00

Welcome to the Scrum.org Community Podcast, the podcast from the home of Scrum. In this podcast, agile experts, including Professional Scrum Trainers and other industry thought leaders, share their stories and experiences. We also explore hot topics in our space with thought-provoking, challenging, energetic discussions. We hope you enjoy this episode.

Eric Naiburg: 0:25

Hi, and welcome to today's podcast. My name is Eric Naiburg, and I'm the Chief Operating Officer here at Scrum.org, and I'm joined by Darrell Fernandes, who's an executive advisor, and he'll introduce himself in a moment. What we're doing today is talking about some questions that we received during our webinar, Managing Your AI Teammate: Turning AI from Experiment to Strategic Partner. The webinar was held back in November of 2025, and throughout the webinar we received a whole lot of questions, and we weren't able to get to all of them. So we thought, hey, why not? Let's talk about them here. Darrell, you want to introduce yourself real quick?

Darrell Fernandes: 1:07

Sure, appreciate it, Eric. Looking forward to this. Darrell Fernandes, been around technology since the late 80s. Recently jumped in here with Scrum.org to look at some AI capabilities and where AI could play, not only in the role of the Scrum Team, but also in product development and within Scrum.org itself. So really looking forward to talking a little bit more about AI as a teammate and how we might think about AI as we go forward.

Eric Naiburg: 1:39

Thanks, Darrell, and thank you for joining the podcast today. So, first question. There are a lot of questions, as we said, and AI is changing so fast. What people understand about it, what people know about it, what people are learning is changing so fast that for some of these questions, the answers today may not even be the answers we would have given just a few weeks ago. But let's give it a shot anyway. The first area that people focused on was AI and Scrum today: models, fears, and a lot of misconceptions as well. So maybe let's first start with the current landscape, looking at what's out there today and what exists. Is there a model that you're seeing for Scrum and AI as a framework together?

Darrell Fernandes: 2:37

So we've seen a lot of teams, a lot of Scrum Teams, a lot of non-Scrum teams, but focusing on Scrum Teams, we've seen a lot of teams use AI in very different ways, whether it's AI contributing as a research arm to the product team or the Scrum Team, or AI as a dev assistant, helping to manage some of the DevOps or even some of the code through something like GitHub's Copilot. There are a lot of different ways that a lot of different Scrum Teams are using AI. So is there a single model that works? I think there are many, and I don't think we're at a point of maturity where any one model is really the answer today. I think there's a way to think about AI as part of the team, which is what we tried to emphasize here. I think there are tools that you can put in place to help the team take maximum advantage of AI, depending on where they're going. But I bring it back to the fundamental of Scrum, which is an iterative process of delivery. In order to iterate on something, you have to understand, to a degree, where you're trying to go. So what value are you trying to obtain from AI? Do you understand that value? And can you question yourself as you go forward? Are you getting that value from AI, or do you need to take a different approach? I think that's the real framework here around how Scrum and AI come together: really leaning into that iterative approach, understanding what your goals are, understanding how you think AI can help, and then being honest with yourself as to whether you're getting that value or not. I don't know, Eric, do you have any different thoughts on that?

Eric Naiburg: 4:23

Yeah, I agree. And I think if we think back to the fundamentals of Scrum, Scrum is really about how we deal with the unknown, knowing that we don't know everything to get started. And AI is changing so quickly that there's not one single framework we would put in place. But using Scrum as a way to build, architect, and learn how we're using AI can be really effective, because we can work in small iterative increments, deliver a little, learn a little, and change as we're going forward—using Scrum as a way to deploy and manage those AI projects rather than thinking of one single way to approach it. So, are there any models that you've seen organizations starting to implement? I know you've worked with lots of different organizations, both formally and informally. Are there things you're starting to see them adopt, especially around Scrum?

Darrell Fernandes: 5:35

What we've seen, and what we've also experimented with within Scrum.org, is that RAG models allow you to improve core results. So if you take a standard model and you apply new capability through RAG overlays, you can improve the quality of answers relative to Scrum practices. We've tested thousands and thousands of queries against different engines, different models if you will, whether it's Microsoft's Copilot, ChatGPT, Claude, or Gemini. We've looked at all of them. We've tested baseline, and we've tested with different types of content in a RAG model. And we believe that if you have certain types of content, and we've got some thoughts on that, you can really improve the results of native large language models, if you will, and we've seen some organizations do that. We've seen some organizations ask for some of that insight. However, I think to Eric's earlier point, so much is changing so rapidly that those thousands of tests can't be done often enough to ensure that you've got the perfect data set. We believe we've got a good one. Could it be better? I'm sure it could. Could you get better answers today than we got the last time we tested? I'm sure you can. Will the core models give you better answers than they did the last time we tested? That's an interesting question, because more data doesn't necessarily mean better answers.
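The RAG pattern Darrell describes — retrieve trusted, curated content first, then ground the model's prompt in it — can be sketched in miniature. This is purely an illustrative toy, not Scrum.org's actual pipeline: the snippet store, the word-overlap scoring, and the prompt format are all hypothetical stand-ins (a real overlay would use embedding-based retrieval against an indexed corpus).

```python
import re

# A tiny "curated content" store, the kind of content a RAG overlay would index.
SNIPPETS = [
    "The Sprint Review is a working session; the Scrum Team should avoid limiting it to a presentation.",
    "The Definition of Done is a formal description of the state of the Increment when it meets the quality measures required for the product.",
    "A Sprint Goal is the single objective for the Sprint, created during Sprint Planning.",
]

def score(query: str, snippet: str) -> int:
    """Naive relevance score: count of shared lowercase words (punctuation stripped)."""
    q_words = set(re.findall(r"[a-z]+", query.lower()))
    s_words = set(re.findall(r"[a-z]+", snippet.lower()))
    return len(q_words & s_words)

def retrieve(query: str) -> str:
    """Return the best-matching snippet from the curated store."""
    return max(SNIPPETS, key=lambda s: score(query, s))

def build_grounded_prompt(query: str) -> str:
    """Prepend retrieved context so the model answers from curated content, not just its training data."""
    context = retrieve(query)
    return f"Context: {context}\n\nQuestion: {query}\n\nAnswer using only the context above."

print(build_grounded_prompt("What is the Sprint Goal?"))
```

The point Darrell makes about testing follows directly from this shape: the quality of the answers depends on what is in the curated store and how retrieval ranks it, which is why the data set itself has to be re-validated as models and content change.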

Eric Naiburg: 7:16

And sometimes more data can mean worse answers, correct. Just like us as humans: the more we know, sometimes we conflate things and confuse things. That happens with the models too, and we as humans have to be aware of that as we're interpreting that data and seeing that information. We get asked all the time, will you add an "Ask Ken Schwaber" robot or bot to the Scrum.org website? For those of you who don't know, Ken is the chairman of Scrum.org and the co-creator of Scrum. The concern we've had with doing that is that we could add a Ken bot, and it would be cool and kind of interesting, but he wouldn't be right 100% of the time, and he would sound just as correct and just as confident when he's wrong as when he's right. And then who's going to be there to interpret and throw the flag and say, hold on, Ken, that's not quite the right answer there?

Darrell Fernandes: 8:19

Yeah, I think that leads really well into the next section here, Eric, which is some of the fears and misconceptions. I don't know that we had a specific question about this, but I'll jump in and say I think one of the fears slash misconceptions is that AI doesn't need to be validated. It's so important, from a bias perspective, from all the pieces that come into AI, to make sure that you're looking at those responses, validating them, ensuring that they make sense, and interrogating them to make sure that they fit within the boundaries you expect them to, whether that's a bias thing or a compliance thing based on an industry standard. There are a lot of reasons to really think through the responses that AI is giving you and ensure you have confidence in promoting them as you go forward.

Eric Naiburg: 9:20

And I think, as we're heading into the holidays and the end of the year (we're recording this in the middle of December), you think about when we're sitting around the holiday table with family and friends and someone says something you might not agree with, or something you think is incomplete, or maybe they're even wrong. You challenge them. We need to do the same thing with these AI models. It's okay to go back in our questioning, in our prompting, and challenge the responses that we get, make sure that we're going deeper and asking more questions, and continue to push. What I've seen quite a bit is people not doing that, people just taking that first response as the answer. We wouldn't do that if we were sitting next to Uncle Joe at the holiday table. Why do we do that with AI, and seem to do it so quickly?

Darrell Fernandes: 10:26

Yeah, and almost lazily sometimes, Eric. I hesitate to use that word, because I don't think people are lazy, but it's a very passive engagement with AI, because there's an assumption of authority at times, I think. I got into an interesting argument with AI about DORA metrics, and I won't go into the details, but you can absolutely push back and challenge and dig and ensure that the answer you get is actually sufficient to the question you were trying to understand. And I think this leads to one of the questions we got, which is around AI as an accessory or a replacement. Based on all this, I think you have to be very careful if you start to head down the path of AI as a replacement. AI needs to be questioned. AI needs to be validated. AI needs to be engaged with, which, to me, says it's much more of an accessory than a replacement. I don't think we're at a point of replacement. I don't know that we'll ever get to a point where AI is a replacement, but it can certainly help fill gaps and push things forward as an accessory, in my opinion, to the team or to the individual at hand. But that's certainly an opinion, and you can do a lot of AI questioning and get different responses and different perspectives, because I think the world is still trying to figure this out in a very robust way.

Eric Naiburg: 12:06

Yeah, and it's really AI as a teammate, AI as a member that we're going to work with, that we're going to collaborate with. It's not necessarily replacing us. Now, is it going to help us do certain things that maybe I normally would have done, but now I'm going to hand off? Absolutely. But is it replacing everything that I do, or everything that somebody does today? Certainly the answer is no. It's going to free me up to do certain things, allow me to think about other things, and, in some ways, think about things that I maybe hadn't thought about or considered. But one of the things I hear people say is, well, people are biased and AI is not. AI has a bias as well, and don't let anybody tell you otherwise. AI is biased based on the data that it has, the questions it's been asked, the way you ask it questions. There's always a bias that plays in, and that's why it is a collaboration; it really is working together. So are there fears? Yes. Are there misconceptions around those fears? Absolutely. Will it replace some roles, some jobs? Possibly. Or will it, at the same time, allow people to become more effective in the jobs that they have?

Darrell Fernandes: 13:45

I think AI is much like any number of technology advances, any number of automation advances, if you want to go back all the way to the Industrial Revolution, right? When you put in things like assembly lines, things got more efficient, and people started to be able to do different things or focus on different things. It will change jobs. It will give people capacity to do different things, and in doing so, new opportunities will emerge, as they have throughout time. I don't think this is unique to AI. I think the one interesting thing here is that the pace at which AI is driving some of that change is probably faster than any change we've seen in the past. So that's certainly an interesting element here; I don't want to minimize it. However, I see AI creating opportunity. We've been working with an organization that's really looking at a role called the Content Curator, and we'll get into why that's important later in the conversation, but that's a whole new role that didn't exist. When you look at how data science is evolving and the need for different capabilities, those are new skills that are going to have to scale differently than they did even two or three years ago. And so new opportunities will emerge, while other capabilities, other tasks if you will, get more automated. So I think it'll be a balance over time. There is a fear, and we've all heard fears around jobs going away. As I said, I don't think that's a new phenomenon with AI; the pace is certainly different, but new opportunities will emerge as well.

Eric Naiburg: 15:35

Something you just said poked at a deep memory of mine, and I think this story is actually interesting, because I think it applies. In the first house that my wife and I moved into after we got married, our next door neighbor worked on an assembly line at an auto manufacturer. He used to always talk about, and he was very proud of this, how he could stop the line. He was empowered to stop the line if something was going wrong, if there was a problem. He worked in the area where the doors were assembled and put on the cars, and yes, there was a machine putting that door on the car, but he was still overseeing it. He was still able to engage and interact with it, and the technology was his teammate, but he could still go and push that button and, as he said, stop the line. We're still in that same place today. The technology is different, much more intelligent than it ever was before, but there's still that human interaction occurring, and we're still looking at all of this as a teammate in where we are working together. So, last part on this one, to touch on maybe a little bit deeper: AI as a teammate, and how do you see that evolving? Does AI take on responsibilities of the team? Is it really more of a "we're paired together all the time"? How do you see that coming about?

Darrell Fernandes: 17:21

Yeah, I think that's such an interesting question. We see so many teams using AI in such unique and different ways that I think it's almost impossible to pinpoint. What I would say is, and we talk about this in the webinar and in the white paper, writing a job description for AI is so important, because if you really want to evaluate how AI is contributing to the team, a big chunk of that is: what did you expect AI to do? By writing that quote-unquote job description, you're laying out what your expectations of AI are, so that you can actually evaluate it. If you're using AI as a research assistant for a Product Owner or product manager, that's one unique way you can use AI, and you would use it and measure it to see if you're driving more efficacy for the Product Owner. Are you getting better deliverables because you're doing that? Is the team getting better insights? That's really important. If you're using AI as a dev assistant, a copilot in your repository and in your code development, that's a very different set of metrics and a very different job description. In order to understand how AI is helping you: are you producing better code? Are you producing more maintainable code? Is your code more efficient to run in a cloud environment? These are all real questions that you would explore, and against which you can measure the efficacy of AI on the team; however, very different from a research assistant for a Product Owner. So it really depends on how you're setting up AI for success. And there's another element when we think about Scrum, right? Scrum is all about the team interaction, delivering as a team, and how the team works together. AI, if it's going to be part of that team, has to be accepted by that team. The team has to see value in AI.
The team has to believe that AI is helping. So understanding how the team is engaging, and how the team is deriving value, is a really important element to think through as well, and I don't want to lose sight of that as you think about AI as the teammate.
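One way to picture the "job description for AI" Darrell recommends is as a small structured record the team can inspect each Sprint. This is a hypothetical sketch, not a format from the webinar or the white paper; every field name and measure below is illustrative.

```python
# Hypothetical "AI job description": expectations, measures of success, and
# accountability written down so the team can evaluate its AI teammate.
# All field names and example measures are illustrative assumptions.
AI_JOB_DESCRIPTION = {
    "role": "Research assistant for the Product Owner",
    "responsibilities": [
        "Summarize market and user research relevant to the Product Goal",
        "Draft first-pass competitive analyses for refinement sessions",
    ],
    "measures_of_success": [
        "Product Owner reports faster, better-informed backlog decisions",
        "Summaries are validated against sources with few corrections needed",
    ],
    "accountability": "A human team member reviews and owns every output",
}

def evaluation_checklist(jd: dict) -> list:
    """Turn the job description's measures into Sprint Retrospective questions."""
    return [f"Did the AI meet: {m}?" for m in jd["measures_of_success"]]

for question in evaluation_checklist(AI_JOB_DESCRIPTION):
    print(question)
```

The design choice here mirrors the point in the conversation: a dev-assistant AI would get a different record entirely (code maintainability, cloud-run efficiency), so the evaluation questions fall out of whichever job description the team actually wrote.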

Eric Naiburg: 19:34

Well, thank you, Darrell. I think we've covered a lot in this short period of time, and I don't want to go too deep, because I think we have opportunities for more, and we'll have more episodes where we go deeper and answer other questions as well. So thank you very much, and hopefully we'll see you all for another episode.

