
Is AI Actually Making Your Team Better? Q&A Episode 4

April 2, 2026

In this episode, Eric Naiburg and Darrell Fernandes of Scrum.org tackle a pressing question from a recent webinar focused on AI as a teammate: How do you measure the impact of AI on your team?

They explore how AI’s role differs across organizations—from ideation and analysis to stakeholder engagement—and why its productivity gains are still evolving. The conversation highlights the importance of combining qualitative insights with quantitative metrics, including flow-based measures like velocity, to assess real impact.

They also discuss the growing responsibility of hiring managers to define AI roles clearly, set expectations, and ensure AI contributes value without disrupting team dynamics. Plus, they share how AI itself can be leveraged to help define roles, recommend metrics, and improve performance measurement.

 

Transcript

moderator  0:00  
Welcome to the Scrum.org Community Podcast, the podcast from the home of Scrum. In this podcast, agile experts, including Professional Scrum Trainers and other industry thought leaders, share their stories and experiences. We also explore hot topics in our space with thought-provoking, challenging, energetic discussions. We hope you enjoy this episode.

Eric Naiburg  0:25  
Hi, and welcome to today's podcast. My name is Eric Naiburg, and I'm the Chief Operating Officer here at Scrum.org, and I'm joined by Darrell Fernandes, who's an executive advisor, and he'll introduce himself in a moment. What we're doing today is talking about some questions that we received during our webinar, Managing Your AI Teammate: Turning AI from Experiment to Strategic Partner. Throughout the webinar, we received a whole lot of questions, and we weren't able to get to all of them, so we thought, hey, why not? Let's talk about them here. Darrell, you want to introduce yourself real quick?

Darrell Fernandes  1:02  
Sure, appreciate it, Eric. Looking forward to this. Darrell Fernandes, been around technology since the late 80s. Recently jumped in here with Scrum.org to look at some AI capabilities and where AI could play, not only in the role of the Scrum team, but also in product development and within Scrum.org itself. So really looking forward to talking a little bit more about AI as a teammate and how we might think about AI as we go forward.

Eric Naiburg  1:34  
Great. Thanks, Darrell, and welcome back. Thank you again for joining me today. Darrell Fernandes and I are going to take you through some more of the questions that we received during the webinar. We'll talk a little bit about those questions and hopefully answer some of what you were wondering. So the first question kind of goes around the area of metrics, value, productivity and so on when it comes to AI. And really, a lot of people are asking: How do I measure success? How do I measure productivity? You talked in the webinar about hiring AI as a team member. Will I measure its productivity like I do for the rest of my team? Is it possible to measure the productivity of AI?

Darrell Fernandes  2:25  
I think it is, Eric, but much like a lot of things we talked about, it's very unique to an organization and very unique to a role. We talked about hiring AI into your team and writing a job description. The job description really is all about what role you are asking AI to fill. If you're looking at the ideation and analysis piece of the work that you're doing, and that's really where you need help, AI plays a very different role: helping with market analysis, helping with identifying edge cases. Stakeholder engagement can be a use of AI in that kind of phase, and different organizations will quote different numbers for the productivity improvement in that phase or that kind of work. I'm not going to quote numbers. I think AI is still emerging. We're still learning, we're still seeing how it can add value, but it's all about understanding where that need is in your organization, and then how AI can supplement the work that you're trying to do. We do know, in hardcore development, if you will, based on the work that's been done with Copilot, that AI actually does help deliver value in the development phase, whether that's looking at code as it's being written or adding error handling routines that are fairly generic. We have seen those productivity gains happening in industry, so those are fairly easily researched numbers that are out there. And then, as you think about validation, deployment, and maintenance, those are other areas where AI can continue to help, especially in the validation and deployment phases, helping to avoid negative scenarios that you may not have anticipated. If you run your plans past AI, it does a great job of evaluating where you may have gaps. Asking those negative questions of AI can be really valuable. So without quoting specific numbers, anyone can actually go to an AI engine and ask: this is where I am, this is who I am,
I'm thinking about applying AI in this kind of way in my delivery process, what kind of improvements could I expect? And you can even ask: is it different from model to model? Do different models give me different value in different ways? There's really interesting data out there on this, but it's still emerging, I'd say, and there's still a lot to be learned in this space. There's absolutely value to be had, but there's also a downside: if you don't use AI well, it can be a time sink, because you still have to evaluate the responses; you still have to look at what the engines, the models, are telling you. So you have to be careful that you're using AI efficiently, which was the whole point of the prompting section of the webinar on interacting with AI. The better you can get at interacting with AI, setting context, and asking questions, the more efficient the responses you get will be, and the quicker you'll be able to validate those responses. What we have seen, and what we know happens in industry from a quote-unquote failure rate, is that in organizations that aren't taking the time to set AI up for success, the team gets frustrated because the responses aren't great, they have to do more work, and there's a lot of iteration back and forth between the models and the teams, so the teams don't feel like that's actually productive. There is risk here, and using AI effectively is an important aspect of that.

Eric Naiburg  6:02  
So, from your experience as a CTO, and thinking back to the area of measurement: how do we measure and look at success? One of the things I think people have always struggled with is, well, we're doing things differently, but how does that compare to the past? We never really measured the past the way we could have, to figure out if we're doing things differently. So that's great, I'm looking at measurements now of how AI is working, but what is it that I'm comparing it to? Have you had experience where we measured what it was like before, and now we're measuring kind of apples to apples, if you will, with a pear thrown in?

Darrell Fernandes  6:52  
Yeah, I think it's so hard, Eric, because every release is different, right? Every cycle, the kind of work you're doing, whether you're building upon existing code, whether you're building new data structures and new code, new services, leveraging old services. There are so many variables that go into product delivery that it's really hard to find apples to apples, and even if you find apples to apples, it's Fujis to McIntosh, right? It's not one size fits all. That's the challenge when you're sitting at the executive table as a CIO or a CTO, trying to compare yourself with the chief operating officer who has new account setup in financial services. A new account setup is a new account setup is a new account setup, right? NIGO rates are NIGO rates are NIGO rates (NIGO: not in good order, for anyone who's not familiar). Those are standard metrics and standard processes. Software development follows a consistent set of practices, but the variables are so different each time that it's very difficult to produce apples to apples. That's why I always went back to flow and looked at flow velocity. It was the best I could do to look at my overall delivery process and how work flowed through it, from ideation through deployment. That's the best I could come up with as a CTO when I ran that, and all the factors of flow really became material. Things like technical debt really have an impact on your flow: if you have to work through significant technical debt, AI or not, that's going to slow down your flow. If you have too much work in process, that's going to slow down your flow. So it's the same techniques that we've been using for the last decade or so, I think. And then you can use those flow baselines as a metric base to build upon, to see if AI is actually helping or not. There is certainly qualitative feedback from the team.
Are they getting answers more quickly because the product manager is leveraging AI to get use cases and edge cases defined better? Those are things you can certainly look at qualitatively. But quantitatively, the best I can see is flow, and that's my own personal opinion. I don't have specific data that an organization could use.
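As an aside, the flow-velocity baseline Darrell describes can be made concrete: count how many work items the team completes per fixed window (say, a two-week sprint) before and after AI joins, and compare the averages. Here is a minimal sketch in Python; the completion dates are hypothetical data standing in for a real delivery history, not figures from the webinar:

```python
from datetime import date
from collections import Counter

def flow_velocity(completed_dates, period_days=14):
    """Count work items completed per fixed-length period (e.g., a two-week sprint)."""
    if not completed_dates:
        return {}
    start = min(completed_dates)
    buckets = Counter((d - start).days // period_days for d in completed_dates)
    return dict(sorted(buckets.items()))

# Hypothetical completion dates: a pre-AI baseline window vs. a post-AI window.
baseline = [date(2025, 1, d) for d in (3, 7, 10, 14, 20, 24, 28)]
with_ai = [date(2025, 7, d) for d in (1, 2, 5, 8, 9, 12, 15, 18, 22, 25)]

per_sprint_before = flow_velocity(baseline)   # items completed per sprint, before
per_sprint_after = flow_velocity(with_ai)     # items completed per sprint, after

avg_before = sum(per_sprint_before.values()) / len(per_sprint_before)
avg_after = sum(per_sprint_after.values()) / len(per_sprint_after)
print(f"baseline: {avg_before:.1f} items/sprint, with AI: {avg_after:.1f} items/sprint")
```

A comparison like this only establishes a trend, not causation: as Darrell notes, technical debt, work in process, and the mix of work all move these numbers too, so the flow baseline is a starting point for inspection, not a verdict.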

Eric Naiburg  9:12  
Yeah, I think the dream is that we've always measured quality, we've always measured value, we've always measured the ability to deliver and all of those things, and now we can compare past to present and future and so on, but it just doesn't happen. I worked with one large insurance company at one point where the VP was all about data, and he actually was measuring. He wanted to make a change, but before he made the change, he spent six months measuring how we were doing today, so that he could then spend the next six months measuring how we were doing now. And then we actually did a huge presentation of that data to his thousands of direct, or I guess indirect, reports, the people that he managed, right? And it really got folks motivated, because they could really talk about: here's the difference. The problem is that doesn't happen very often, because we don't know when we're going to change, so we're not measuring for change; we don't anticipate it. We're also measuring what we're measuring, and then we make changes, and we continue to measure, but it's a gradual transition, not a clean start-and-stop point. So I think it becomes difficult to do that. But as long as we're measuring along the way, we can kind of see it, and, oh by the way, AI can help us understand what to measure, how we're measuring it, and distill the data down. We have some opportunities to leverage AI there, too.

Darrell Fernandes  10:58  
Well, and even in writing the job description, and I'm going to go back to the White Paper here, the job description should be setting you up for: what do I expect to be different if I add this role? What should get better? And if you can write the job description, you can certainly ask an AI model to tell you how to measure that within your structure, right? And see what it says. Use AI to your advantage, all the way back to how the job description sets you up to be able to measure and use evidence-based metrics to see: am I actually getting the results that I hope to get?

Eric Naiburg  11:32  
So we're talking about measurement. We're talking about the value that's being delivered and so on. In the model, in the paper, you talk a little bit about this: who's really responsible for ensuring that we're getting the value that we think we're going to get out of the AI?

Darrell Fernandes  11:48  
Yeah, I think the hiring manager, to put air quotes around it, right? The hiring manager should be the one accountable to say: I saw a need as the hiring manager to put AI in this role on the team, I worked with the team to add AI, and it's up to me to make sure, through retrospectives, through all the different checkpoints we have, that we're getting the right value out of AI. And if we're not, to evaluate whether we need to adjust how we're using it. Maybe we didn't set the right context. Maybe we don't have all the context we need. Maybe we need to do more with the model, or maybe we have the wrong model and have to look at this problem differently. But it's up to me, as the person who concluded we should add AI in this context, in this capacity, to make sure we're getting value, and I owe that back to the rest of the team, right? I owe it to the team to say: I'm accountable here, and if AI is a net subtraction, I'm accountable to fix that. I can't just tell people to figure out how to make it better; I've got to figure out why it's not working and what I need to do differently to make it successful. And I think that's an interesting place sometimes as hiring managers: when we bring an FTE into an organization, we can ask the FTE to figure out how to work within the construct of the team. Well, AI doesn't know how to figure out how to work within the construct of the team, because it's not floating around Zoom calls with you. It's not interacting offline at the water cooler, etc. We have to facilitate that process with the model so that it can be as effective as possible within the construct of the team.

Eric Naiburg  13:26  
So how do you handle the duality that then starts to come in? Because you've got qualitative data, you've got quantitative data, and you're starting to look at these things together. How do you start to handle that, both as the manager and as you're feeding into the AI, right? You're feeding this information into AI and asking, hey, start thinking about this stuff, and you're giving it maybe even conflicting ideas.

Darrell Fernandes  14:01  
I'm sure you're giving it conflicting ideas. I think it's a learning process for all of us, Eric, and I don't know that there's any clear, specific answer to it. It's some of the new world we've got to figure out: how to balance the team. I think it's really important, in the Scrum framework construct, to balance the effectiveness of the team as you bring AI to the table, to make sure it's additive as you go through the process. Of course there are going to be bumps in the road, and of course there's going to be give and take. We know anytime you add a new team member, there's disruption, right? We know consistency of the team is one of the most effective ways to get better at producing results. So every time we change that dynamic, which adding AI is going to do, we're going to have disruption, and we're going to have to work through that. And as the person taking in that duality of the disruption the team feels, you're going to hear noise, and that's going to be qualitative feedback. You're also going to see improvement in certain areas, because you're going to have more specificity, you're going to have more breadth. You're going to have both, and that's what leadership is, right? Figuring out how to thread that needle in your organization, in your context, is what leadership is. And that's an interesting opportunity that AI presents us: how do we do that? Because we're going to have to get over that hurdle at some point, of how to bring AI into the team, whether it's this month, next month, next quarter, or next year. There's a role to be played here. We can see that there's value, if we can do it effectively.

Eric Naiburg  15:42  
Cool, yeah. And we're always kind of balancing opinion and fact, and sometimes the opinion is fact, and sometimes it's not, especially as we're starting to think about whether we're becoming more productive, right? Some of that is surveys, asking the team, asking people, and it's qualitative all the way down. Maybe, but it's about the feeling. When you think about some of the work on motivation, like Daniel Pink's book Drive, for example, money is only one of many motivators. If I feel like I'm being more productive, if I feel like I'm getting more out of it, then I am, and sometimes that's okay. Right now, when we talk about evidence-based management, there's not a lot of evidence behind that, but that's also okay, and I think we need to be able to balance those things. Now, is it easy to say, oh, we're going to invest in this because people feel like they're being more productive? No, I'd better get some real data behind it, too.

Darrell Fernandes  16:57  
I absolutely think you need some real data behind it. And there's a lot of data from a lot of different organizations. McKinsey has done a ton of work in this space. Microsoft has done a ton of work in this space. Accenture has done a ton of work in this space. Gartner has done it. There are all kinds of organizations that have done a lot of research in this space that you can lean into. Again, I'm not going to quote any of that: (a) because I don't know when we're going to post this, and it'll probably change between the day we're recording and the day we post it, and (b) it's not our place, I don't think, to really push any of those sources. Do the research; it's out there. There are some really interesting studies that have been done, depending on the kind of work that you're looking for AI to complement. You can think about some quantitative things to measure and how you can get better. As I said, it will be disruptive. Team members will be threatened, just like they'll be threatened anytime you add a new team member. That's just natural, and we've got to think that through and work with the team through that process to keep them feeling positive, that this is actually helping me and not replacing me, or not getting in my way, because those are both real threats that people will fear.

Eric Naiburg  18:10  
And like with any team member, there are things I can measure and things that are just gut feel, that's right, and we have to go through that. So I think this is great. It's a great conversation. And measurement is hard; if it was easy, we'd all be doing it all of the time, and figuring out the right measures is probably more complex than anything. And again, let's ask AI, based on our scenario and our situation: what can we measure to get better?

Darrell Fernandes  18:41  
What's the most effective metric to measure the improvement that this role could provide?

Eric Naiburg  18:46  
Exactly, and then keep asking it questions. That's right. Cool. Well, thank you. Thanks, Darrell, and thanks to all of you for listening to this episode. We'll see you again soon.

Transcribed by https://otter.ai
 

