AI as Your Teammate: The Four-Step Framework for Product Teams
In this episode, Dave West sits down with Darrell Fernandes, executive advisor at Scrum.org, to explore The AI Teammate Framework: A Four-Step Framework for Product Teams, featured in a new whitepaper. They discuss how to treat AI like a true teammate: onboarding it with context, guiding interactions through user stories, and establishing governance to manage performance.
Darrell emphasizes the importance of structured AI adoption, comparing it to onboarding human team members, and highlights how a disciplined approach can improve efficiency, reduce costs, and even protect jobs. From writing AI job descriptions to building prompt libraries and governance strategies, this episode offers actionable insights for teams navigating the evolving AI landscape.
Listen now to learn how to bring AI onboard as a true teammate.
For more, there is a live webcast coming up next week that will also be available as a recording. Learn more.
Topics covered:
Introduction to the AI Teammate Framework
- Why a framework?
- The need for a structured, holistic approach to AI in teams
AI as a Team Member
- Treating AI like a teammate rather than a tool
- The importance of onboarding and providing context
- Comparing AI onboarding to human onboarding
The Four Steps of the Framework
- Identify AI’s Role – defining the problem and writing an AI “job description”
- Onboard with Context Management – giving AI access to product, customer, and process context
- Interact Using User Stories – structuring collaboration through clear, outcome-based interactions
- Governance and Performance Management – ensuring accountability, compliance, and efficiency
Challenges of Working with AI
- Context management and maintaining prompt libraries
- Balancing AI experimentation with structure
- Cost, scalability, and efficiency concerns
Lessons from the Early Days of Cloud Computing
- Parallels between the AI adoption curve and cloud evolution
- The shift from unregulated enthusiasm to disciplined governance
Future of AI in Product Teams
- The importance of a disciplined, thoughtful approach
- How structured AI collaboration can enhance — not replace — human work
Actionable Next Steps for Teams
- Read the white paper
- Assess current onboarding and management practices
- Apply the four-step framework to integrate AI effectively
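The "Onboard with Context Management" and "Interact Using User Stories" steps above can be pictured as assembling codified product context together with an "As a..., I want..., so that..." request before handing it to an AI. The sketch below is illustrative only, not from the whitepaper; the function name and context fields are assumptions.

```python
# Illustrative sketch (not from the whitepaper): combine onboarding context
# with a user-story-shaped request to form a single, well-grounded prompt.
# All names and context fields here are hypothetical examples.

def build_teammate_prompt(context: dict, role: str, goal: str, outcome: str) -> str:
    """Join codified product context with an 'As a... I want... so that...' story."""
    # The onboarding step: product, audience, glossary, etc., kept as reusable context.
    context_block = "\n".join(f"- {key}: {value}" for key, value in context.items())
    # The interaction step: a user story states the who, the what, and the why.
    story = f"As a {role}, I want {goal}, so that {outcome}."
    return (
        "Context for this product team:\n"
        f"{context_block}\n\n"
        f"Request:\n{story}"
    )

prompt = build_teammate_prompt(
    context={
        "product": "Team planning tool",
        "audience": "Scrum teams at mid-size enterprises",
        "glossary": "'Increment' means a usable, released slice of product",
    },
    role="Product Owner",
    goal="a summary of last Sprint's stakeholder feedback",
    outcome="I can reorder the Product Backlog before Sprint Planning",
)
print(prompt)
```

Keeping the context block in a shared prompt library, as discussed in the episode, means every interaction starts from the same onboarding material instead of re-explaining the product each time.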
Transcript
Announcer 0:00
Welcome to the Scrum.org Community Podcast, the podcast from the home of Scrum. In this podcast, agile experts, including Professional Scrum Trainers and other industry thought leaders, share their stories and experiences. We also explore hot topics in our space with thought-provoking, challenging, energetic discussions. We hope you enjoy this episode.
Dave West 0:25
Hello and welcome to the scrum.org community podcast. I'm your host, Dave West, CEO here at scrum.org. In today's podcast, we're talking about AI. Yes, artificial intelligence and all that stuff. And we're actually going to be focused on a white paper that was recently published by scrum.org titled The AI Teammate Framework: A Four-Step Framework for Product Teams. But I'm lucky, I've got the person that wrote the paper on the podcast with me. Welcome to the podcast, Darrell Fernandes.
Darrell Fernandes 0:58
Always a pleasure, Dave. I really enjoy these conversations. Looking forward to this topic today.
Dave West 1:04
It's an interesting topic, and I guess I'm going to have to start where I'm sure all our listeners are thinking this. Why do we need another thing on AI? I mean, my inbox is just continuously full of things about AI. I go to a dinner party and I get caught in the corner by somebody asking me questions about AI. I mean, why do we need something? What was your motivation for authoring this white paper?
Darrell Fernandes 1:32
So I've seen a lot as I've researched AI with scrum.org and in other organizations. I've seen AI come at us through a different lens recently. There's a lot of consumption out there, and there are a lot of people trying to figure out how they help teams and individuals become more effective with AI, but there's no structure around that, and they're always piecemeal solutions. What I tried to do with the white paper was take a step back and say, if you're going to leverage AI, how do you think about that a little more holistically? And as I did that, it occurred to me that it's actually not that different from looking at how you bring a new team member into your team: the steps you need to take, the things you need to think about, the approaches. All of that kind of came together and evolved into this place where we need to think about, or we can think about, AI as an extension of the team, as we would any other individual that we're asking to contribute to the team.
Dave West 2:38
So I love that metaphor, by the way. Sorry, I really love that metaphor, or analogy, I never know which one's right. But this idea that you're onboarding the AI to your team, I think that's just such a good one; it has so many legs. Because, you know, just like any teammate, the less you tell them, the less valuable they are anyway.
Darrell Fernandes 3:01
Well, and there's a piece of this, and it's so interesting. As I went down this path and did research and talked to different teams and organizations, there's a piece of this that's just the fundamentals of what we should be doing. To your point, Dave, every time we bring a new associate into a team, we certainly have a job description. We certainly interview them. But do we really onboard people, or do we just ask people to join the team and then give them some time to immerse themselves in all that is our organization, our situation, our product, our customers, to figure out how to effectively do their role? Too often, I think, we don't do a really good job with the people on our teams in those regards. And as we think about AI, we don't have the luxury of AI going out there and figuring things out without prompting, so we have to take a different, more disciplined approach to how we bring AI to the table. And you hit on the onboarding step, which was clearly one step here, and we'll rewind in a second and talk about all the steps, but onboarding is really the equivalent of context management for AI. It's probably the weakest spot of how we do associate onboarding, and the most critical for AI. A lot of teams are focused there, but that's just one piece of the puzzle.
Dave West 4:21
It's interesting talking about context. And I do want to get the bigger picture, I want to know all those steps, all four of them, in a minute, and I'm sure our listeners do too, but just to talk about context for the moment: I know there are so many times that I've onboarded people that have worked for me, and I say, hey, just go to Google and look at all those documents and spend the afternoon immersing yourself in that. And that's exactly what we do with AI. Often we say, oh well, this instance is running on our Google Drive, therefore it has all that context. But actually, more isn't always great for AI, in the same way that it's not great for human beings. So I love, as I said, the metaphor of onboarding, the metaphor of treating AI like a teammate. It's so rewarding in this context, because I think it sets a lot of guardrails. You know, when you get a bad answer from AI, it's not AI's fault; you've not given it enough context, or you've asked it the wrong question, or maybe it isn't the right worker, maybe there's another worker that's better at it. I just love that metaphor. I think it's got legs. I think you should write a book on it. No, not today, though, Darrell. All right, so let's baseline our listeners. Talk me through the four steps, the highlights of the white paper.
Darrell Fernandes 5:49
Right, so let's take a step back here. As we think about our teammate here with AI, the first thing you have to do is identify what we're trying to solve with AI. What partnership are we trying to create? What skills are we trying to fill on the product team? That's a really important step. The way we would do that in regular hiring processes would be to write a job description, and candidly, there's nothing wrong with writing a job description for AI. There are so many enterprise-grade platforms that go wide, whether that's Anthropic's Claude, whether that's Gemini, whether that's ChatGPT; there are so many broad solutions that you can choose from, and they each have nuances to what they do better than others, and you can get value in different ways. But then there are so many emerging specialized engines. When you do some research, finance and accounting has a unique set of specialized engines; operational customer service excellence has a unique set of specialized engines. So depending on what you're trying to do and what void you're trying to fill with the solution, it's really important to be specific around that so you can then go interview. Because you write a job description, you interview, you hire, if you will, your first teammate here by finding the right AI to fit the need that you have at the moment. And as you go through that process of writing that job description and hiring, you're also setting the baseline for what success looks like. What are you actually trying to solve for and improve within your product team by bringing an AI capability to the table with the rest of your team? That's really important as we come full circle towards the end of the four steps; I'll talk more about that later. But that's really step one: job description, setting success criteria, and, quote unquote, interviewing the different capabilities to see which ones are going to fit your scenario the best.

Step two, and we hit on this one, is onboarding, and it's really about context management: onboarding AI by giving it the context of your product, the glossary of terms that you may use within the development of your product, what your customer demographics are, what your target audience is. These are really important things, because as you interact with an AI solution over and over again, having those things codified in a way that you can continually reference them drives your efficacy significantly, getting a first-time or second-time correct response, or productive response, if you will, towards a much better outcome. And that has impacts both on the cost but also on the time spent interacting with AI. We're starting to see many studies today around how, in certain cases, AI is actually slowing down the delivery process, because that interaction back and forth is starting to take more and more time. To your point, that may or may not be because AI is or isn't providing the right answers; it's likely because we're not giving the right context when we're asking the questions. So that onboarding, and setting up those context documents around the who, the what, the where, the why, the when, and the how of your product, is really, really important, all the way to what your stakeholders are looking for. Because as AI is providing feedback, understanding what your overall success criteria for the product are is a significant input to the output that you're going to receive. So step two is setting the context, i.e., onboarding your AI capability.

Step three is interacting with AI. We believe, as you look at the underpinnings of application development, the underpinnings of Scrum, that a great way to do that is the construct of a user story. We all have knowledge of that, we all have context for what a user story is, but that discipline of interacting really helps the AI capability parse what you're looking for, why you're looking for it, and what outcome you're trying to drive with it. "As a..., I want..., so that..." becomes such a powerful structure for AI. And if you then have context documents alongside that, you can see how it can produce much better, much more impactful responses right out of the gate. So step three is a little bit of prompt engineering. But prompt engineering is such a loaded term these days; it's really about just interacting constructively and effectively with the AI solution so that you can get to the best response possible with the context you're able to provide. It doesn't mean you're going to get first-time resolution every time, but it will significantly reduce the number of iterations you need with AI to get the response that helps you move the product forward.

And then the last piece, and probably the most important in every scenario, whether it's an associate on your team or whether it's AI on your team, is the feedback loop: the managing of AI, the governance, if you will, as it relates to AI. Are you in a position to understand whether AI is helping you or not? Do you have the data to tell you that this is actually moving you forward better or faster? Do you understand the biases that you should be looking for in the responses, so that you can test to make sure that the AI that you've chosen actually fits within the context of the outcomes that you're looking for? Do you have the regulatory or compliance checks that you need, depending on the industry that you're in, to ensure that you're not running afoul of the constraints that you should be staying within?
So those are really important pieces of that fourth step, which is the management of AI. And if need be, if you decide that you don't have the right AI, what does it look like to actually exit one AI and potentially go find another, bringing you back to step one of the framework, because it wasn't the right fit? Much like what happens with associates, sometimes it's not the right fit, and you move on, and you have to go find a better solution for the needs that you have. Maybe your needs have evolved, maybe the solution has evolved, and it's just not the right fit anymore, and you need to go in a new direction. How do you do that, and when do you do that, so that you don't waste resources? Those resources could be time; those resources could be money.
Dave West 12:45
Okay, so we've got the four steps. You start with a job description and interviewing. Second step, onboarding. The third step is interacting, sort of like prompt management or prompt engineering. Fourth step, governance. That sounds like a very reasonable approach. What's interesting, though, what sort of highlights it, is that unlike working with an associate or an employee or somebody in your team, AI is very honest in many ways. Like, for instance, the job description stuff: I wanted to create a video, right? So I asked Gemini. I wrote out the things I wanted: I want it to be animated, I want it to be this, I want it to be that, I want it to be free, because I'm cheap, etc., etc. And I asked Gemini, and it gave me the models and the tools that would do it. It didn't say, well, actually, you should use me. In fact, I can imagine there's a Google engineer out there going, damn it, we need to fix that. Whereas an associate, if you asked, where do you see yourself in five years? I did not ask the AI that, by the way, though I worry that the answer might be a little scary. So then that's also true of governance as well, putting in place, basically building up, the criteria that you need around privacy, around where the data is coming from, around all this stuff. And then traceability is a huge one, asking the AI about that and putting some things in place. On the interaction stuff, prompt libraries obviously jump out there, right? You would want to have those very clear prompt libraries and manage prompts. I love the user story model. It took me a long time to break the search habit when I was using AI. Now I've broken it to such an extent that I use it when I'm searching, and I'm like:
Why is it not contextualizing for my needs? Oh, yeah, because I'm just using a standard search engine.
Darrell Fernandes 15:09
So actually, can I jump on that for one second, Dave? I think you hit on something there. You know, we joked earlier on about the relative vacuum of content around AI these days and how much we need more content around AI, but I do believe that breaking the search habits that we all have as we move to AI is going to be a really interesting evolution. Search has habitualized us to throw very simple queries at it, get a link-farm response, and then take time to filter through that. AI is more expensive to run on the back end than search, and the iterations take more time with AI, because the responses are so much longer. So I think the need is there for us to get better at our interaction model with AI in general than we've been with search over history.
Dave West 16:06
Ignoring the environmental impacts of AI for a moment, what I've noticed with AI tools at the moment, and it's actually started to frustrate me a little bit, is that context is carried over to the next interactions. So the problem I have, in terms of the discipline, is sometimes if I do a very badly structured prompt and I get garbage, that's fine. Oh geez, that's not what I wanted. I then do the next one. The problem is the tool is using that previous garbage as context for the next one. So now I'm in this situation, and that's fine if you've just done two, because you then kill it and start from the beginning, and you tell it, which you can Google, by the way, or you ask it, how to drop all that context. But the problem is when it's like six or seven in, and most of that context is useful. Pulling out the bad stuff is actually really, really hard. So I was trying to build a legal document. Don't tell the lawyer, okay? So if you're listening, general counsel for scrum.org, I'm just making this up. But anyway, I was trying to create a legal document, and I went through the process, and I used question-and-answer, which I love, by the way, and it was all going there, and then I made a mistake, so it ended up being a mess, and I had to go back right to the beginning again, which was a disaster. I think the point that you're raising is an important one: the success, the precision, the quality of this new AI helper is going to be significantly undermined unless you're quite systematic about the application, the use of it.
Darrell Fernandes 18:02
I'll just draw another analogy. So we talked about search; the other interesting analogy here is cloud, right? When cloud first emerged, a lot of organizations pushed a ton of workload to the cloud, somewhat blindly, and all of a sudden realized the cost of that workload, because it wasn't optimized for the pricing models of cloud. I think we learned a lot through that process: if you're going to go to a consumption-based model and a subscription-based model, you have to be thoughtful. As an ex-technology leader who had to pay those bills, you have to be thoughtful about how you do that and how you think about that, and be ready for that consumption-based model. I think the current pricing model for AI isn't too punitive today, but it's going to get there. The energy costs alone that are being driven by the AI models are going to have to be recuperated somehow, or recovered somehow, by these organizations, and they're going to be looking for the financials to do that. So the efficacy of your interaction is going to be critical over time. It may not feel that way today, in the everything's-free model, but I think over time we're going to see that that's going to have an impact.
Dave West 19:16
Well, the cloud is just somebody else's computer, right? Somebody has to pay for it. Facebook, over the next three years until 2028, is investing over $690 billion on data center creation. Somebody is going to pay for that. That's right. And the reality is that how we use models today is great for experimentation, but at some point there is going to be a cost, and somebody is going to knock on your team's door and say, did you see how much you were spending last month? And you're like, oh my gosh, I was just trying to work out what the best tacos in Austin were. And by the way, there are no bad tacos in Austin. So I guess we're coming up to the end of today's podcast. We'll have to try to keep these short, and this is a huge, huge topic. If you were going to leave any words of wisdom to our listeners, obviously, read the white paper, the white paper that's titled The AI Teammate Framework, which is tricky to say, A Four-Step Framework for Product Teams. So read the paper, we get that. Or actually get your AI to read it and give you a summary, even better. But of course, remember to tell it who you are, because all that context is important. Anyway, you'll find that out in the paper. So other than reading the paper, what should be our listeners' biggest takeaway?
Darrell Fernandes 20:55
Yeah, so there are a couple of things. One is, as you read the paper, understand that it's a framework; it's not a prescription. It's there to help inform and educate. We got some great feedback when we published the paper along the lines of, is it too much overhead? Is it too much bureaucracy? The intent was for it to be a framework, for it to be a guide to think about how you do these things, not to prescribe that you must do all four of these things this way. Think through how you're going to use AI, what your needs are, how you're going to onboard and build context, and how you're going to interact and measure for success. Those are important things regardless, so remember that as you go through the process. Number two, we got one other piece of feedback from an HR professional. When they first saw the paper hit, they expected it to be another one of those interesting, AI-is-going-to-solve-all-the-world's-problems pieces. The HR professional actually read it and said, you know, there are some interesting things here; the reality is that these are things we should be doing with our teams regardless. So I'd say, elevate your game. Read the paper, think about it in context, not just of AI, but ask yourself the hard question about how good you are at onboarding an FTE, a full-time associate, to your team, and whether you need to be better. Because if you're better there, you'll be better with AI as well. So those are the couple of things. The future is going to be really interesting in this space. The efficiency and the effectiveness with which we use AI, I think, is going to be a bigger and bigger hill to climb for organizations, especially large enterprises, and it'll be interesting to see what techniques emerge to do that the best.
Dave West 22:38
Completely. Great words there, Darrell. I think all of this is in the context of: AI is not going to take your jobs, but people that use AI are. So the bottom line, I think, for our listeners, is that a disciplined approach to how you use AI provides you with enough structure so you can inspect and adapt through transparency, which is obviously a key tenet of Scrum. And that discipline, in the example structure that Darrell talked about, the four-step structure of job description, onboarding, interaction or prompting, and then governance, can provide you with the right questions to think about when building this discipline into your use of AI, to augment your team and to secure your job for the future. So I think it's a really, really good message. Thanks for taking the time to be with us today, Darrell.
Darrell Fernandes 23:35
Thank you, Dave. Always a pleasure.
Dave West 23:40
And thank you, listeners, for listening to today's scrum.org community podcast. If you liked what you heard, please subscribe, share with friends, and of course, come back and listen some more. I'm lucky enough to have a variety of guests talking about everything in the area: professional Scrum, product thinking, agile, and of course AI, wouldn't want to miss that. Thank you, everybody, and Scrum on.
Transcribed by https://otter.ai