
How Problem Clarity Drives AI Accuracy - A Discussion about AI, Problem Definition and the Kendall Framework

May 22, 2025

In this episode of the Scrum.org Community Podcast, host Dave West and Brendan Mcsheffrey, Managing Partner at Kendall AI, explore the Kendall Framework, a method for training AI using principles from lean, agile, and design thinking. They discuss how clear problem definition and context blocks like roles, capabilities, and processes dramatically improve AI accuracy. Learn how Scrum Teams can get better results from AI and make this powerful technology more accessible to everyone.

 

Transcript
Lindsay Velecina  0:00  
Welcome to the Scrum.org Community Podcast, a podcast from the Home of Scrum. In this podcast, we feature Professional Scrum Trainers and other Scrum practitioners sharing their stories and experiences to help you learn from the experience of others. We hope you enjoy this episode.

Dave West  0:19  
Hello and welcome to the Scrum.org Community Podcast. I'm your host, Dave West, CEO here at Scrum.org. In today's podcast, we're going to be discussing AI and its relationship with problem definition. Yes, AI, so it's finally being cool, right? I'm very, very fortunate to have Brendan Mcsheffrey, Managing Partner and founder of the Kendall Project, with me to help me understand the application of AI and how problems are such a critical part of the use of AI. Welcome to the podcast, Brendan.

Brendan Mcsheffrey  0:57  
Happy to be here. It's terrific to be part of this.

Dave West  1:00  
Great. And I've got so many questions. Just to give you a bit of a heads up, listeners: Brendan and I usually spend hours, over alcohol or coffee, talking about this stuff. So if we ramble, I do apologize; obviously, we'll edit out some of those rambles, I hope, so it's to the point. But I'm really thrilled to have you here. Okay, so let's get our listeners up to speed, because they've not been on these multiple coffee and alcohol conversations that we've had. Tell us a little bit about the Kendall framework, or the Kendall Project: what it is, how it came about. What's its genesis?

Brendan Mcsheffrey  1:39  
Well, the Kendall Project came about because of our experience building RAG models and training ChatGPT two years ago, right as 4.0 came out. And what we developed is what we call the Kendall framework, which is the fastest, most effective way to train both your team and your AI to get what you want out of AI. It started out of a method that we had almost fallen upon, using classic tools: lean manufacturing tools, TQM tools, a bit of agile, a bit of Lean Startup, a bit of design thinking, and some of the philosophies around problem curation created by the dmnt company in Palo Alto. So we kind of mashed these things up when we started training AI models two years ago, and frankly, we started having results that were extraordinarily accurate, incredibly quickly. That led us to working with colleagues out of the MIT Media Lab and elsewhere to say, you know, we really ought to teach people how to do this, because we believe that AI is easy and it should be something accessible to everybody, not just computer scientists, and not just CIOs and IT teams, but making AI accessible to everybody, and easy and fast and accurate. That's what the Kendall framework does, and it's built upon the greatest hits of all time in terms of process engineering.

Dave West  3:18  
Okay, let me see if I can summarize that. So ultimately, when you started playing with ChatGPT back two and a half years ago, whatever it was, you were a little bit frustrated by the results. It wasn't necessarily solving the problems that you were asking it to solve, whether that's "how should I do this?" or "what's the solution for that?" And so you took some of the stuff we were actually talking about six years ago around problem definition, problem curation (I can't even say that, but you know what I mean), problem understanding, that's the word, Brendan. Anyway, you took those ideas and added a little bit of other things to increase the specificity of the use of AI. So basically, problems plus context makes for a better solution. Is that right? Does that sort of summarize it?

Brendan Mcsheffrey  4:20  
That's very, very accurate. But we had started using a lot of these methods going back 15 years, to build machine learning models. These are classic methodologies; root cause analysis, for example, is a cornerstone of what we've developed into the Kendall framework to create highly specific, highly accurate context to anchor AI to answer the problems that you have. And there's also an observation that we had very early on, and that is that most people started using AI as if it were a search box, just like Google, and it's an entirely different purpose. The purpose of AI is to solve problems. Even deep research is problem solving. So if you're going to solve problems with AI, then you really need well-defined problems to solve, and you need to understand who you're solving them for. So we started working with manufacturers and bankers, attorneys and companies outside of Cambridge, Massachusetts. Of course, we're located in Kendall Square; that's where the name the Kendall Project came from. And coming from the Kendall Square community, which is one of the leading AI communities in the world, we realized that we need to bridge to the industrial world, the classic industries: finance, biotech, manufacturing, education, the sectors that are really important to job growth across the world. So we started immediately reaching out to sectors, bringing people together, convening, and starting to use these methods to train models. And when models understand who you are, what problems you have, what your company is capable of doing, and what your processes are, you create fundamental underlying context that increases the accuracy. What we're finding, frankly, is that first-pass accuracy of ChatGPT, Anthropic's Claude, Gemini and others is generally about 35%, and first-pass accuracy using the Kendall framework is up over 95%, which reduces token consumption by half and reduces time. Generally, on an important project, people will spend 40 minutes or so doing prompt rework, if you will, to get to an answer; using the Kendall framework, you can get there in half the time or less.

Dave West  7:12  
What I find, you know, I play with Claude at the moment. It's funny, it's a very fashion-driven thing: I was ChatGPT, now I'm Claude, and I'll probably be something else tomorrow, because I'm definitely a dedicated follower of fashion for my AI. And what I tend to do, it's a bit like, I do treat it like a search engine. And I also end up in that sort of spiraling, oh my gosh, has an hour and a half gone by? I do sort of disappear into it, because there are so many interesting things that come out, and there's this and there's that, but ultimately it doesn't necessarily take me on the journey that I need. For instance, I was using it this week to help me build some personas for our online, self-paced learning, because I was trying to build up for the next round of classes that we're doing. So I was exploring personas, and what kind of personas are the most interesting for this type of class, and I ended up in this complete black hole where I didn't get any value. I wasn't using the Kendall framework, I apologize; I was literally just playing. But what I'm really excited about is this idea that you can rapidly hone in on what you're actually focusing on, not get sidetracked, not go in the wrong direction, and increase the quality of the results.

Brendan Mcsheffrey  8:46  
Well, that's a big part of what we do with the Kendall framework in our workshops: we bring people together to prioritize the problems people should solve. Because our point of view on what you should do with AI is, well, you should solve the problems you have. There are going to be so many point solutions in every single enterprise. We're all going to have dozens of agents. We're all going to have lots and lots of different AI solutions. But in order for us to get there, we should be starting with the problems that we already have instead of big ideas. The problem with ideas, especially with AI, is that AI can do everything, so where do you start? Well, the problem with ideas, when you have an idea about what you should do with AI, is that it's got a lot of dopamine. You get really excited about it, and then you do nothing. And you have organizations like the RAND Corporation, BCG and Gartner saying that over 70% of AI initiatives inside of enterprises fail. Well, most of them are failing because they're choosing to take on the wrong problems. So choosing the right problems is a really important philosophy that comes out of problem curation. But then we add on top of this a lot of manufacturing and lean manufacturing thought, around the fact that AI is a complex system, and with any complex system there are only two ways to get high-quality output. One is you can keep reworking your output until you fix all the defects and you get to your solution, or you can build quality in from the beginning. And to build quality in from the beginning, you have to have expert, outstanding context, which is difficult for humans to do, because AI is built on language, and language has a lot of variation. There are lots of words, lots of language types, and when you have a lot of variation in a complex system, your likelihood of having defects come out the other side increases dramatically. So what we did with the Kendall framework is develop a methodology to package context, to package human language, into some standard ways of describing things, so that the AI understands. What we're really aiming for with the Kendall framework is two things: lucidity for human recognition, and machine readability. People need to recognize what people are talking about as much as the machines do. So if teams need to work together, we need to make sure that that context is both human-relatable and machine-readable at the same time.

Dave West  11:38  
And so these contexts, these context blocks, I believe that's what they're called in the framework, right? Give me an example of a context block.

Brendan Mcsheffrey  11:49  
Well, for an enterprise, there are four foundational context blocks that any individual can describe around their job and their roles. It starts with roles. Roles describe who you are or who your user is. It can be the profile of an ideal customer, or it can be your job role. We actually have a series of rules for AI leadership, and I'll come back to those in a little bit, but roles anchor AI. AI needs a protagonist in order to get to high-quality output. So roles anchor AI, and we create context blocks around roles. Then we also help teams create context blocks around capabilities. Capabilities describe your products, your solutions, but also describe what your organization or team is capable of. And what you're capable of is very, very different to a large language model than your products and solutions. So if Perplexity reads your website, all it's reading is what you sell today. It doesn't help the user of AI find out what you're capable of doing tomorrow. Very, very important distinction. So capabilities is a context block. Then problems: what are you working on now, what problems are you solving internally, and what problems do you solve for others? So what are the problems that you have? That is a really clear and very, very important type of context. And then lastly, the last foundational piece of context that is a cornerstone of the Kendall framework is processes, because people, solutions and problems all exist within workflows and processes. Having standardized ways of describing these four fundamental contexts is one of the things that allows us to accelerate AI understanding of your company, your work, what you want to get out of AI, and what problems you want to solve.
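To make that concrete, here is a minimal sketch of how those four foundational blocks might be captured as structured data and assembled into a prompt preamble. The field names and example values are illustrative assumptions, not the Kendall framework's actual templates:

    # Illustrative sketch only: hypothetical field names, not the official Kendall templates.
    context_blocks = {
        "roles": {
            "name": "Assessment support specialist",
            "serves": "People taking certification assessments",
            "motivation": "Resolve candidate issues quickly and fairly",
        },
        "capabilities": {
            "today": ["Online assessments", "Certification management"],
            "possible_tomorrow": ["Self-service retake scheduling"],
        },
        "problems": {
            "statement": "Candidates wait too long for assessment support responses",
            "urgency": "high",
            "value_if_solved": "Faster resolution and happier candidates",
        },
        "processes": [
            "Candidate raises a ticket -> triage -> investigate -> respond",
        ],
    }

    # Assemble the blocks into a single plain-language preamble for a chat model.
    preamble = "\n\n".join(
        f"{name.upper()}:\n{block}" for name, block in context_blocks.items()
    )
    print(preamble)  # paste this ahead of your actual question or prompt

Pasting a preamble like this in front of a question gives the model the roles, capabilities, problems and processes Brendan describes, rather than leaving it to guess them.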

Dave West  14:07  
So I attended the Kendall framework workshop with some of my team, and what was really amazing was somebody who works in my assessments team. She very quickly gravitated to actually building a solution for a real problem that is somewhere on our radar, in a backlog somewhere up in my tech team, I'm sure. Ultimately, she quickly created a solution using this model, because the AI was really good once we provided the problem. She obviously works this process all the time. She understands the customer probably better than I do, because she works every day with people taking our assessments and trying to get certification. And from that she literally built a solution, and it was really, really amazing to see. It really made me realize something. I use AI like everybody else: I throw large bits of text into it and say, help me refine this, make it better, help me answer this question, help me define a key value proposition, that kind of stuff. That's interesting and that's fine, and we'll continue to do that. But you've got this whole group of knowledge workers who, with a little bit of "training" in inverted commas, a little bit of space and a little bit of empowerment (and that's a big question as well, like the Scrum idea of being able to take ownership of your process and the like), could very quickly be equipped with tools to help them do their job better, using AI to do that. And I thought that was pretty profound and really speaks to the future of knowledge work, I think.

Brendan Mcsheffrey  16:06  
Well, I'm glad you brought that up, because one of the philosophies we are approaching AI with is that AI needs to be accessible to everybody, from the person who works on the loading dock to the boardroom, and it needs to be in plain English, or plain language, if you will, that is accessible to everybody. We just recently did some work with a large financial institution that is building industrial context across all of their banking for their bank branches. The president of the bank asked, you know, who should we bring to the first workshop? Should we bring branch managers? And our answer was, you should bring branch tellers. You should bring everybody, because everybody's problem is valid. So whether you're a loading dock or machine shop operator, the Chief Technology Officer or the CEO, your problem is valid. Understanding problems from everybody's perspective is very, very important. One of the rules that we have for AI leadership is that AI is a team sport, and that is really a very important thing. Nobody can see everything about their business and their operations, and humans, at the end of the day, are the best sensors. So if you have somebody from a loading dock observing a problem, and you have somebody on the production line observing part of that problem, and you have the CIO or CEO describing part of that problem, when you bring it all together, you have really super context that helps the AI understand exactly what is going on in your operation.

Dave West  17:47  
Yeah, and I think that was always the dream back before AI: that we equip every knowledge worker with the ability to describe problems and manage a network around those problems, or a constellation, or whatever trendy word we used for it, I can't remember. This idea that everybody has a bit of time to solve the problems that get in the way of their processes, the empowered worker. And now AI takes it up to that next level. Because historically, the idea was, if we gave problem-definition tools to everybody in an organization, a bank or a pharmaceutical company, et cetera, and gave them some time and encouraged their managers to create the space for them to use those tools, there would be a large amount of clearly articulated problems that we could then use to drive projects to improve the organization at the right level, at the right time. With AI, potentially, you can cut out that development bit. You can move that development into the hands of the people that have got the problem. And I think that's really, really interesting, and kind of scary. Some organizations will embrace it wholeheartedly, probably get into a little bit of a mess, then back away completely, and then slowly introduce it, which is the normal adoption model. And some organizations will always avoid it. But I think that this is, as I said earlier, the future of knowledge work.

Brendan Mcsheffrey  19:30  
Yeah, it's rather extraordinary, the time that we're in, to see the change in the capabilities of these systems. You know, part of what makes this so exciting for the Scrum community is that one of the underlying pieces we use to standardize context is actually the user story format that came out of the work at Chrysler back in the 1990s. The user story format is a habit for the agile and Scrum community, whereas it's not a habit for the rest of the world. When you describe things as "as a [role], when I [process], I need [something], so that [outcome]", the "so that" is really powerful with AI too, because it tells the AI why. So one of the exciting things that Scrum has is that there's already this army of people who know how to define problems in a way that machines understand. And if we can get more people to use standardized language with AI, we're going to get better results, because we're taking massive amounts of variation out of this incredibly complex process. Anytime you take variation out, your likelihood of getting high-quality results increases.
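As an illustration, here is a minimal sketch of that user story format used directly as a prompt. The role and details are made up for the example, and this is just one plausible way to apply the format, not an official Kendall or Scrum.org template:

    # Hypothetical example: a user story used as the prompt itself.
    role = "Scrum Master for a payments team"
    process = "prepare the Sprint Retrospective"
    need = "a short list of discussion prompts based on last Sprint's impediments"
    so_that = "the team leaves with one concrete improvement to try next Sprint"

    prompt = f"As a {role}, when I {process}, I need {need} so that {so_that}."
    print(prompt)  # paste into the chat model of your choice

The "so that" clause carries the why Brendan mentions, which is exactly the intent the model otherwise has to guess.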

Dave West  20:58  
Yeah, I think of the importance of structure and structured data. And I can only imagine (and you've started doing this with organizations, I see) that if you can capture this across an organization, where everybody puts in their processes from their perspective, puts in the problems from their perspective, puts in their own definition of their accountability or role from their perspective, puts in the customer, you build that body of consistent data. Then you're running those questions, what should we do, how do we fix this problem, on an increasingly structured and well-defined foundation of information and knowledge, which you can then combine with the body of all human knowledge, which is what the internet is, right? And suddenly that combination, that lens on all that information, potentially creates great solutions.

Brendan Mcsheffrey  22:01  
we we refer to that as as industrial grade context, and we need to think about the context that organizations use as an industrial grade level of context, having context that just just giving a large language model access to all of your document library doesn't give the large language model the context of what's important to you and why you do things. And there's a the the you know, roles, problems and processes describe to the LLM, the particulars of what it works. So when we when we build context blocks, roles always includes a why and what are the motivations in that role and what's important to that role. When we do, when we create a problem, context block, we always analyze how urgent is it to solve this problem, how much value is created if you solve this problem, how much risk is reduced if you solve this problem, and when you analyze those pieces along with the the you know, our context, block structure, has a couple of different parts to it. One is a user story. Three is an open ended how might we type statement, or a statement that allows people to express things in an open ended way, and then we tag things which is very much based on Ishikawa root cause, and where we're really picking up manpower, machines, methods, materials, etc. We put it into plain human, plain language, but it is those pieces of context put together, along with things like priority and urgency and understanding what's important and motivational for roles, when you put all those pieces together, you create industrial grade context, enabling teams to then, you know, enabling teams to get what you want out of the AI system.

Dave West  24:11  
I think, you know, the workshop made me realize, and our conversations too, as a product owner, which is really my vocation. I'm a product person, as you well know, Brendan, and that's kind of what I love to do. It just gives me so much more power, because ambiguity and experimentation and dealing with uncertainty are such a key part of what I do as a product person, right? I'm trying to answer questions about the problem, or the product I'm trying to release, in the most economic way possible. I'm trying to navigate the cone of uncertainty with the least amount of cost. By increasing the amount of context that I provide to my LLM, I can ultimately answer many of those questions without having to do a very expensive build. Or even if I do do a prototype, I can have the tools help me do that, obviously working with the team to ensure the quality of it and the like. But I think that was great and very empowering. Okay, so I've got two last questions. One is sort of a big question that we've talked a little bit about, and I think our listeners are probably thinking about this as well, and I'd love your take on it. So obviously, Scrum is very heavily in the IT software development world, and when we start playing with these engines in this way, providing more context, it makes me feel that the next generation, the sort of fifth or sixth generation of programming, is problem definition and context. Do you agree, Brendan? Is this the way that software development is moving?

Brendan Mcsheffrey  26:17  
I think it's not just moving there; I think it's already there. If you look at the capabilities of Cursor and Replit and Windsurf and these other coding tools, and even Claude 3.7 itself, one thing that we all have to keep in mind is that the quality of AI's coding today is the worst it's ever going to be, and it's only getting better. Today, you can take the Kendall framework context blocks of role and problem, and we also have a software development context block. The context block philosophy is that you just create one single unit of lucid context, so software development is a single unit, and when you combine those units and give them to Replit, it codes. I was just at the MIT Media Lab this morning, at a class I mentor, Ramesh Raskar's AI for Impact class, and one of the Sloan students is running a hackathon tomorrow with 150 Sloan MBA and Harvard MBA students. None of them are technical, and they're all going to be coding tomorrow at Sloan. So when you can take MBA students who have no technical skills and get them coding in under half an hour, that certainly tells me that problem definition is the next generation of programming language. Now, the key challenge to this, where the Scrum community has a huge advantage and a huge opportunity to grow inside their organizations, is the fact that people across organizations are generally terrible at describing their own problems. So just because you can use Replit and you can use Windsurf doesn't mean you can actually get good output. If you don't define your problems and your roles to the AI, the user role, the problem the user has, the process where the problem exists, those sorts of things, you're going to get terrible results out of it. So it still comes down to having the skills to define problems accurately, prioritize them, and understand that context is a series of subassemblies, if you will, which is what our context blocks are. And when you assemble them all together, your likelihood of getting high-quality output out of the sixth-generation programming language, problems, increases dramatically. So we think problem design, problem definition, is a skill of the leaders of tomorrow, and that's what we're teaching today with the Kendall framework.

Dave West  29:06  
That's awesome. All right, last question. We try to keep these short, and you and I could talk for days about the impact on society, where teams fit in, whether we still need Sprints; there are all sorts of things we could talk about, but we're not going to talk about that. We're going to leave with one question, which is, I'd love you to give me your perspective. I'm listening to this podcast. Maybe I'm working on a Scrum team. Maybe I'm a Product Owner, maybe I'm a Scrum Master, maybe I'm a Developer, maybe I'm a consultant helping Scrum teams, or agile teams, do their thing. What would be the one or two takeaways you'd recommend they start thinking about for this future and the use of AI in the world that they live in?

Brendan Mcsheffrey  29:56  
The number one thing is that problems are AI fuel. Context is king; AI needs to understand your context, but problems fuel AI. The purpose of using AI is to solve problems, not search. Deep research is a form of problem solving, and yes, Perplexity is a fantastic tool for search, but it's still problem solving as a way of thinking about it. So one of the takeaways I really encourage all of the Scrum community to take on is: go back to using the user story as your prompt structure. If you do nothing but use a user story as a prompt structure, you're going to improve your outcomes. So instead of spending lots and lots of money on prompt engineering training, or AI training for teams, teach your teams to use a user story and start there, because it helps; you're going to learn what your problems are. And so that's a quick, easy takeaway for Agile and Scrum practitioners.

Dave West  31:16  
Okay, Brendan, where can our listeners get these context blocks today to get going?

Brendan Mcsheffrey  31:23  
Well, today we deliver them in the form of workshops here in the Boston area. However, we're expanding pretty rapidly, and people can find a starter kit at Kendall ai.org and download problem, role and software context blocks that they can use right away. They come in the form of fillable PDFs that, when you fill them in and upload them to the large language model of your choice, act as super prompts. And there are literally tens of thousands of different applications that you can use these for right away. But we highly encourage people to adopt an approach where AI is a team sport: get your team involved, and all of you work using the same structured language to improve your results out of AI. So, Kendall ai.org, and you can get your context block starter kit there.

Dave West  32:17  
Great. Thank you, Brendan, and obviously that link will be in the notes for this podcast. So thanks, Brendan, thanks for spending the time with us today. And thank you, listeners. Today you heard from Brendan Mcsheffrey, Managing Partner and founder of the Kendall Project, talking about the Kendall framework. We talked about how to improve the results from your generic LLMs by adding context blocks in terms of jobs to be done and personas. We heard about software context blocks and really structured problems, and how that can really help fuel your generic LLM. It's really interesting, and I'm really excited to have heard about that from Brendan today. And thank you for listening, everybody, to today's Scrum.org Community Podcast. If you liked what you heard, please subscribe, share with friends, and, of course, come back and listen to some more. I'm lucky enough to have a variety of guests talking about everything in the area of Professional Scrum, product thinking and, of course, agile. Thanks, everybody. Scrum on.

Transcribed by https://otter.ai
 

