
Technical stories in the sprint review

Last post 07:02 pm January 2, 2021 by Daniel Wilhite
10 replies
03:49 pm December 21, 2020

Hi,

I work with one of a group of teams in my organisation that are collaborating to build a new but increasingly complex product, based on a shared product backlog. I find that our sprint reviews give a good account of typical user stories, i.e. adding new functionality for the end user, but I've yet to find the best way of presenting more technically oriented work. Some examples:

1) As a System Administrator, I need to be able to back up and restore the application.

2) As a Customer, I require that the software can be deployed on my organisation's chosen Kubernetes platform.

3) As an IT Security Officer, I require that the application's internal architecture meets <some security standards>.

I'd be interested to learn how others present non-visual user stories in sprint reviews. One answer might be that the 'users' are not real users; they are more like stakeholders, so these are not valid user stories and should not be included in the review. But they certainly have value to our Product Owner and he's happy to prioritise them on the backlog, they need testing and documentation, etc., so I prefer to accept them as stories. If that's the case, how do you demo something that can't be demo'd (at least, not in a reasonable time)? Conventional wisdom is to avoid using slides in sprint reviews, but perhaps this is an exception?

A key point for me is that these architectural changes tend to be among the most challenging tasks, so I would like the developers to get the visibility and credit they deserve for completing such work. Any thoughts welcome.


07:52 pm December 21, 2020

so I would like the developers to get the visibility and credit they deserve for completing such work.

Why? Is the value provided by each product increment unsatisfactory?


08:16 pm December 21, 2020

The first thing that stands out is that the "user stories" you identify aren't discrete pieces of functionality that you implement once and then continually verify or validate until they are removed from scope. They are attributes that your system needs to exhibit over a long period of time, regardless of which user stories are implemented.

Looking at the System Administrator's need, there's a bit to unpack here. Other user stories will probably result in changes to your system's data model, such as adding and removing elements. However, there are also probably requirements about how long backups need to be valid and restorable. This becomes an ongoing effort to make sure that backups of data can be restored into newer versions of the system without error. In other words, it becomes a small sliver of any user story that affects the format of data or what technologies are used to store data. It could even be a larger unit of work to write methods for converting old backups to be restorable on newer versions of the system.
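As a sketch of what such a conversion method might look like, assuming a hypothetical versioned JSON backup format (every name below is invented for illustration):

    import json

    # Hypothetical example: each step upgrades a backup payload by one schema
    # version, so that old backups remain restorable on newer releases.

    def migrate_v1_to_v2(payload: dict) -> dict:
        # Suppose v2 renamed the "user_name" field to "username".
        payload["username"] = payload.pop("user_name")
        return payload

    MIGRATIONS = {1: migrate_v1_to_v2}   # source schema version -> upgrade step
    CURRENT_VERSION = 2

    def load_backup(raw: str) -> dict:
        backup = json.loads(raw)
        version = backup.get("schema_version", 1)
        while version < CURRENT_VERSION:
            backup["payload"] = MIGRATIONS[version](backup["payload"])
            version += 1
        backup["schema_version"] = version
        return backup

    # e.g. load_backup('{"schema_version": 1, "payload": {"user_name": "alice"}}')
    # returns {'schema_version': 2, 'payload': {'username': 'alice'}}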

Looking at the Customer's need, different organizations are probably running different versions of Kubernetes. It's up to the Product Owner to maintain a list of supported environments and dependencies. As new versions of Kubernetes come out, the team is going to have to look at what, if any, changes are needed to support the latest versions. If old versions are no longer supported, then it would be necessary to clean up support for them or the system will incur technical debt that may make it harder to maintain in the future.

Similarly, as time passes, both the architecture of the software system and the security standards will evolve. Just because a particular version of your software system conformed to a particular version of the security standard doesn't mean that the next version of your software system will conform to the next version of the security standard. You need to regularly review the security standards that are applicable to your system and your system against these standards.

The first thing that I'd recommend would be to move these from user stories to the Definition of Done, or perhaps to Acceptance Criteria on other user stories. Something like the backup and restore could be an Acceptance Criterion on stories that affect data storage, such as changing the database technology or changing the format of the data. The Customer's need can be a semi-regular review of your dependencies and supporting infrastructure to determine what it makes sense to dedicate time and energy to supporting, and could be captured as a "technical user story" to review this data. However, I'd probably be more specific about the customer stakeholder category. Finally, I'd recommend making conformance of the architecture to standards part of the Definition of Done, but you may need to decompose the security standard into more discrete requirements that you can confirm as appropriate.

Now that you have the formats of the requirements, you can look at how to demonstrate them at the Sprint Review. It's important to realize that the Sprint Review isn't a demonstration - it's an inspection and adaptation opportunity. What you inspect is up to you. For example, regularly showing a test result could be sufficient. The frequency of the testing could vary, depending on how hard it is to execute and who executes it. Likewise, reports from static and dynamic analysis tools may be useful for measuring conformance to some aspects of security standards. Getting into the details of what the specific requirements are around backups, dependencies, and standards and decomposing them into more granular needs may help.
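To make "showing a test result" concrete: the System Administrator's need could be covered by an automated round-trip check that runs every Sprint, with its latest result shown at the Sprint Review. A minimal sketch, assuming pytest; backup() and restore() here are stand-ins for whatever your product actually exposes, not real APIs:

    import json
    from pathlib import Path

    def backup(records, archive: Path) -> None:
        # Stand-in for the product's real backup entry point (an assumption).
        archive.write_text(json.dumps(records))

    def restore(archive: Path):
        # Stand-in for the product's real restore entry point (an assumption).
        return json.loads(archive.read_text())

    def test_backup_restore_round_trip(tmp_path):
        records = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
        archive = tmp_path / "backup.json"
        backup(records, archive)
        restored = restore(archive)
        assert restored == records   # data integrity survives the round trip

Run under pytest, the pass/fail history of a test like this is something stakeholders can inspect in seconds, which scales far better than a live demonstration.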


03:24 pm December 22, 2020

@Ian, it's true "there is no 'I' in team", so why would I like to see individuals receiving recognition for their contribution to the team meeting its sprint goal? Because they are also human beings. For some of us, recognition and appreciation from our peers is a motivating factor, while for others it may not be. It will depend on the individual. But it is clear to me that people who feel under-appreciated are not going to form a high-performing team.

Secondly, in some corporate environments it is an unfortunate truth that "raising one's profile" has an impact on end-of-year performance assessments. Sprint reviews are widely attended, and with this concern in mind, I would like to give kudos where it's due. I guess none of this is really a Scrum concern - I was just providing background to my question.


03:59 pm December 22, 2020

@Thomas, thank you for these insightful thoughts. Our organisation already has a process for what we call "technical currency" items, namely checking that our software still works with newer versions of external dependencies. Generally we don't create stories for this. We create tasks for testing... sometimes it ends up being "test and bless" (it still works), or it may result in a larger technical task that will reduce sprint capacity for new features in the sprint in which we tackle it.

I would like to draw a distinction between doing something for the first time and maintaining it thereafter. The intent behind the three examples I gave was the former, hence user stories, while you described more of the latter, which we would cover as technical currency and (I agree) should become non-functional requirements in our Definition of Done.

Taking the customer's Kubernetes platform as an example, I had in mind different vendor implementations of Kubernetes rather than newer versions of any given one. So we may have developed for deployment in AWS (EKS) initially, then our PO decides, based on customer requests, that we ought to be able to deploy in Azure clouds (AKS) and he puts this on the backlog. This is something qualitatively new, requiring new install scripts and potentially a different backup strategy, so I see a story there; once achieved, any further changes to maintain compatibility would not require stories.
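To illustrate the kind of qualitatively new work I mean, here's a rough sketch of the platform-specific settings our install scripts would have to start abstracting. The values are illustrative guesses, not authoritative defaults for EKS or AKS:

    # Illustrative only: which settings differ per Kubernetes vendor. The
    # values are examples, not real defaults for any actual cluster.
    PLATFORM_SETTINGS = {
        "eks": {"storage_class": "gp3", "ingress_class": "alb"},
        "aks": {"storage_class": "managed-csi", "ingress_class": "azure-application-gateway"},
    }

    def render_install_values(platform: str) -> dict:
        try:
            settings = PLATFORM_SETTINGS[platform]
        except KeyError:
            raise ValueError(f"unsupported Kubernetes platform: {platform}")
        # Shape of a hypothetical Helm-style values document.
        return {
            "persistence": {"storageClassName": settings["storage_class"]},
            "ingress": {"className": settings["ingress_class"]},
        }

    print(render_install_values("aks"))

Building that abstraction the first time is story-sized work; keeping the table current afterwards is maintenance.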

Another example might be compliance with WCAG accessibility guidelines. The first time we add this to the product there might be stories of the form "As a user I need to be able to perform function X using the keyboard alone", but once the application is generally keyboard navigable this sort of thing just goes into our Definition of Done for all future work.
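As a sketch of how that Definition of Done item might be checked automatically (assuming Playwright for Python; the URL and tab count are placeholders):

    from playwright.sync_api import sync_playwright

    def check_keyboard_navigation(url: str, max_tabs: int = 20) -> None:
        # Crude keyboard-only navigability probe: press Tab repeatedly and
        # confirm that focus actually moves between distinct elements.
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            page.goto(url)
            focused = set()
            for _ in range(max_tabs):
                page.keyboard.press("Tab")
                focused.add(page.evaluate("document.activeElement.outerHTML"))
            browser.close()
        assert len(focused) > 1, "focus never moved; page may not be keyboard navigable"

    check_keyboard_navigation("https://app.example.test/")  # placeholder URL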

You've given a timely reminder of the scope of the Sprint Review, and internally ours are indeed called 'demos'. So I will take that thought and explore how we can inspect platform changes delivered in the sprint without getting hung up on having to 'demo' them.


04:55 pm December 22, 2020

so I would like the developers to get the visibility and credit they deserve for completing such work.

Is the only way to give them credit by showing completion of a user story?  Don't they get credit by delivering product that the stakeholders/users appreciate and use?  If giving credit for completed stories is the only goal, then you may have a bigger problem than you think.

I'm going to somewhat join @Thomas Owens' position. The user stories you listed seem to be intrinsic functionality that should be done. For example, the System Administrator story just seems like common sense. Who would provide an application for users to self-manage if it couldn't be backed up and restored?

In Extreme Programming, where User Stories originated, the practice was to avoid any type of technology in the statements. The story was to provide the basic understanding of a problem to be solved, and the implementation details would be determined as the work began. Your Customer and IT Security Officer stories are both specific to technologies and standards. These types of requirements would be best as acceptance criteria (ref Kubernetes) or Definition of Done (ref Security Standards).

Too often organizations will use User Stories completed or Story Points completed as a measure of success. I can guarantee you that I could work with a team that could complete a large number of stories or points but deliver no value to anyone. I would never do this intentionally, but it is quite possible. Stop promoting the bad practice and start helping your organization see that recognition should be based on the value being delivered in an iterative and consistent way.


04:28 pm December 23, 2020

@Daniel, I appreciate your feedback and I like "The user stories you listed seem to be intrinsic functionality that should be done" in particular. This sounds like a Product Owner's dream :-)

"Who would provide an application to users to self manage if it couldn't be backed up and restored?" It's true that every on-premise piece of business software I've ever worked on has had install, upgrade and backup/restore functionality. You could say the same about other functions though... every such application has an authentication mechanism (login/logout), also some sort of in-application history/audit view of changes. How far would you go in saying all of this is intrinsic?

It also raises the question of who is doing this intrinsic work and whether they are also engaged in feature development. Some organisations may have the luxury of a dedicated 'platform' team for this sort of work, but many smaller orgs need their dev team (Scrum team) to handle all aspects of implementation. Assuming that it is the Scrum team, one advantage of handling the creation of this intrinsic functionality as user stories is that it helps maintain a steady story point (SP) velocity for the team. Otherwise we're faced with wildly varying sprint capacity (in SPs) to allow time for other work, and it becomes extremely hard, sprint by sprint, to use velocity to forecast anything.

"In Extreme Programming where User Stories originated the practice was to avoid any type of technology in the statements. The story was to provide the basic understanding of a problem to be solved and the implementation details would be determined as the work began". Certainly I would 100% agree that user stories should never state *how* a solution should be implemented, but that's not what we're talking about here. I was proposing user stories where the story's goal is platform/technology-specific, i.e. the user has mandated a technology choice. This commonly crops up in requirements for interfaces. How often in work management applications does the customer ask for integration with Jira (or some other tool they already use)? As long as the customer is not dictating which technologies are used in the solution, I'm fine with the story exposing a technology-specific requirement like this, but I'm happy to hear contrary views.

For me, it also goes back to what I wrote in reply to @Thomas Owens: it depends whether we are adding support for X for the first time or are simply conforming to a choice of supported technologies made in the past and already in the product. I like a story for the former but not for the latter.


05:23 pm December 23, 2020

For some of us, recognition and appreciation from our peers is a motivating factor, while for others it may not be. It will depend on the individual. But it is clear to me that people who feel under-appreciated are not going to form a high-performing team.

The things that motivate people within high-performing Scrum Teams include autonomy, mastery of skills, and a sense of purpose. If "appreciation" is being sought, my advice would be to dig into this further, as there may be deeper issues.

Secondly, in some corporate environments it is an unfortunate truth that "raising one's profile" has an impact on end-of-year performance assessments. 

Scrum is very good at highlighting unfortunate truths quickly, including organizational impediments to agile practice. They can then be accepted or dealt with.


06:58 pm December 25, 2020

@Henry Day I would argue that your examples are not User Stories: they lack a "so that" statement that would help us understand and challenge the proposed value that each of these stories is anticipated to bring.

I align here with the others: those examples look more like something that should be part of the DoD or Acceptance Criteria; it is up to you what you do with that point of view. Nevertheless, you are seeking advice on how to present these non-functional requirements at the Sprint Review, so I'll try to focus on that.

Backup and restore of the application:

  • What about showing how to make a backup, breaking the app/build "live", and then presenting how it is restored from that backup? (A rough script sketch follows below.)
    • That way, you can test it out under some kind of pressure and inspect the required steps, their simplicity, the time consumed, data integrity, app monitoring, etc.
    • You may even try to invite one of the stakeholders to perform it, while you observe how they get on.
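A rough sketch of how such a demo could be scripted so that the Sprint Review slot stays predictable; every command below is a placeholder for your product's real tooling, not an actual CLI:

    import subprocess
    import time

    # Placeholder commands: substitute whatever backup/restore tooling your
    # product actually ships. None of these CLI names are real.
    DEMO_STEPS = [
        ("take a fresh backup",      ["./product-cli", "backup", "--out", "demo.bak"]),
        ("break the app on purpose", ["./product-cli", "corrupt-data", "--confirm"]),
        ("restore from the backup",  ["./product-cli", "restore", "--from", "demo.bak"]),
    ]

    for label, command in DEMO_STEPS:
        print(f"==> {label}")
        start = time.monotonic()
        subprocess.run(command, check=True)   # abort the demo if a step fails
        print(f"    done in {time.monotonic() - start:.1f}s")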

Software deployment on chosen Kubernetes platform:

  • Similar to the above: what stops you from preparing examples to show it working?
    • If that is not possible, try simple storytelling, which would be better than nothing.

Application's internal architecture meets <some security standards>:

  • I bet that even here it is possible to prepare examples. You could at least record some scenarios from the development effort showing a vulnerability you had faced, and show that the system is now secured, at least against what was found and known.

What is most important, IMHO, is to ask the team how to deal with this problem of presenting during the Sprint Review. You (the whole Scrum Team) should know your stakeholders, know the goal and value that you strive to deliver, and know the potential benefits of inspecting what is presented with your stakeholders. Highlighting some results of your work over others might be a crucial decision for the team, as it will affect your inspection and adaptation opportunity.

If you are the Scrum Master, you should coach the Developers, and especially the PO, on each other's accountabilities and responsibilities, which, once understood, will help them self-organize. IMHO the PO is the driving force for the Sprint Review, so I would start there.


02:31 pm December 30, 2020

@Piotr, thank you for your comments and especially for taking time out on Christmas Day to share them.

I instinctively agree with your final remark and especially "the PO is the driving force for the Sprint Review, so I would start there". Reading your other suggestions about what to inspect in the review, I've come to realise that our sprint reviews are much shorter than the timebox allows (the Scrum Guide permits up to four hours for a one-month Sprint, i.e. roughly one hour per week of sprint duration), so that could partly explain why I've not considered a live demo of things like backup/restore as practical. They would simply take too long. This is something I can take back and discuss with the team.

But I am also conscious of the number of attendees (stakeholders) and the value of their time. As mentioned in my initial post, we have several scrum teams collaborating on one product with one PO and our sprint review is a shared one. When you combine the team members from these teams plus the PO plus other stakeholders, e.g. the Support org, the number of people involved in a sprint review can be 60-100, so spending half an hour installing the product from scratch is something to think carefully about, though not something to rule out if the team truly sees value in it.

Finally, just picking up on your first remark about the wording of the stories, yes, I omitted the "so that" but only for reasons of space. These stories clearly have one, e.g.

As a System Administrator, I need to be able to back up and restore the [on-premise] application so that I can migrate to a new physical infrastructure when needed.

As a Customer, I require that the software can be deployed on my organisation's chosen Kubernetes platform so that I maximise ROI from my existing application infrastructure and do not have to invest in services from a new Cloud provider.

As an IT Security Officer, I require that the application's internal architecture meets <some security standards> so that I can approve its use within my organisation.

I hope that makes sense.


07:02 pm January 2, 2021

After reading your responses I have a better understanding of your questions. The examples you have are what I usually explain to my teams as Technical User Stories. These are requirements for a product and can be captured in a "story" format as you show. And they can be demonstrated in a variety of ways. My go-to suggestion is to demo them in relation to the Sprint Goal. How does the functionality of 

As a System Administrator, I need to be able to back up and restore the [on-premise] application so that I can migrate to a new physical infrastructure when needed

relate to the Sprint Goal? The Sprint Review is to review the outcome of the Sprint and determine future adaptations. The Sprint Goal is used to focus the team on an outcome and explain the reason a set of stories has been selected. So instead of demonstrating each story, demonstrate the outcome, or how the Sprint Goal has been satisfied. This would also help with the time demands you mention. If every team demonstrated the outcome instead of every story, you might see less time needed for demonstrations.



The tactics for doing the demonstrations can vary. But the team doing the work should be able to come up with something that showcases how the results of their work satisfy the outcome.

