Sprint Review day

Last post 10:48 pm September 12, 2019 by Ben Brumm
5 replies
11:57 am September 12, 2019

I'm the Scrum Master for a team running 3-week sprints. There's an ongoing discussion in my team about which day we should hold the Sprint Review. In the past we had it on the Thursday of week 3, but it got moved to Wednesday a while back. Most recently we moved it to Tuesday, because the team wanted to try that. Their reasoning was that with the Review any later, they wouldn't have time to fix any critical issues noticed during the review, e.g. by some stakeholder, or by a team member who had been working on another story during the sprint.

However, I'm worried that this is going in the wrong direction, because there will be user stories that aren't ready for review on Tuesday, and those would get released without a review at all. I also feel it might discourage clearly specifying the acceptance criteria for a story, if people subconsciously think "It'll be fine, we'll have plenty of time after the review if something comes up".

Some things that affect what the team is feeling:

- We release a new version of our product into production after every sprint.

- We have external stakeholders attending the review who often have very good suggestions, ideas, or valid criticism.

- The team doesn't want a separate "internal review" meeting held earlier just for the team, because they feel there are already more than enough meetings.

- The external stakeholders aren't able to participate during the sprint beyond attending the review.

- Some of what we build forms part of very long flows, so it can be very difficult for a single developer to understand all the repercussions of a change, even when the user story is well specified (e.g. some value affects something 7 steps later, in a part another team works on). It's not a frequent issue, but it has happened that something like that is only noticed during a review, when the entire team sees the result.

So what I am hearing from the team is about scenarios where a stakeholder says something during a review like "I think you've misunderstood the requirement; what I'm seeing here isn't going to work / isn't what the customer ordered / will cause a serious issue for the end users", and then there won't be any time to fix it if the review is right at the end of the week. This has happened in the past, but I could probably count on one hand the number of times it's happened in the last few years - we're usually pretty good at specifying what's going to be done. Usually sprint reviews just produce suggestions for minor improvements that can easily be put into the backlog for subsequent sprints.

So my question is: has anybody else experienced similar issues? How do you tackle important fixes that need to be done after the review, if not fixing them results in a product that isn't releasable? Any good ideas for how to manage the situation? Mostly I feel that the review should be on the last day of the sprint, and that some other form of internal review is missing, but I'm not sure what the best way to manage that would be.


04:17 pm September 12, 2019

It is a good thing that you are getting such feedback from stakeholders in your Sprint Review; however, if the stakeholders are identifying significant issues or missed requirements, there may be a disconnect between them and your Product Owner.

You should try to keep the Sprint Review as late in the sprint as possible; it isn't a demo-and-patch session. Any presented functionality should have already met your DoD and be production-ready. If there are issues around missed requirements or incorrect acceptance criteria, that should be a discussion in your Retrospective on how to improve those areas.

I would not try to accommodate a "fix" session after the Sprint Review. Allow your Product Owner to make the call on whether functionality should be released or held back, based on stakeholder feedback or the identification of issues like downstream impacts. Use your Retrospective to discuss any unexpected Sprint Review feedback and explore ways to mitigate or eliminate such negative results in the future.


05:02 pm September 12, 2019

How does the team currently establish transparency over:

  • Work that is refined and “ready” for Sprint Planning
  • Work that is Done and not Done

05:30 pm September 12, 2019

I've totally experienced this before, as I'm sure a lot of people have as well.

A few suggestions I have...

  • The sprint end doesn't have to be the release date.
    • The Sprint Review is the day you get feedback from the people using what you just made, and the Retrospective is the time for the team to reflect on what they just did and improve for the next sprint.
    • By decoupling the release date you gain flexibility and reduce crunch, and you gain the ability to deliver exactly what is needed instead of just what happens to be done.
    • You could, for example, set the release date to the first Tuesday of the next Sprint, giving the team the time they need to get it all done and ready, and to support the deployment.

  • You may have to spend time coaching the PO.
    • They need to be the customer/stakeholder proxy, with the ability to approve or reject stories as they are ready to be marked Done.
    • If they understand the customer/stakeholder needs well, you get feedback much earlier in the Sprint and the Review is much more likely to succeed.
    • You could also enable the team to deliver multiple times during the sprint, on a cadence: instead of just the first Tuesday, maybe every Tuesday deploys to a Staging/Demo environment. You could then push to production what's in staging without delaying the release for refactors. How well this works depends on how well the PO understands the customers/stakeholders.

  • If your Sprint Review is focused on the Sprint Goal and objectives, you won't have to go story by story and dive deep into each one; that can be done at the discretion of the customers/stakeholders.

05:51 pm September 12, 2019

We release a new version of our product into production after every sprint.

Why? Shouldn't you release new versions when it is appropriate to do so? If information obtained in the Sprint Review shows the increment is not ready, then don't release. The Scrum Guide's section on the Development Team states:

The Development Team consists of professionals who do the work of delivering a potentially releasable Increment of "Done" product at the end of each Sprint.

Notice the word potentially? 

Have your team read the section of the Scrum Guide where the Review is described (https://scrumguides.org/scrum-guide.html#events-review). A few things I'd point out to them:

A Sprint Review is held at the end of the Sprint to inspect the Increment and adapt the Product Backlog if needed. 

...and the presentation of the Increment is intended to elicit feedback and foster collaboration.

  • The entire group collaborates on what to do next, so that the Sprint Review provides valuable input to subsequent Sprint Planning;
  • Review of how the marketplace or potential use of the product might have changed what is the most valuable thing to do next; and,
  • Review of the timeline, budget, potential capabilities, and marketplace for the next anticipated releases of functionality or capability of the product.

The result of the Sprint Review is a revised Product Backlog that defines the probable Product Backlog items for the next Sprint. 

Anything identified as needed changes should be reflected in the Product Backlog.  

As @John Varela suggests, decouple the releasing of product updates from the sprint boundaries. I've had teams that took feedback from a Sprint Review held on the last day of the Sprint, created Product Backlog items, put those items into the next Sprint, finished them, and pushed a new release of the product on the 3rd day of that sprint. I honestly feel that should address most of their concerns.

As for the "you didn't understand the requirements" issue, point out the empirical evidence. Ask them why they feel the risk is so great that they need to change the way they work, when the empirical data suggests otherwise. When it does happen, you should inspect and adapt immediately, which is one of the biggest benefits of agile and of Scrum's time-boxes. Addressing @Ian Mitchell's questions will help with this situation as well. If the team provides good visibility into their plans, some of the situations that scare the Developers so much can be avoided.


10:48 pm September 12, 2019

As Timothy mentioned, I think a conversation needs to be had with the Product Owner. The Sprint Review is about "here's what we have completed during the sprint", not "please check and approve what we have done".

Is there a way you could get the Product Owner to review what has been done during the sprint, as the team is working on it, or as part of the Definition of Done?

Any feedback that is found during the Sprint Review can be added to the Product Backlog for a future sprint, not the current sprint.
