At Amsterdam Airport Schiphol, we're using an Agile approach to realize a large digital program. This program includes 5 value streams with multiple teams. Due to the increasing scale of the program, some challenges arise. For example:
How do you organise a Sprint Review with a growing number of teams and stakeholders while preserving a valuable outcome?
In this blog post, I'll describe the challenges and the experiments we might try to deal with them. Some of these experiments are ideas proposed by Preeti Gholap, who answered a question I posted on LinkedIn some time ago. Thanks, Preeti!
The purpose of the Sprint Review is to gather feedback on the delivered product and evaluate the collaboration. The Sprint Review should be used to demonstrate and inspect the developed increment. It's the best opportunity to adapt the Product Backlog if needed and to release the increment to production if the Product Owner finds it valuable enough. It's the ideal moment for a joint reflection and for deciding how to proceed in order to optimize value.
The Bright Side
Earlier on, I wrote an optimistic article about the Sprint Review at Schiphol. The article "5 Characteristics of a Great Sprint Review" offers five examples of what is going well:
- Stakeholders are difficult to recognize. They are certainly present, but they blend in with the Scrum Teams, making it one big collaborating group of people.
- Every Developer Participates. Yes! Every developer was there, most of them holding sticky notes on which they wrote down the feedback they received on their product.
- Feedback. Feedback. Feedback. In short, the Sprint Review was one big feedback party. Every Scrum Team provided demonstrations on several devices. Everyone could actually use the product and share experiences and lessons learned.
- A Tailor-Made Sprint Review. A great Sprint Review has a tailor-made format. Sometimes using different market stalls for every team is the best format, sometimes a central demonstration & discussion works best. A great Scrum Team continuously searches for the ideal format to gather feedback.
- Beer & Bitterballen. The follow-up of the Sprint Review is, of course, the Retrospective: the ideal opportunity to process the Sprint Review and discuss possible improvements, combined with beer & bitterballen.
The Dark Side
The increasing scale of the program, however, has an impact on the Sprint Review. Due to the growing number of participants, some pitfalls start to appear. For example:
- The Sprint Review becomes a Demo. It's not a feedback party anymore but a demo festival. To be honest, it was already called a "demo", but it had all the characteristics of a Sprint Review: there was a focus on gathering feedback and collaboration. Nowadays, however, the large number of one-way demos is crowding out the opportunity to gather in-depth feedback.
- Stakeholder abundance. Is it possible to have too many stakeholders at the Sprint Review? Ideally, no. But it can become an issue when the number of irregular visitors starts to shape the session. These "one-time" visitors expect an explanation of the project/product as a whole, not necessarily an update on the previous Sprint. This is not only time-consuming, but it also doesn't add any value for the Scrum Team, which wants detailed feedback on its previous Sprint.
- Developers are not participating anymore. As a consequence of this lack of feedback, not every member of the Scrum Team attends the Sprint Review anymore. The Scrum Master and Product Owner start acting as the ambassadors of the Scrum Team, becoming the hub between the stakeholders and the Development Team. The valuable dialogues between the stakeholders and the Development Team are diminishing.
Ideas for Experiments
1. Organize a monthly demo besides the bi-weekly Sprint Review
The demo is certainly valuable, but it has a different goal than the Sprint Review. The primary goal of the Sprint Review is gathering feedback; the goal of the demo is creating alignment between everyone involved or interested. Organizing a monthly demo focused on alignment, alongside a smaller regular bi-weekly Sprint Review (perhaps per value stream) focused on gathering detailed feedback, might be a good solution.
2. Organize "small circle" and "large circle" 2-part Sprint Reviews
A solution suggested by Preeti Gholap
I started this when I was coaching 6 teams that needed to do a joint demo, in a situation when we only had limited face-to-face access to stakeholders (who had to fly in from several countries).
Each team uses the first part of the Sprint Review (say 40 minutes for a 2-week Sprint's worth of stories) to get detailed feedback from their own "small circle" of mandated users/accepters. This is a small group of 4-6 people with whom the team and Product Owner work closely on both Backlog refinement and user-story-level refinement and acceptance (full-cycle engagement, in other words). The feedback we're looking for here is: "Is this usable?", "Does it meet all the acceptance criteria, functional and non-functional, for the users and for operations/run?"
Then the teams proceed to the joint demo, the "large circle", where each team presents a summary of the increment and its value (so not the details per story) to the other teams and to management and wider stakeholders. The feedback asked for is at the level of: Is this valuable and contributing to the whole? Are we going in the right direction? Are there upcoming dependencies? The "small circle" has intense contact with its own Scrum Team throughout the Sprint (via phone, WebEx, Jira, etc.) and also face-to-face at the Sprint Review. The "large circle" has face-to-face contact with all the Scrum Teams in the product program every 6 weeks. This has worked quite well for us.
3. Continuous-flow acceptance decoupled from Sprint Reviews
Another solution suggested by Preeti Gholap
This is a way I learned from a team I currently coach, which works on a complex COTS app and was doing this even before "officially becoming Agile". This team ensures they get detailed feedback and acceptance on a per-story basis "as soon as it's done".
Since entering the Scrum/Agile coaching program at the company we currently work at, they have been learning to make stories smaller, more vertical, etc., and consequently they are getting good at getting valuable stories to "done" every few days. This means they are in the lovely position of getting acceptance on a per-story basis several times during the Sprint, and this even without any kind of automation.
They are able to do this in part because their end-users are close at hand, and their Product Owner trusts that the team and end-users can collaborate nicely without her (the Product Owner) needing to preside over a "demo". (The main reason they can do this is that they have always embraced getting things done and face-to-face collaboration.)
I actually have a hard time convincing this team to do a Sprint Review ("What's the point? The work is accepted already."). But neither they nor the organization is quite ready for a Kanban/continuous-flow way of working. Slowly, they are seeing that a short Sprint Review is still handy for reasons beyond feedback on the Sprint result: for example, engaging with wider stakeholders' concerns, visibility and appreciation for the team members themselves, and the "feed-forward" of reconfirming the roadmap with stakeholders holding diverse agendas. So this technique of decoupling detailed feedback from the Sprint Review could also help your scaled teams.
In this blog post, I've shared the challenges we currently face with the Sprint Review at Amsterdam Airport Schiphol: how do you organize a Sprint Review with a growing number of teams and stakeholders while preserving a valuable outcome? Thanks to the ideas suggested by Preeti Gholap, I've described some experiments we might try to deal with them.
If you've got any other suggestions that might be useful to experiment with: please share them! It's highly appreciated!