Struggling with Capacity in Sprint Planning
Your Sprint is over. Your increment is “Done”: the code is clean, your unit and integration tests are bright green, and you are proud of your work. The Sprint Review runs smoothly. The Sprint Retrospective lets the team identify one or two areas of improvement without revolutionizing the world.
Well done!
You move on to your Sprint Planning, identifying a motivating Sprint Goal as a target, a dozen Product Backlog items as a trajectory, and a solid Sprint Backlog to get there.
Meanwhile, in the next building, your favorite users (internal employees of the company) run some functional tests, whenever their daily professional obligations leave them the time.
It turns out that these functional tests uncover defects serious enough to deserve being treated "quickly". The Product Owner insists, perhaps a little heavily, that fixes be made within the current Sprint, badly upsetting your Sprint Backlog or even endangering the sacrosanct Sprint Goal.
And unfortunately, this pattern repeats Sprint after Sprint.
You observe that Sprint Planning becomes very delicate: you have great difficulty estimating the actual working capacity of the Development Team, and your Sprint Goals are disrupted far too often for your liking.
The team is in a very uncomfortable position where it cannot control and stabilize its Sprint Backlog.
By applying the "5 Whys", you identify the first cause of your problem: the highly variable, but never zero, number of bugs to be fixed urgently as a result of the functional tests run by your users in the neighboring building.
Why? Because the users have to test after you, since the tests run during the Sprint are incomplete.
Why? Because the increment is not really finished.
Why? Because the functional tests are not executed inside the Sprint.
Why? Because the team does not know which functional tests are relevant to run.
Why? Because the Definition of Done does not include passing the functional tests as acceptance criteria for each item, let alone their automation, and nothing ensures that these tests are carried out by the Development Team during the Sprint.
The Scrum framework helps us become aware of the flaws in the organization, but it leaves us to find the approach that best suits our context to evolve the organization progressively, while having the courage to respect Scrum without changing its rules.
The first step is to enrich the Definition of Done. Little by little (let's be pragmatic), it must reach a level of quality such that the increment is actually deliverable and usable in production by the end user, at the latest by the end of the Sprint.
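To make this concrete, one way to anchor functional tests in the Definition of Done is to automate them so they run with the rest of the test suite during the Sprint. The sketch below is purely illustrative; the business rule, function, and test names are invented for the example, not taken from any real backlog.

```python
# Hypothetical example: an automated functional (acceptance) test that an
# enriched Definition of Done could require to pass before an item is "Done".
# The loyalty-discount rule below is invented for illustration.

def apply_loyalty_discount(order_total: float, years_as_customer: int) -> float:
    """Business rule (as agreed with the business expert):
    customers of 5+ years get 10% off orders strictly above 100."""
    if years_as_customer >= 5 and order_total > 100:
        return round(order_total * 0.90, 2)
    return order_total

def test_long_time_customer_gets_discount():
    # Given a customer of 6 years, when they order 200, then they pay 180
    assert apply_loyalty_discount(200.0, 6) == 180.0

def test_recent_customer_pays_full_price():
    # Given a customer of 1 year, when they order 200, then they pay 200
    assert apply_loyalty_discount(200.0, 1) == 200.0

if __name__ == "__main__":
    test_long_time_customer_gets_discount()
    test_recent_customer_pays_full_price()
    print("functional tests passed")
```

Once such tests exist, "the functional tests pass" becomes a checkable line in the Definition of Done rather than a promise, and the users in the next building no longer discover these defects after the fact.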
Now, how can the Development Team carry out these new tests themselves and reach the holy grail of an increment that is actually "Done"?
As always, several options are possible, depending on your unique context, but in summary the question is how the Development Team can become cross-functional and autonomous in the execution of tests.
A first option is to integrate the business expert himself into the Development Team, since he has a critical contribution to make toward a completed increment. This option is fully aligned with the spirit of the Scrum framework, which exposes the problems caused by working in silos, here between IT and the business. Having "business" expertise inside the team lets it design and execute the relevant tests at just the right moment. The payoff is obvious for the Development Team, which strengthens its skills, provided the new member accepts the rules of the game and integrates well with the team. But the company pays this expert for his ability to handle operational "tasks", not necessarily to create a product, and if his presence in the team is too sporadic, he will become a painful bottleneck.
Another option is to restrict the work of the Development Team during the Sprint to the execution, or even the automation, of the tests, but not to their design. The tests are then designed before the Sprint, during the refinement of the Product Backlog items, in the presence of the business expert and the Scrum Team, or at least good representatives of it. At Sprint Planning, the items are thus "ready" to be developed and tested by the Development Team, which already has the test scenarios (acceptance criteria) created in collaboration with the expert.
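With this option, the refinement session produces scenarios that the team only has to automate during the Sprint. One lightweight way to capture them is as a data table the implementation is then checked against; the shipping-fee rule and all names below are invented for illustration.

```python
# Hypothetical sketch: test scenarios (acceptance criteria) written with the
# business expert during refinement, before the Sprint. The Development Team
# then implements and automates them during the Sprint. The rule is invented.

# Scenarios agreed during refinement: (description, weight_kg, expected_fee)
SHIPPING_SCENARIOS = [
    ("light parcel ships for a flat 5", 0.5, 5.0),
    ("standard parcel ships for a flat 5", 2.0, 5.0),
    ("heavy parcel pays 2 per kg", 10.0, 20.0),
]

def shipping_fee(weight_kg: float) -> float:
    """Implementation written during the Sprint against the scenarios above."""
    return 5.0 if weight_kg <= 2.0 else round(weight_kg * 2.0, 2)

def run_acceptance_tests():
    # Drive every refinement scenario through the implementation.
    for description, weight, expected in SHIPPING_SCENARIOS:
        assert shipping_fee(weight) == expected, description
    print("all acceptance scenarios pass")

if __name__ == "__main__":
    run_acceptance_tests()
```

The point is not the format but the timing: because the scenarios exist before Sprint Planning, an item is only declared "ready" once its expected behavior is written down, and "Done" once the whole table passes.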
There are still other options to explore, such as raising the skills (training, mentoring, ...) of the Product Owner, or of the Development Team if it has the appetite (Business Analyst profiles, for example), in order to gain expertise, and therefore autonomy, without disrupting the Scrum Team by adding new people.
You may be tempted to shorten your Sprints so that fixes arising from these same failed tests can more easily wait for the next Sprint, which now arrives sooner, instead of disrupting the current one. But this remedy treats the symptom, not the root cause: it merely tolerates a poor Scrum implementation, allowing the practice of "testable" but still "undone" increments. You are bound to enter a spiral that shortens your Sprints until you reach the point where the experts work daily with the Development Team anyway.
The choice of the most relevant solution depends on a context that only you control. You alone know what is more or less feasible, and what you have already managed to do. So now, what other options do you envisage?