Delivery in the last week of the Sprint
We are a team working in 4-week Sprints.
We have a weekly delivery, meaning that at the end of each week we deliver a new version of our software, which is then tested the week after.
By doing that, User Stories (US) delivered in the last week will always fall into the next Sprint (to be tested) and so cannot be considered "Done".
I was wondering what the best practice is to manage this scenario. Does it mean that in a 4-week Sprint we should only deliver through week 3, so that during week 4 we can test the US and close all open topics?
It seems strange that you're delivering work that is untested. Is there a reason why the team is not able to test the work and then deliver done work?
It sounds as though the team has a compressed waterfall schedule, in which testing for a substantial batch of work is deferred for a week. WIP appears to be high, and the focus on completing individual items may not be what it should be.
The risk to the Sprint Goal must be significant, given that work is regularly left untested and undone at the end of each Sprint timebox. How effectively do the Developers frame and meet Sprint Goals in the situation you are describing?
What does your Definition of Done state about delivering untested work? Remember that the Developers are expected to deliver a usable increment of value at least once during a Sprint. In the situation you describe, I don't see how they can do that on a consistent basis, even if you count the increment from the previous Sprint. Having worked in software development for 37 years as a Developer and Tester, I know of VERY FEW instances where a Tester did not find issues that needed to be fixed before the work could be delivered. So in your case, with 4-week Sprints, the previous 4 weeks of work may not get delivered until up to 4 weeks later, because the Developers are busy working on the current Sprint's work. I am surprised that you ever get something delivered.
There is a lot of testing that can occur during Sprints. In fact, each Product Backlog Item should be testable on its own. You indicate that you are using User Stories, so I'm assuming that acceptance criteria are stated. That suggests that as soon as a Developer says they have finished their code, a Tester should be able to start validating it. Even better would be for the Developer to have implemented automated tests at the unit, integration, and system levels that validate it.
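To make that concrete, here is a minimal sketch of turning an acceptance criterion into an automated test. Everything in it (the story, the `apply_discount` function, and the criteria) is invented for illustration; the point is simply that a criterion written on the card can be checked by a machine the moment the code is declared finished, instead of waiting a week for a handoff:

```python
# Hypothetical User Story: "As a shopper, I get a 10% discount on orders over 100."
# The function and acceptance criteria below are illustrative, not from the thread.

def apply_discount(order_total: float) -> float:
    """Apply a 10% discount to orders over 100 (acceptance criterion AC-1)."""
    if order_total > 100:
        return round(order_total * 0.9, 2)
    return order_total

def test_discount_applied_over_threshold():
    # AC-1: orders over 100 get 10% off
    assert apply_discount(200.0) == 180.0

def test_no_discount_at_or_below_threshold():
    # AC-2: orders of 100 or less are unchanged
    assert apply_discount(100.0) == 100.0

if __name__ == "__main__":
    test_discount_applied_over_threshold()
    test_no_discount_at_or_below_threshold()
    print("all acceptance tests passed")
```

Tests like these can run in a pipeline on every commit, so the "is it Done?" question is answered continuously rather than in a batch at the end of the week.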
You are doing waterfall software development using some terms from various agile practices and frameworks. You are certainly not following the Scrum framework as defined in the Scrum Guide.
Have you approached the Developers (including the Testers in that group, because they are doing work to deliver the increment) and asked them if they have any suggestions on how to improve the cycle? Is there anyone in the organization other than you who sees the current behavior as a problem? Changing this behavior is going to require a lot of people to change their perceptions and actions. You cannot do this by yourself, because you are in no position of power. You can influence the change, but only if others want it to happen.
Hi there! It's great to hear you've implemented weekly deliveries within your 4-week Sprint. This approach is valuable because it helps you identify potential issues earlier and make any necessary changes. Let's get to your question.
It's essential to ensure that the definition of "Done" is clear for everyone involved in the project. In a Scrum framework, an increment is considered "Done" only when it meets the definition of "Done." Each team determines the definition of "Done" that meets their project's requirements and standards.
You mentioned that you deliver a new version of the software every week. This is an excellent practice that enables you to test new features regularly. However, it would be best if you also focused on getting the User Stories that you deliver completed by the end of your Sprint.
To achieve this, you need to ensure that all User Stories developed during a Sprint are considered "Done" by the end of it. Any US that is not "Done" by the end of the Sprint should be reconsidered at the next Sprint Planning. In your case, since you deliver a new version of the software every week, you would have to ensure that all US are "Done" by the end of week 3.
Lastly, it's crucial to ensure that your Sprint Backlog is not overcommitted, so that you can achieve your Sprint Goal. I hope this helped. If you have further questions, please don't hesitate to ask.
I see a couple of comments saying we deliver work that is untested. That's partially true, but with the current structure of the team I don't see how else it could be done. We have a Java developer team who develop US based on acceptance criteria and perform some basic testing of the US before releasing it. Then we have a dedicated QA team who receive the new version of the software and run automated tests and more specific test cases. The US won't be considered Done until all the test cases have passed and a demo has been done with the analyst in charge of the design.
The US needs to be delivered in order to be tested by the QA team. That's why we are always one week behind with US validation. The only solution I see is to stop delivering content at the end of week 3, so that during week 4 the QA team can finish the validation and we can close the Sprint with all US Done.
But I was wondering if there are other solutions. Of course, if the same team could do both testing and development it would solve the issue, but today that's not the case: our testers are not developers, and our developers are not skilled testers.
I see a couple of comments saying we deliver work that is untested. That's partially true
It's 100% true.
You provided too little information about the situation to give you specific advice. So let me just give you some input.
A Scrum Team needs to be cross-functional. A separate QA team is counter-productive. And rather waterfall-like.
Why does the team only deliver once a week, rather than delivering each user story as it's finished? Does it take one week to implement a single story? Can you make them smaller? Why can't the QA team access preliminary versions of the story?
On a side note: if the team delivers at the end of the week, when will the feedback from QA be implemented? Does the team start a new story and then interrupt it to work on the feedback?
I suspect there are both technical and organisational problems here that should be addressed.
Based on how you're describing the team structure, you don't have a cross-functional team. However, one of the requirements of having Scrum is that you have a cross-functional team with "all the skills necessary to create value each Sprint". There are multiple ways to get there, but until you do, I suspect you'll be struggling with other aspects of Scrum as well.
Having a cross-functional team isn't just a necessity for the Scrum framework; it's generally regarded as good practice to minimize cross-team dependencies and optimize for the flow of valuable work. I would strongly recommend against trying to find workarounds, and instead focus on building those cross-functional teams.
It sounds like your organization has already decided that there will be separate teams working on different schedules. Unless, as everyone here has said, you can influence the organization to change its methods so that each Scrum Team has all of the necessary skills, and the individual skilled people can do the work as soon as it is ready and complete it within the same timebox, you are going to be doing waterfall software development. You can still say that you are doing 1-month Sprints, but in reality your Developers are doing 1-month cycles in a waterfall project plan. Whenever the words "handoff", "passed to", or "released to test" are used, I immediately see waterfall.
You are not following the Scrum framework as it is defined in the Scrum Guide, because a usable increment that the stakeholders find valuable is not produced in each Sprint. Unless the QA team is considered a stakeholder. In that case, the Developers could be releasing to their stakeholder for User Acceptance Testing. It is creative, but it could work. The organization just has to understand that none of the work done by the QA team is part of the Sprint. The Developers can be part of a Scrum Team, but the QA team and any work they do are outside of it.
I'm going to go out on a limb and assume that this is a situation where a US-based company has contracted with an international company to do all of its Quality Assurance. There is probably a contract stating that the US-based individuals will write code to meet certain conditions, which establishes a clear break point. At that point, the work is "released" to the international team to do their contracted work against a specific set of criteria, so that they can say they have completed their work. There are multiple "gates" and approvals built into the contracts in order to establish some kind of controls. Unless the organization is willing to change its methods so that all of the work can be accomplished during a single timebox, Scrum is not happening.