Is test completion required before Sprint end?
Quite often, developers hand completed stories over to test very late in the sprint, and the testers get all the pressure to finish so that the Sprint Review can happen smoothly. I always wonder: is test completion really necessary before we close out the sprint? How important is it?
I think it should be done, but quite often it isn't possible. I am not talking about all testing, just basic testing of the stories plus regression testing.
I'm new to Scrum, so I could be wrong, but I believe testing should be complete to ensure a "Done" Increment that is potentially releasable.
I would be very surprised if the Increment weren't properly tested by the end of the Sprint. Potentially releasable means that your Increment can be used by the end users. During the review the PO may ask the Development Team for an immediate product release.
> Quite often, developers hand completed stories over to test very
> late in the sprint, and the testers get all the pressure to finish
> so that the Sprint Review can happen smoothly
That would be a traditional use of testing. In other words, testing is done with a view to finding bugs introduced during development so they can be fixed.
However, the rationale for testing is very different in an agile or lean way of working. In agile practice, testing is not done in order to find defects, but in order to prevent them. The idea is to minimize rework and waste. That's why many Scrum implementations use TDD (test-driven development). By definition all work should be "test complete", because an appropriate test harness must be in place before development work even starts.
When this is done you no longer have the problem of trying to accommodate testing at some point after coding but before the Sprint ends. Instead, you replace this very coarse-grained approach with much smaller and tighter cycles of red-green-refactor.
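To make the red-green-refactor idea concrete, here is a minimal sketch of one cycle in Python. The function and test names are invented purely for illustration; the point is only that the test exists before the implementation does:

```python
# Hypothetical red-green-refactor cycle (all names invented for the example):
#   1. Red: write the failing test first -- slugify() does not exist yet.
#   2. Green: write the simplest slugify() that makes the test pass.
#   3. Refactor: clean up the implementation while the test stays green.

def slugify(title):
    """Turn a story title into a URL-friendly slug."""
    # split() with no arguments also collapses repeated whitespace
    return "-".join(title.lower().split())

def test_slugify():
    assert slugify("Sprint Review Notes") == "sprint-review-notes"
    assert slugify("  extra   spaces  ") == "extra-spaces"

test_slugify()  # runs green once the implementation above is in place
```

Because the test is written first, there is never a batch of untested code waiting for a tester at the end of the Sprint; each tiny cycle ends in a tested state.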
Thanks, everyone. This certainly clears up my question.
In my organization (which is new to Scrum), the QA dept has a resource on the team. Invariably, they are stuck testing at the very end of the sprint when the developers are done coding. Even with TDD, testing by someone other than the developer is required.
Code Complete + Unit Test Complete != Tested
It appears you are doing some mini-waterfall.
Your Dev Team (developers and testers together) has to find a way to focus on the Sprint Backlog items one by one (limiting the Work In Progress) instead of building up a stock of untested code.
If the bottleneck is your testing capacity, the developers must help.
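One way to picture the WIP-limiting idea is a board column that simply refuses to accept new work while too many items are still untested. This is only an illustrative sketch; the class, limit, and story names are invented for the example:

```python
# Illustrative sketch only: a board column that enforces a WIP limit, so a
# new item cannot be started until an in-progress one is "Done" (i.e.
# developed AND tested). All names here are invented for the example.

class BoardColumn:
    def __init__(self, wip_limit):
        self.wip_limit = wip_limit
        self.in_progress = []

    def pull(self, item):
        """Start a new item only if we are under the WIP limit."""
        if len(self.in_progress) >= self.wip_limit:
            raise RuntimeError("WIP limit reached: finish (test!) an item first")
        self.in_progress.append(item)

    def finish(self, item):
        """An item leaves the column only when its Definition of Done is met."""
        self.in_progress.remove(item)

col = BoardColumn(wip_limit=2)
col.pull("story A")
col.pull("story B")
# col.pull("story C") would raise here: developers must help test A or B first.
col.finish("story A")
col.pull("story C")  # now allowed
```

The point of the sketch is the constraint, not the code: when the limit bites, the cheapest way to unblock the team is for developers to help with testing.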
We are in a similar boat, so I would like to get other opinions on this. We had one PM build a schedule with overlapping sprints for different roles (all of which together would probably make up one real sprint). I think this was done mainly so upper management could see a typical project plan, but it confused the sprint process considerably.
So the Dev mini-sprint would be Weeks 1-2, and the Test mini-sprint would be Weeks 3-4. For Dev, Week 3 would be spent on any bug fixes and Week 4 on preparing for the next sprint.
So at the end of Week 2 the test team would receive a "build", and by the end of Week 4, ideally, everything would be perfect as the fixes came in.
Obviously this isn't ideal, but I have yet to see (and maybe someone here can assist) a proper sprint project plan (even with only two sprints) that properly accounts for testing. Automated testing aside, there are certain things that have to be tested by a person (integration and functional testing).
Could it be that if you were doing a month-long sprint, the final part of the sprint itself is EVERYONE doing testing and fixing? If so, how does that work for real-world project deployments where the testers need to work in a separate environment from Dev?
Things like a "test mini-sprint", "hardening sprint", or "stabilization sprint" are an anti-pattern, no matter what you call them.
See this thread for a similar discussion: https://www.scrum.org/Forums/aft/1032
> Could it be that if you were doing a month-
> long sprint, the final part of the sprint itself is
> EVERYONE doing testing and fixing?
It could be that, and having a cross-trained and cross-functional team is good agile practice. However an even better implementation would be to de-risk the sprint item by item with the team collaborating on each one, thereby limiting WIP as far as possible. Another item would not be taken from the Sprint Backlog and actioned until those in progress have been tested and completed.
> If so, how does that work for real-world
> project deployments where the testers need
> to work in a separate environment from Dev?
This would be an example of how real-world assumptions are challenged by Scrum. The Development Team is accountable for all infrastructure and tools needed to meet their Definition of Done. There is no separate tester role and hence the supposed need for separate environments would be open to challenge.
> However an even better implementation would be to de-risk the sprint
> item by item with the team collaborating on each one, thereby limiting
> WIP as far as possible. Another item would not be taken from the Sprint
> Backlog and actioned until those in progress have been tested and completed.
Hi Ian, this always sounds awesome. Do you know a way to get a team there? What if they block the idea completely and state that it is not possible to work on an item together?
You have to dig into why they are blocking the idea. Often they have strong reasons (I hesitate to say good ones) such as their role descriptions and job titles in employment contracts.
Changing this requires sponsorship for organizational change at the executive level. Sometimes cross-training can be included in PDPs and rewarded accordingly.
In many organizations, cross-functional training therefore has to be measured in years, and some people may have to retire or move on before it can be truly effected. A Scrum Master has to be prepared to play a long game. It is important to show the bottleneck effect of work silos on burndowns; this increases the visibility of the risk and can help win the executive support needed to push change through.
NB there are of course tactical improvements that can be used to limit WIP, such as pair programming and the rotation of partners.
Ludwig - thanks - I was the one who started that other thread.
Ian, I think you hit the nail on the head for me: the cross-training and cross-functionality of individuals so that knowledge spreads quickly. The lack of it has caused many of the headaches I've seen.
Unfortunately, it's also one of the bigger challenges in government and corporate environments.
We've been having similar issues with our testing recently, ending up with a backlog of testing required at the end of a sprint. After talking to our main tester, he said he can't start writing tests until the development work is complete and signed off by the PO, because only then does he know exactly what needs to be tested (push button A here to get result B there, etc.). He has also stated that the acceptance criteria aren't specific enough to allow tests to be developed earlier in the piece, as the AC don't specify how things are going to work, only what needs to be accomplished. Now, while the AC could potentially be refined further, in my understanding they should never spell out the solution: they are the requirements themselves, and shouldn't constrain the developers in how they address them.
So that's my issue: a tester sitting and waiting until each story is developed before even starting to write tests. From everything I've read here, that certainly isn't the most appropriate way to go about things.
I can sympathise with your tester, Hamish. I work on large, complex safety-related systems where it's virtually impossible to produce something releasable in a single sprint (it can take many), and even then the customers could not take anything other than a complete product, because of regulations and the time and effort of proving safety cases to the authorities. We also have to have full test evidence to support safety cases and conform to regulations, so it is really difficult to fit testing into the classic Scrum framework. I'm sure there are different, possibly less extreme, variants of this, and maybe your tester has similar constraints.
I thought you might appreciate a quick update on our situation, since you're in a similar boat. We've now got our testers working much more closely with the developers, so some tests can actually be created while development work is going on. We've also had our testers look into how the integration tests are done, and that has helped a lot too. Our automation guy has been given more autonomy as well, so he isn't stuck waiting for our lead tester to write the system tests and can move ahead and get things done faster. Both testers are also working together a lot more. I guess the key here has been communication. Because of this, we've been able to get almost all of our testing done within each sprint.
We're still having issues around what needs to be documented, though, and this is a real bone of contention. We're in the process of engaging with the rest of the business to find out what their actual requirements are, so we have a better idea of where we might be able to save some time.