We have a very common issue: stories under development end up hitting QA at the end of the sprint, which leaves them incomplete because there wasn't enough time to finish QA and fix any issues found. First, I would love to hear some input on how we can improve this.
Second (and more concerning), the CTO and the head of product are proposing that we defer feature testing (QA) until all features marked for a release have been completed. QA's role during that time would be to write test cases in preparation for testing after however many sprints it takes to develop the features. I have serious concerns about this, but I will refrain from listing them. I would like your thoughts on this suggestion, and please be candid. Cheers!
What are the team’s current work-in-progress limits? How might they be reduced further, so the team can apply better focus on each item and ensure QA is completed?
Why do the CTO and head of product object to limiting WIP, and why do they prefer batching work up instead, compromising the incremental delivery of value?
Is the team working on stories collaboratively? Or is every dev working on "their" story by themselves? This feeds into Ian's question: How many stories are in progress (i.e. work has started and the story isn't done) at a time?
As for the second question, I would be interested in hearing the reasoning behind that proposal. Are they aware of the risks?
A little different tack than @Ian and @Julian took on the first question. Why do you have people that do nothing but test on a team? Why not make testing the responsibility of everyone? I come from the old days of software development, when there was no such thing as a QA Engineer; I was accountable for making sure the code I wrote worked correctly within the entirety of the application.

On all of my current and most of my recent teams, there is no one dedicated to testing. It is done as part of developing the product. We don't even put a "test" task into the mix; it is assumed that things will be tested, and as much of the testing as possible is automated. Just as the developers review each other's code, they also test each other's code, and part of code review is ensuring adequate automated test coverage. How can you deliver a potentially releasable increment at the end of a sprint if you don't know it works?

In the interest of full disclosure, I have been a QA Engineer for at least 15 years because I enjoy that kind of work. But even during that time, I advocated that testing needed to occur closer to the coding activities, because it made fixing problems faster and easier (less context switching for developers who have moved on to their next "assignment"). I spent most of my "testing" time working directly with the developers while they wrote the code, providing them test scenarios as they went: a form of test-driven development. Your organization might benefit from that approach instead of waiting until much later to test things.
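To make "providing test scenarios while they write the code" concrete, the scenario can be handed over as an executable check before the implementation exists. This is just an illustrative sketch; `apply_discount` is a made-up example function, not anything from this thread:

```python
# The function under development. In practice the QA person would hand over
# test_apply_discount() first, as an executable specification, and the
# developer writes this to make it pass.
def apply_discount(price, percent):
    """Return price reduced by percent, rejecting invalid rates."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

# The "test scenario" supplied alongside the coding work:
def test_apply_discount():
    assert apply_discount(200.0, 25) == 150.0   # plain case
    assert apply_discount(100.0, 0) == 100.0    # no discount
    try:
        apply_discount(100.0, 150)              # invalid rate must fail
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")

test_apply_discount()
```

The point is not the framework: a plain assert run in CI on every commit already catches the bug minutes after it is written, instead of weeks later.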
On your second question: see my response above. It has some talking points your CTO and Head of Product should consider before making the move described, which reeks of waterfall. Testing while the code is being written produces better and faster results. If you wait weeks to test things, the developers have moved on to other work; they have to stop that, context switch back to previously written code, and then context switch back to their current "assignment". This not only makes fixing the bugs take longer, it also makes the new work take longer. There is a lot of inefficiency and waste in that model.
My product has monthly releases. We have a common Product Increment meeting in which all teams are available.
Our teams are based on modules (each team includes associates who specialize in QA as well).
Note: The PO, along with the BAs, sufficiently describes each feature and marks it Ready.
The Product Owner picks the Ready Features that need to be developed as part of the release.
The teams pick up the features that are part of their modules. During the meeting the features are broken down into stories, and each associate picks up a story to develop. The QA associate writes the tests and does the testing as and when development is complete.
The QA does not wait for the end of sprint.
Hope this gives you some ideas!
I'm in a similar position. We work on products which require testing as a whole (and we are working with an external client). So I'm looking to remove the client testing/approval from the DoD because the client needs to test the product with all the component parts that are delivered across sprints.
But this is where the issue starts. As we get through the tickets (across several sprints) and eventually reach the point of the product being ready as a whole, what is the best way to introduce client testing? Should these be separate tickets brought into a later sprint? And what is the best way to deal with the bugs raised: are bug tickets raised against the testing story ticket, with that ticket done once those bugs are fixed?
Any ideas welcome!
So I'm looking to remove the client testing/approval from the DoD because the client needs to test the product with all the component parts that are delivered across sprints.
How would this help the team to deliver work that is genuinely “Done” and of immediate release quality each and every Sprint?
Each Sprint's increment should be potentially releasable. If no testing occurs, how do you determine that it is potentially releasable? In my opinion, if you have testing in your DoD, it should stay there. The Development Team should do adequate testing to ensure the increment is potentially releasable. At some point the Product Owner will determine that a combination of increments becomes actually releasable. Nowhere does this say it has to be released into an environment where it is immediately used by the stakeholders in their daily business. Why can't that release go to a staging environment where your external stakeholders can do their "user acceptance testing" and provide feedback to the Product Owner, which can then be captured in the Product Backlog for future work? The more frequently this occurs, the better.
Another approach is to actually involve the external customers in the Sprint activities as a Development Team member. This will require an investment on the part of the customer but it would give them the ability to provide immediate feedback that can result in faster adjustments and delivery.
One more approach is to have the external customers provide you a defined set of user acceptance tests that you can execute as the relevant work is done. There are techniques for providing proof that the tests pass, which actually helps the customers by not requiring them to spend their valuable time testing the product.
I would treat removing customer testing completely as an absolute last resort. I'd do everything I could to bring the customer testing into the process, in order to benefit from more frequent course correction. Again, in my opinion your option is just falling back on old-style waterfall project management practices.
Thanks for your responses!
We include automated tests passing as part of our DoD, which is done at the feature level, and the dev team test their work too (which is released into a staging environment during each sprint).
What I am saying is that I want that to be the DoD rather than the client testing, which will be more focused on testing the product as a whole when it's ready (hence my idea of having test-specific stories in a later sprint for the client).
At a feature level the client can approve each story, but the products touch several apps, which when tested in their entirety may throw up bugs you won't see just by looking at the feature itself.
The other option I thought of is to extend the sprint length to allow a build-up of tickets so the client can test the end-to-end product within a sprint. But that would mean a period of the sprint where we deliver features that then get stuck at the client testing stage until the full product can be tested. Again, not an ideal scenario.