Product in pre-prod, QA not part of sprint

Last post 01:10 pm July 22, 2021
by Tony Georgiadis
6 replies
08:49 am July 15, 2021

Hi all,

I wanted to share an issue I am facing at the moment.

The product I am working with is in pre-production, with the soft release set for the end of the year.

The product director decided not to have QA test the stories and log bugs during the same sprint in which a task is being developed. The definition of done he insisted on is: once the work is developed, the devs/artists have signed it off, and he's happy with it, the task is closed. QA then reviews it and logs bugs against it in the next sprint, where they re-open the task.

I tried to explain to him that this is exactly what we want to avoid - having QA test the previous sprint's tasks - and that it is not very inclusive of QA in the development team. Another reason he gives me is that the outsourced QA team we are using is not part of the team, they are a dependency, and thus the bugs they log should not be counted in the sprint anyway.

Before I confront him further I wanted to ask: is there any reason why a PO or Director would do that, i.e. have a DoD that does not include QA during pre-prod? Is it a trick to make sure sprints are completed on time with a high story completion rate, or does it make sense in some cases?

Thank you for your opinions.

04:30 pm July 15, 2021

There are perfectly valid reasons to have downstream integration and QA activities that occur outside or following a Scrum Team's sprint. I'm not sure that this is it, though.

From your description, it seems like QA is an afterthought and your process has become largely sequential: the design and development work happens first and is then handed off to another team for testing. This tends to be antithetical to most agile methods, including Scrum. Even in cases where you have downstream integration or independent QA outside the Scrum Team, the Scrum Team should still ensure that their work is of the highest quality, with those downstream activities satisfying only contractual, legal, regulatory, or other compliance purposes.

I can't think of any good reason to exclude quality activities from the Scrum Team. It seems like many anti-patterns are going on here, perhaps in an attempt to reduce cost or make it seem like work is happening faster. Either way, I wouldn't consider this approach to be consistent with agile or lean methods.

07:38 pm July 15, 2021

My experience is that the later you find a defect, the more expensive it is to fix. In fact, any undone work will cost more to finish at a later date.

Your process is more like waterfall, since development and testing are not completed in the same Sprint. Because of the loss of transparency, forecasting will be difficult. There also seems to be a lot of context switching, since the developers have to stop working on something in the current Sprint to fix a defect from the last one. Context switching is known to hurt productivity.

A better option might be to allow the developers and QA people to self-organize into a cross-functional Scrum Team to lessen handoffs and allow for closer collaboration.

07:43 pm July 15, 2021

the definition of done he insisted on having was once the product was developed and devs/artist have signed it off and he's happy with it, the task is closed.

Who's accountable for quality here: the director, or the Developers doing the work? When there turns out to be a problem with quality later on, will he be the one who tracks, unpicks, and fixes the issues?

04:28 pm July 16, 2021

I agree with everything that has been said, but it sounds like you have a very title-driven, command-and-control hierarchy in your organization. So this is where I would fall back on my premise that pain is a powerful motivator. Disagree but support his decision, and then start tracking data to show how often "done" work is becoming "undone":

- Track the time it takes to address those items.

- Track how much time those activities add to the actual duration needed to reach an increment that can be delivered to the end user.

- Track how many sprints deliver "done" work without actually providing that done work to the end user.

- Track how long it actually takes to deliver value to the end user.

- Track the developers' satisfaction with the process and how they feel about getting to the end result. It is a "touchy-feely" metric, but if the developers aren't happy, it will affect their desire to do good quality work.

We all know that getting to a truly agile state is not easy and takes time. The "numbers people" at the top don't easily grasp tracking value, so we often have to give them numbers that resonate with them. Doing so isn't very agile, but in reality it is often necessary.

02:01 am July 22, 2021

Daniel - good advice. Empiricism. Collect the data and present the data.

01:10 pm July 22, 2021

First I'd try to understand why he insists on doing QA only in the next sprint. Why would he not want to bring dev and QA in sync and let both work together towards creating done increments?

Then, I'd personally present my thoughts and what I believe are the risks of his approach. A few worth mentioning:

- Too much context switching for devs, potentially. When QA reviews the work and discovers bugs, development has to switch back to a piece of work they last touched a while ago.

- When the above occurs, the code might already be obsolete by the time QA happens. Imagine you code piece A in Sprint 1, and then in Sprint 2 you extend it into A1. QA would presumably test on an environment that holds piece A, but the devs have already extended the code to A1. So which is the right thing to test? You risk spending time figuring out whether a bug found in piece A is also a bug in A1, and unless the latter is deployed to an environment you will never know.

So to me, doing QA only in the next sprint can become complex and problematic.

As others said, you could go for empiricism: raise the risks, but if the product director insists, let the team go through some pain, invite reflection in retrospectives, and adapt accordingly. I hope that when trying to adapt, the product director will allow space for change to happen. Hope this helps! Good luck!