How much testing at the end of iteration?

Last post 05:08 pm May 30, 2018
by Long Luong
12 replies
06:01 am February 22, 2016

Hello all. I'm a new QA in an Agile Scrum team, so I want to get some points of view on things I'm not sure about.

We have a small team (4-5 people) and 2-week iterations.
We have 2 environments: 1) dev-env, where I check tasks during the iteration; 2) stage-env, where we deploy right after the sprint ends for the customer to check.

So we are discussing how much testing should be done.

Is it efficient to re-check the functionality on the first environment before deploying to the second, given that all tasks have already been tested?
And after deploying, to check again on the second environment before sending to the customer?

The problem is that we have quite a small budget for testing.

10:50 am February 22, 2016

In scrum, your increment should be "potentially shippable" at the end of the Sprint.
Is it really shippable if the increment still needs to be tested after the Sprint?

If you have budget for coding, you have budget for testing.
If you code more than you test, you are coding too much.

02:31 pm February 22, 2016

> Is it efficient to re-check the functionality
> on the first environment before deploying to
> the second, given that all tasks have already been tested?
>
> And after deploying, to check again on the second
> environment before sending to the customer?

Wouldn't it be more efficient to use a test-first development approach, where the qualities needed for release are asserted prior to implementation, and thereby reduce the potential for rework?

08:00 am April 29, 2016

It looks like you are doing enough testing in your test environment and the Increment is shippable. I believe you are talking about testing in the production/staging environment, right?

Why don't you include in your Definition of Done (DoD) that a PBI should be deployed, tested and verified on the staging/production environment? This should be done within the Sprint, not after the Sprint.

I agree with Oliver:

If you have budget for coding, you have budget for testing.
If you code more than you test, you are coding too much.

02:24 am May 3, 2016

Try to automate tests as much as you can. Automate not only your unit tests but also your end-to-end tests. That way you have a safety net ensuring that your new features do not break existing ones.
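To make the idea concrete, here is a minimal sketch of such a safety net in Python, keeping fast unit tests and a user-level end-to-end check in one suite. The `apply_discount` feature and the discount codes are hypothetical stand-ins for whatever the team shipped this Sprint:

```python
# Hypothetical feature under test: a minimal discount calculator.
def apply_discount(total, code):
    """Return the order total after applying a discount code."""
    rates = {"SAVE10": 0.10, "SAVE20": 0.20}
    return round(total * (1 - rates.get(code, 0.0)), 2)

# Unit tests: fast, isolated checks run on every commit.
def test_known_code():
    assert apply_discount(100.0, "SAVE10") == 90.0

def test_unknown_code_is_ignored():
    assert apply_discount(100.0, "BOGUS") == 100.0

# End-to-end test: exercises the whole checkout flow the way a user
# would, so a new feature cannot silently break an existing one.
def test_checkout_flow():
    cart = [19.99, 5.01]
    assert apply_discount(sum(cart), "SAVE20") == 20.0

if __name__ == "__main__":
    for test in (test_known_code, test_unknown_code_is_ignored, test_checkout_flow):
        test()
    print("all tests passed")
```

Because the end-to-end test lives in the same suite as the unit tests, a regression in an old feature fails the build immediately rather than surfacing during post-sprint manual checks.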

09:56 pm November 7, 2017

Hi all, we actually face the same problems as j.k. Maybe with one exception: I think we have enough testing resources and they are able to manage their tasks. But however you organize it, even if QA tests in parallel with the development process, at the end there will always be a gap between the developers finishing the very last task and testing finishing afterwards. In an ideal world this affects only one issue, the very last one, as all the others have been tested in parallel with development. But this very last issue still remains.

So either the engineers planned well, are fast enough and finish their tasks in time. Let's even assume one or two days prior to the end of the Sprint. QA has enough time to test, but what are the developers doing while waiting for test results? The next Sprint hasn't started yet, and planning the next one does not keep them busy. And before anybody suggests it: no, I personally do not want developers to help with testing. I do not want them to test their own work.

The other possibility: the devs finish just in time on the very last Sprint day, and testing of the last issue happens after the Sprint. I don't like this scenario either.

Apart from that: as our QA tests in parallel, they do so on the DEV environment. They even HAVE to, as we only deploy to stage after the Sprint (the potentially shippable version). Hence, on stage an ADDITIONAL round of testing (AFTER all tasks have been completed) is necessary. Same problem: if we can't finish the Sprint (because it is not DONE before testing has approved it), what are the developers supposed to do in the meantime? (CI/CD would help, I know, but is currently not in place.)


09:04 pm November 8, 2017

Karl-Uwe,

Just a couple questions based on your recap of your current situation:

Is this a concern that your Development Team has expressed, or is it your own observation? If it is your own concern, I would ask why you feel responsible for keeping your developers busy?

I would strongly suggest that you and your team review the 12 Agile Principles. They speak to delivery of working software and customer satisfaction, not to local optimization.

In my experience, sentiments like this indicate an underlying level of mistrust.   Do you trust your developers to use the time at the end of a sprint wisely (i.e. - refinement, learning, addressing tech debt, knowledge transfer, cross-training, just "thinking", etc.)?

08:38 pm November 14, 2017

Hi Timothy, thanks for your answer.

Well, good questions. In fact this is my observation, or better: my concern. This might be due to the fact that we work with an (external) off-site development team. Apparently this is a matter of me being anxious about losing "control", or of learning how to deal with a certain level of uncertainty, I assume. But you're right: in the end, other metrics will show whether we were successful or whether my concern is justified, e.g. by looking at the velocity or our ability to deliver good software.

Thanks again for the hints!

11:37 am November 15, 2017

Hey Carlos, very nice question, and there has been an interesting discussion in your post. I would like to share some points related to Scrum iterations. I hope they are useful for you.

A missed iteration would mean, I assume, that the team failed to deliver some or all of their Sprint commitments as represented by User Stories done: delivered, demoed and accepted.

This happens especially early on with inexperienced Scrum Masters and teams. It should not be treated as a train smash but as a valuable learning opportunity. This is how Scrum makes us better: the difference between our commitment and our delivery is transparent, the underlying reasons are examined, and everyone can be a little bit wiser going forward.

Some teams make the mistake at this point of trying to alter the Definition of Done so that they can fake future Sprint completions. Don't do this. User Stories should only be considered done when the Sprint's work is deployed. If it's not live and in production, then it's not working software, and you've made documentation and processes more important than working software.

I'm curious whether my understanding of the question is correct.

08:11 am November 16, 2017

User Stories should only be considered done when the Sprint's work is deployed.

Where does the Scrum Guide say that every increment has to be deployed?

08:05 pm November 18, 2017

@Mia and Julian,

thanks for your contributions to this discussion. I think Julian is right: it's not about deployment after a sprint, but about delivering a potentially shippable version. Sometimes you might need more than one Sprint to deliver real value, even though an implemented feature may technically already work.

But anyway, regarding my initial question, I have meanwhile come to the following conclusions:

- DONE always includes testing. There is no such thing as a "testing sprint" following the "real" sprint. If an issue is not tested by the end of a sprint, the issue isn't done and has to be moved into the next sprint.

- My concern was that there is always a time gap between the last development activities and the subsequent testing. To prevent this (or at least to shorten this gap),

  • the team has to be able to split tasks into very small bits, so that they can deliver an increment every day in order to get tested immediately. Hence daily delivery is highly recommended
  • the team must technically be able to deploy these bits onto the respective testing environment every day
  • QA must be able to test immediately (hence: dedicated testers and automated tests are necessary)
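The daily deploy-and-test loop above can be backed by an automated smoke test that runs immediately after each deployment to the testing environment. A minimal sketch in Python, assuming the deployed increment exposes a `/health` endpoint; the endpoint name is hypothetical, and a tiny local server stands in for the real stage-env so the example is self-contained:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the deployed increment: a tiny local server with a
# health endpoint (in practice base_url would point at stage-env).
class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep test output quiet

def smoke_test(base_url):
    """Fail fast if the freshly deployed increment is not healthy."""
    with urllib.request.urlopen(base_url + "/health", timeout=5) as resp:
        assert resp.status == 200
        assert resp.read() == b"ok"

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    smoke_test("http://127.0.0.1:%d" % server.server_address[1])
    server.shutdown()
    print("smoke test passed")
```

Hooked into the deployment script, such a check gives QA an immediate go/no-go signal for each small daily increment instead of a big manual round at the end of the Sprint.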

09:03 am November 19, 2017

In a Scrum Development Team, which ought to be cross-functional, the QA expertise is part of the Dev Team, so it is actually very close to, and ready to work with, the "coder" expertise.

06:18 am May 30, 2018

Hi all, I have the same problems, but the answers here haven't solved mine yet.

Our Sprint lasts for 2 weeks, including all testing and deployment. Testing is done as an integral part of the development cycle.

However, in the last two days of the Sprint our QCs focus all of their efforts on testing in the staging environment, which in our case is the UAT environment, to make sure the eventual deployment will go well. By this point all the tasks have been done, so the developers do no further Sprint work, only communicating with QC about bugs and fixing bugs found in UAT (bugs in UAT are not very frequent).

This leads to two issues:

1. It is a heavy burden to test all of the Sprint tasks in the staging environment in just two days.

2. Our developers basically have no real task to do during this period. Of course they can spend time refactoring the code, but this can introduce bugs, or they can spend time learning something new. Either way, if there is a better option, I would rather have them reviewing/refactoring the code during development, or studying during development, instead of in the last two days of the Sprint.

Please shed some light on me :D

Thank you very much!