Test cases as part of the staged deployment process.
Does anyone have any experience with testing as part of the definition of done?
In our company we work with short sprints (of one week). We also have a staged environment. At the end of every sprint we release to the test environment. At the moment, the test cases defined as part of the definition of done are those that have to be passed before we deploy to the test environment.
In the next sprint we have a backlog item which is "deployment to acceptance environment", with test cases that have to be passed. And the same for production. Does anyone have experience with integrating this into one backlog item, or reasons not to?
We started with Scrum recently and do not have many automated tests yet.
Comments and tips on how to implement this are very welcome.
Rik de Groot
Rik de Groot | QA Manager | Bureau: 010-8503928 | email@example.com
ExpertDoc BV | Veerkade 8d | 3016 DE Rotterdam | Website: www.expertdoc.nl
Why don't you add the test cases for the stages above TEST to those that have to be passed to get onto TEST? A "Done" story should be potentially shippable, which means it should be tested thoroughly and be production ready. In my opinion, every test that is necessary to check whether it is ready for productive use should therefore be executed before it is Done.
But since I think you already know this, what did I miss? :)
If you are only deploying into a test environment, then each increment is not potentially shippable. What you need to do is look at how you can improve your Definition of Done (DoD) to include the sort of QA/UAT/system testing that is currently being done in your Test and Acceptance environments.
Part of the problem may be that you have work silos, with explicit tester roles being assigned. In Scrum, each developer should be able to do all of the requisite testing.
It sounds as though there is an implicit waterfall-style process in your organisation, which is represented by stage-gated deployments to successive environments. This is a very common barrier to agile transitioning. Using backlog items to facilitate the transitions between stage gates is also very common (I've done it myself) but it is only a sticking plaster remedy.
If I were you, I'd be looking to automate as much of the deployment procedure across environments as possible. In particular, I'd want Acceptance to mirror Production in terms of its configuration. If I could get a deployment to Acceptance as part of the DoD, then that would leave Production as a political battle to fight out with management and the infrastructure teams (rights to publish and/or getting change requests & release slots pre-booked, etc.).
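To make the idea concrete, here is a minimal sketch of automated promotion through staged environments. Everything in it is hypothetical: the environment names, and the `run_tests`/`deploy` helpers, are stand-ins for whatever real test runner and deployment tooling your team uses (typically driven by a CI server).

```python
# Illustrative sketch only: run_tests and deploy are hypothetical
# placeholders for your real tooling, not an actual deployment API.

ENVIRONMENTS = ["test", "acceptance", "production"]

def run_tests(env):
    """Placeholder: run the automated test suite against `env`.
    Replace with a call to your real test runner; returns True
    when every test for that environment passes."""
    return True

def deploy(build, env):
    """Placeholder: deploy `build` to `env` via your real deploy scripts."""
    print(f"deploying {build} to {env}")

def promote(build):
    """Deploy through each environment in order, stopping at the first
    environment whose tests fail -- a simple automated stage gate."""
    for env in ENVIRONMENTS:
        deploy(build, env)
        if not run_tests(env):
            return env  # promotion stops here
    return ENVIRONMENTS[-1]

promote("build-42")
```

The point of the sketch is that once each environment's tests are automated, "passing the stage gate" becomes part of running one pipeline per PBI, rather than a separate backlog item per environment.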
@peter, @ian: Thanks for your quick comments.
I've been thinking about what you're suggesting. Probably it is more a matter of terminology rather than possibility. I agree that each increment should be shippable, and that ALL testing should be part of the definition of done.
The point I am struggling with is that we have different environments for different types of test. Indeed Acceptance is (almost) a mirror of Production.
If we automate the deployment to each environment after the tests on the previous environment have passed successfully, I think the tests, although they have to run successively, can all be executed as part of each PBI.
The suggestion Ian is making may be a nice in-between step for the moment, until most tests are automated.
I'll include all up to acceptance as part of the DoD.
Thank you again for your suggestions
Testing web apps has to be done over the internet or an intranet. Points to consider include integration, security, UI, performance, and compatibility with different browsers.
I realise I'm very very late to this thread but I wanted to share my input.
The DoD should be a list of tasks that must be completed for each PBI, which tells you whether the work is "Done". Scrum states that work should be "potentially shippable" at the end of each sprint, so as a minimum the DoD should include whatever is required for that.
In my mind DoD should include:
- Acceptance Criteria met (whether by automated or manual means)
- Non-functional criteria met
- Can be deployed (again, ideally through automated means)
- Any static code analysis/CRs as appropriate
If the acceptance tests aren't passed then you can't state that the PBI is ready to be shipped. Likewise with the other points.
This means that testing, almost by definition, has to be included in your DoD. Ideally these should be automated tests created by the team up front, but not necessarily. You can have manual steps in your Definition of Done!
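As a small illustration of the checklist idea above, the DoD can be expressed as explicit named checks per PBI. The check names here simply mirror the bullet list; which items are verified by automation and which are ticked off manually by a person is entirely up to the team.

```python
# Sketch: the DoD as an explicit checklist. The item names are
# illustrative, taken from the bullet list in this post.

DEFINITION_OF_DONE = [
    "acceptance criteria met",
    "non-functional criteria met",
    "deployable",
    "static analysis / code review passed",
]

def is_done(completed_checks):
    """A PBI is Done only when every DoD item has been completed --
    a partially checked list means the PBI is not shippable."""
    return all(item in completed_checks for item in DEFINITION_OF_DONE)

# A PBI with only its acceptance tests passed is not Done:
assert not is_done({"acceptance criteria met"})
# Only the full checklist makes it Done:
assert is_done(set(DEFINITION_OF_DONE))
```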