Refactoring of code after delivery to test environment
Soliciting ideas on how to deal with this situation.
There is a large piece of functionality that drives the major business process - backend functionality that the user doesn't see. The development team completed the code and delivered it to the test environment for the end users to test, and it failed. After looking into it further, the developers decided to refactor the code.
Do we roll the stories to the next build cycle or do we write defects and put those in the next build? (We're at the end of the build cycle with no more sprints.) If we write defects, what do we do with the stories?
The development team completed the code and delivered it to the test environment for the end users to test, and it failed. After looking into it further, the developers decided to refactor the code.
Why do the development team consider code to be complete when it has not been tested? Doesn't their Definition of Done include testing, and don't they meet their definition each and every Sprint?
Do we roll the stories to the next build cycle or do we write defects and put those in the next build? (We're at the end of the build cycle with no more sprints.)
Isn't every Sprint a build cycle in which a release-quality increment is delivered?
If we write defects, what do we do with the stories?
Do you plan to ensure that all work remaining is accounted for on the Product Backlog, including technical debt?
I'm not sure what you mean by "next build cycle" or by being "at the end of the build cycle with no more Sprints".
If you are using Scrum, you don't have build cycles. You have Sprints that produce potentially releasable Increments. The Product Owner is responsible for deciding if an Increment should be released into the next downstream step or not. What you're describing seems very much like a waterfall, where you are now in an integration and test activity, rather than one iteration in an iterative and incremental approach.
Without more information about your process, it's hard to give concrete advice. However, one actionable item is to spend some time understanding why these issues were not found before the end-user testing. Although feedback should be expected, the testing performed by the Development Team should find anything that would be considered a "showstopper". If it can't, I'd want to figure out where the gaps are and find opportunities to close them.
Thank you for your feedback. You have confirmed what I have known all along - the client (a large government entity) wants to develop in a waterfall method but call it Agile, use Agile-esque terms, and divide up each build period into sub-timeboxes and call them sprints.
Regarding the initial question - "How does an Agile team deal with going back to the drawing board after having the business users test in a non-production environment?" - I'm curious how a Scrum Master would coach the team on creating defects versus new stories, and what the team should do with the initial stories.
To touch on the comments above: we do have a Definition of Done, which includes testing in the same non-PRD environment that the business uses to test. Our client (government) requires the team to develop for several weeks (which they've broken up into sprints), after which they must stop development ("code freeze"). The client then gathers business users to test the new functionality (in the same non-PRD environment mentioned above) for one week (user acceptance testing). After this, assuming there is new functionality to release to production, the government client requires the team to deliver the new functionality to the production environment, and that process requires the business to test again (in production). (These environments, also, are not mirrors of each other, which adds another level of complexity.) Assuming all goes well with the production release, the development team can then go back to developing the new functionality that the business requires for the next build cycle, which may also include fixing defects found in the above "user acceptance testing" cycle.
The functionality that is being refactored was written to the acceptance criteria and passed testing until it was delivered to the business for their testing. At that point, deeper details about the workflow were discussed, and based on that feedback, the development team decided to refactor the code. So now we have stories that were completed per our Definition of Done, but after the users tested the functionality, their feedback prompted a refactor.
Thanks for your support.