In-Sprint automation

Last post 09:43 pm October 18, 2019
by Ian Mitchell
6 replies
06:27 pm October 16, 2019

What are the benefits of having in-sprint automation i.e. automating the new functional test cases within the sprint window?

When a team adopts this approach, it invests heavily in writing the scripts and does not do any exploratory testing.

Because of the time-box, the team focuses only on building the automated test cases. It is not able to explore various real-life test scenarios, and that is causing defect leakage.

My understanding is that, within the Sprint, the team should do more manual (exploratory) testing to confirm the new functionality.

At a regular frequency, the team should run the automated regression suite against the dev code base (not against the local code base).

And automation test coverage should only be finalized after we have stable (merged) code, i.e. after completion of the current Sprint.

It should follow a "current sprint -1" cadence for automating the test cases.

What’s your take on this topic?

06:47 pm October 16, 2019

I'd consider this an expansion of the team's Definition of Done in order to increase quality. It ensures the automated scripts for the new functionality are considered during refinement, and that the team becomes consistent at producing them each and every Sprint. Otherwise there could be a situation where the team intends to 'get to it later' with good intentions but never does.

If adopting this practice is causing the team to let more defects through because of the lack of real-life testing, then perhaps they need to take a step back and inspect their work to determine whether they're ready to expand the DoD.

07:14 pm October 16, 2019

It might be worth considering the level of testing being automated per Story.

Typically, you'd want to ensure just the smoke-level tests are automated: one to three stable, high-confidence, basic functionality checks, so that any developer can later run the expanding smoke suite as part of their build process. I've seen DoDs state that Devs can't hand off work to Testers until the automated smoke tests pass. This helps QA work on stable builds and helps prevent Devs from jumping into new work too quickly.
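To make the idea concrete, a smoke suite can be as simple as a handful of tagged checks that any developer can run as a build step. This is only a sketch; the check names and the tiny registry are invented for illustration (in practice a pytest marker such as `-m smoke` does the same job):

```python
# Illustrative sketch: tag a few stable, high-confidence checks as "smoke"
# so the build can run just those. All check names here are hypothetical.
SMOKE_SUITE = []

def smoke(check_fn):
    """Register a check as part of the smoke suite."""
    SMOKE_SUITE.append(check_fn)
    return check_fn

@smoke
def check_login_page_loads():
    return True  # stand-in for a real HTTP/UI check

@smoke
def check_search_returns_results():
    return True  # stand-in for a real query check

def check_full_checkout_flow():
    # Broader regression-level check; deliberately NOT tagged as smoke.
    return True

def run_smoke_suite():
    """Run only the tagged checks, as a developer's build step might."""
    return all(check() for check in SMOKE_SUITE)
```

With pytest, the equivalent would be marking tests with `@pytest.mark.smoke` and running `pytest -m smoke`, leaving the regression-level tests out of the developer's fast feedback loop.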

Regression scope is also very handy to automate because it's a considerable testing time-saver, but it's definitely not the same project. Typically you'd want a dedicated test automation team per product to build and monitor it, because that team could serve several teams.

I'd recommend automating the basic checks first, digging into manual exploratory testing to ensure the team's quality standards are met, and then, if available capacity exists, considering taking on tasks to expand the regression suite.

09:49 pm October 16, 2019

I would suggest avoiding leaving automated tests until the next sprint - they should be done as part of the sprint that the story is being developed in.

The teams should be trying to automate as much as possible - unit tests, functional/component tests, integration tests. There may be some manual exploratory tests being done as well, but most of the issues would be caught by the automated tests. This automation may take time to add, though.

Your team should enhance/clarify the Definition of Done, as Tony mentioned, to specify which tests are to be included.

John: why would you just automate a couple of smoke tests? Why would Devs "hand off" to Testers? Why would you have a dedicated test automation team? It should be a single team effort to develop and run the automated tests.

06:06 pm October 17, 2019

Hey Ben. Thanks for the questions!

why would you just automate a couple of smoke tests?

My suggestion to focus on smoke tests in the Sprint is somewhat on par with your thought. The feature is still in development, so writing extensive automation for it should be done in a later/following Sprint. Just cover the minimum checks now to ensure basic stability remains through each subsequent check-in by anyone else on the team.

That said, consider what automating anything at all is for. The purpose isn't to eliminate all manual testing effort. A Potentially Releasable Increment is achieved when every appropriate level of validation is complete and the work is "Done" and ready to be released without any additional validation. Automation assists by ensuring that what you made today won't break what anyone made last Sprint and beyond, but the scope of the testing should always start with covering the most stable, high-confidence test scenarios, aka the critical level.

Why would Devs "hand off" to Testers?

I used the term handoff loosely. The intent was to capture the moment devs would consider their work done and things ready to test.

I had mobile development on my brain at the time. Devs often code, build/compile, test that the device is testable and meets the Story's ACs, and leave the device wherever testers would expect to find it when they are ready to do their comprehensive work on it.

Why would you have a dedicated test automation team? It should be a single team effort to develop and run the automated tests.

I'm not saying to have a dedicated automation team as a rule, but definitely consider using one per product. You sort of answered the 'why' question yourself: there is great value in a single team effort to develop and run automated tests. It might help to picture it as an extension of DevOps.

Most products have multiple teams adding different features at various times. If each team owns its automation solution entirely, there will be gaps in integration coverage for ensuring the Potentially Releasable Increment is ready to go into customers' hands immediately.

It's too nearsighted to write suites of automation that only test the completed work of one team whenever it adds functionality to a product. Therefore, you'd want a single automation solution connected to the CI/CD pipeline, so that each build/merge immediately verifies whether it has introduced a critical issue.

Dedicating a team to design and maintain this solution breeds cross-team collaboration. Each team would have an environment to automate in, using the same codebase. They can add their own test cases as they see fit. When the feature is ready to be released, the test cases for that feature are enabled (typically by way of a quick, light configuration change to the job doing the release). Now the integration suite is expanded, and all the other product teams contribute toward overall product quality.

If each team focuses on smoke tests only, they can quickly establish coverage for critical-level bugs. As for regression, it's not difficult to create stories describing what to automate and how it should be tested for a finished feature. Therefore, the originating team won't be siloed as the exclusive SME for that feature. A single automation team can, for example, use Kanban to knock out test-case stories several times a day, each submitted by the individual product teams.

10:12 pm October 17, 2019

Thanks John for the explanation! You make some good points and it's helped me understand how that works.

09:43 pm October 18, 2019

What are the benefits of having in-sprint automation i.e. automating the new functional test cases within the sprint window?

What is the alternative, if the team is to create increments of "Done" work every Sprint?

When a team adopts this approach, it invests heavily in writing the scripts and does not do any exploratory testing.

Why does authoring test scripts demand such a heavy investment once the Sprint starts? Shouldn't the acceptance criteria already be refined and clear, if an item is to be considered ready for Sprint Planning?

At a regular frequency, the team should run the automated regression suite against the dev code base (not against the local code base).

Shouldn't the regression suite be run whenever the code base nearest the production environment changes, i.e. after every check-in?
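One common way to get that cadence is a hook or pipeline step that triggers the regression suite on every check-in. A minimal sketch, assuming a Python-based suite; the default command is an invented example, and in practice this would live in a CI job or a Git hook such as `.git/hooks/post-commit`:

```python
# Illustrative sketch: run the regression suite whenever the shared code
# base changes, e.g. from a post-commit hook or a CI pipeline step.
import subprocess
import sys

def run_regression_on_checkin(suite_cmd=None):
    """Run the regression suite command; return True when all checks pass."""
    if suite_cmd is None:
        # Hypothetical default: a pytest suite whose tests carry a
        # "regression" marker. Adjust to whatever the team actually uses.
        suite_cmd = [sys.executable, "-m", "pytest", "-m", "regression"]
    result = subprocess.run(suite_cmd, capture_output=True)
    return result.returncode == 0
```

The point is less the mechanism than the trigger: the suite runs against the integrated code base on every change, so instability is detected within minutes rather than at the end of the Sprint.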

And automation test coverage should only be finalized after we have stable (merged) code, i.e. after completion of the current Sprint.

If the code is unstable, shouldn't that be detected and remedied as quickly as possible? How important do you think it is, in agile practice, to integrate and test work continuously?

It should follow a "current sprint -1" cadence for automating the test cases.

What would the consequences then be for delivering an increment of release quality in the current Sprint?