
Blackbox testing in an environment where work is split up in different branches with strong dependencies

Last post 08:47 am October 28, 2013 by Prab M
5 replies
Anonymous
09:09 am October 14, 2013

Hi all,

I’m struggling with the following thought:

Let’s say we have the following backlog item:
- I would like to order a product x.

During the Sprint Planning Meeting it turns out that the process of ordering product x has to be done with the help of a wizard consisting of 5 mandatory steps.
1. Step 1 – insert basic user information.
2. Step 2 – configuration page
3. Step 3 – configuration page
4. Step 4 – configuration page
5. Step 5 – order!
Let's say the team has 3 developers (Mary, John and Peter) and 1 tester.
Peter is working on step 1, John on step 2, and Mary on the implementation of step 3, each in a separate version-control branch.

As agreed in the Definition of Done an item can only be closed if it is properly ‘white box tested’ by the developer and ‘black box tested’ by the tester.
So in order for them to close their item (get it to done), a certain amount of white box and black box testing has to be performed.
How should black box testing be done effectively in an environment where the work is split up in different branches with strong dependencies?


10:00 am October 14, 2013

It doesn't matter how many branches there are, or how strong the dependencies: what matters is that the pieces of work can all be brought together into a potentially releasable increment. The Definition of Done should be sufficient for each item to be included in that increment.

Is that the case here though? How are the items in the various branches merged into an end-of-sprint deliverable?


10:02 pm October 16, 2013

Is your story too big? If each step provides business value, you can split the steps into separate stories.


06:21 am October 17, 2013

Robert is right; the story could be too big (it does sound suspiciously large). For example, the last step (the one called "order") might well be a story candidate.

Having smaller stories is the "S" in the INVEST criteria. Smaller stories are less likely to be impeded by internal dependencies such as testing, as well as external blockages. In short, don't plan to have impediments. If you know they are there, plan around them. This can certainly involve refactoring stories before a team accepts them onto their Sprint Backlog, such that they become achievable as per the Definition of Done.


08:44 am October 28, 2013

The most effective way is to follow Continuous Integration. Make it a point to check in code which compiles and doesn't cause any regressions as regularly as possible.

This ensures that all developers are always using the most up to date version of the software.

Black box testing can then start even while the feature is only partially done, and can check for regressions. If all the tests are written beforehand and a Test Driven approach is followed, many of the tests will fail at first.
As developers keep integrating and checking in their implementations, you will find more and more of the tests passing.
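To make the test-first idea concrete, here is a minimal sketch in Python using the standard unittest module. The OrderWizard class, its step methods, and the sample data are hypothetical placeholders invented for illustration, not anything from the original post; the point is that the black-box tests exist before the feature does, fail on day 1, and start passing as each developer's branch is merged into the mainline.

```python
import unittest


class OrderWizard:
    """Hypothetical interface for the 5-step order wizard.

    Each developer replaces one NotImplementedError with a real
    implementation on their branch; after merging, the matching
    test below starts to pass.
    """

    def enter_user_info(self, name, email):
        raise NotImplementedError("step 1 not merged yet")

    def configure(self, step, options):
        raise NotImplementedError("steps 2-4 not merged yet")

    def place_order(self):
        raise NotImplementedError("step 5 not merged yet")


class OrderWizardBlackBoxTests(unittest.TestCase):
    """Black-box tests written up front, before the feature exists.

    With a test-first approach they all fail initially; the number of
    failures shrinks as branches are integrated into the mainline.
    """

    def setUp(self):
        self.wizard = OrderWizard()

    def test_step1_accepts_basic_user_information(self):
        # Passes once step 1 is merged.
        self.wizard.enter_user_info("Mary", "mary@example.com")

    def test_steps_2_to_4_accept_configuration(self):
        # Passes once the configuration pages (steps 2-4) are merged.
        self.wizard.enter_user_info("Mary", "mary@example.com")
        for step in (2, 3, 4):
            self.wizard.configure(step, options={"colour": "blue"})

    def test_step5_places_the_order(self):
        # The end-to-end check: only passes when all five steps are integrated.
        self.wizard.enter_user_info("Mary", "mary@example.com")
        for step in (2, 3, 4):
            self.wizard.configure(step, options={})
        self.assertIsNotNone(self.wizard.place_order())


if __name__ == "__main__":
    unittest.main()
```

Running this suite on every check-in (the Continuous Integration part) gives the team a live measure of how much of the wizard is actually done, and catches regressions in steps that were already passing.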

TDD is a powerful technique and very few teams get it right, but I would recommend that you try this approach.

Coming back to your question, the answer is most definitely Continuous Integration and automated testing from day 1 to ensure no regressions are being introduced.


08:47 am October 28, 2013

Developers can work in their own private workspaces or individual branches, but at the end of the day the code has to be checked into the main branch after the required merges.

Remember that transparency into what each developer is working on is required. There's no point in hiding behind a private branch endlessly.

