
Implementing SCRUM and making it report-friendly

Last post 10:04 pm July 30, 2019 by Ian Mitchell
3 replies
08:45 am July 30, 2019

Hi everyone, 

First, let me tell you I have read multiple topics here and am fascinated by your experience and professionalism when answering.

So we have decided to use SCRUM in our team. We run one-week sprints. We estimate based on story points, where each story point represents a half-day of effort. We use JIRA as our main tool.

We have a DEV team and a QA team, and their work is currently quite often separated. The devs create dev tasks against a story; once the dev work has been completed, they move the story to a Pending QA status (meaning all dev tasks related to it have been completed), and the QA team tests the story itself. If there is QA work independent of the devs, the QA team creates a QA task for it.

So our main problem is the following:

We would like clear reporting of how each sprint went: how many of the planned points were actually completed, whether planning matches reality, and so on. We have used the Jira reporting tools, and for now they seem detailed enough. The problem is that if a ticket is not completed during the current sprint and has to go into the next one, it has to be re-estimated and the work already completed is lost from the report. This is why we would have the team close the ticket, then clone it with the remaining work and add the clone to the next sprint. That way each ticket, its estimation and its completed work stay with their respective sprint, and the remaining work is separated out for the new one. However, this practice is not welcomed by the team. They feel it is not clear enough, that we are bending the process to satisfy reporting, and that we are doing double work. How do you track your work? How do you record it, and how do you carry unfinished issues over without losing work while keeping reporting accurate enough?
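For context, here is a rough sketch (Python against Jira's REST API) of how we imagine automating that close-and-clone step instead of doing it by hand each sprint. Treat the base URL, credentials, story-points field id and the "Cloners" link type name as placeholders that vary per Jira instance; closing the original ticket (the transition to use) is left out.

```python
# Sketch: split an unfinished issue at the end of a sprint by cloning the
# remaining work into the next sprint, via Jira's REST API.
# Assumptions (instance-specific): the story-points custom field id and
# the "Cloners" issue link type name.
import requests
from requests.auth import HTTPBasicAuth

JIRA_URL = "https://your-domain.atlassian.net"        # placeholder
AUTH = HTTPBasicAuth("you@example.com", "api-token")  # placeholder credentials
STORY_POINTS_FIELD = "customfield_10016"              # instance-specific field id


def split_unfinished_issue(issue_key, next_sprint_id, remaining_points):
    # Read the original issue so the clone can reuse its project and summary.
    issue = requests.get(f"{JIRA_URL}/rest/api/2/issue/{issue_key}", auth=AUTH).json()
    fields = issue["fields"]

    # Create the clone carrying only the remaining (re-estimated) points.
    clone = requests.post(
        f"{JIRA_URL}/rest/api/2/issue",
        auth=AUTH,
        json={"fields": {
            "project": {"key": fields["project"]["key"]},
            "issuetype": {"id": fields["issuetype"]["id"]},
            "summary": f"{fields['summary']} (remaining work)",
            STORY_POINTS_FIELD: remaining_points,
        }},
    ).json()
    clone_key = clone["key"]

    # Link the clone back to the original for traceability.
    requests.post(
        f"{JIRA_URL}/rest/api/2/issueLink",
        auth=AUTH,
        json={"type": {"name": "Cloners"},
              "inwardIssue": {"key": issue_key},
              "outwardIssue": {"key": clone_key}},
    )

    # Move the clone into the next sprint (Jira Software / Agile API).
    requests.post(
        f"{JIRA_URL}/rest/agile/1.0/sprint/{next_sprint_id}/issue",
        auth=AUTH,
        json={"issues": [clone_key]},
    )
    return clone_key
```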


Thanks


09:46 pm July 30, 2019

Just about everything in Agile Software Development is built around the notion of a cross-functional team. What you have now, with walls or silos around your Development Team and your QA Team, is the opposite of a cross-functional team. And these hand-offs from development to QA are causing a lot of friction.

My first suggestion is to understand why you have this separation and whether it is truly necessary. Even if it is necessary (for compliance, say), you still need a good workflow that ends with done work. That is, your development process should always end with a fully integrated and tested product component. If you need an independent quality assurance activity, it should exist not to find issues but to satisfy regulatory requirements. Having worked in regulated environments, though, I've found that such independence is rarely necessary except for the most critical software systems.

I think if you work on this cross-functionality and focus on getting work to a completely done state within a Sprint timebox, you'll also stop fighting the tools, and a new set of problems and changes will surface. Since you'll be producing working software on a regular basis, the reporting will simply fall out of the tool and out of making work visible, and you can focus on improving the quality and throughput of your process.


09:49 pm July 30, 2019

Cole,

I'm curious who is championing Scrum (NOT AN ACRONYM) in your organization. I ask because you have described a number of anti-Scrum practices and processes.

To question a few, for starters:

We estimate based on story points, where each story point represents a half-day of effort.

Why do you do this? Why not just work with time-based estimates? What value does story-point estimating provide you, if each story point simply equates to a time estimate?

We run one-week sprints.

You should note that only very mature Scrum Teams are capable of executing one-week sprints effectively. Why did you decide on this iteration length, and what impediments and obstacles did such an aggressive approach uncover?

We have a DEV team and a QA team.

There isn't a "Dev" team and a "QA" team in Scrum. There is only the Development Team, composed of all the skills and abilities needed to take items and deliver them according to the Definition of Done (DoD). As you describe it, there seems to be a very visible and wasteful hand-off between development and testing. To me, there isn't much difference between your approach and traditional waterfall development, save the attempt to work in smaller pieces.

My suggestion is to look at your attempt to implement Scrum, and ask why you seem to have difficulty in achieving the main goal of Scrum, which is to deliver a potentially releasable Increment of "Done" product at the end of each Sprint. That should be your focus, instead of looking at ways to better report on and track what is effectively a poor process.


10:04 pm July 30, 2019

So we have decided to use SCRUM in our team.

From what you go on to say, I’d question whether in fact you have. Scrum is defined in the Scrum Guide. It seems you’ve decided to do something else.

