Never Enough Time to Test
While developers are coding away at the beginning of a sprint, what the heck are the testers doing? Sure, they can develop their test plans, but that isn't as time-consuming as development, so I usually see testers relatively idle until developers start closing out tasks.
This workflow creates a dilemma: the sprint is inevitably end-loaded with testing work, leaving little time for rework and bug fixes. Oftentimes, PBIs are incomplete at the end of a sprint and spill over into the next sprint for completion.
How do we get testers testing sooner, and prevent developers from rushing to get work in their hands?
> How do we get testers testing sooner, and prevent developers from rushing to get work in their hands?
Any or all of:
1) Limiting work in progress, ideally to single piece flow, so throughput increases
2) Cross-training Development Team members so that they can go to where the work is, instead of waiting for it to come to them
3) TDD and/or BDD techniques
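On the TDD point, a minimal red-green sketch may help make it concrete (the function and scenario below are hypothetical, with Python assumed): the tests are drafted first, straight from the acceptance criteria, which gives testers something concrete to contribute on day one, before any production code exists.

```python
# Hypothetical example: tests written *before* the production code exists.
# In TDD these tests fail first (red); the implementation below is then
# written to make them pass (green). apply_discount is illustrative only.

def apply_discount(price: float, percent: float) -> float:
    """Implementation written only after the tests below were agreed on."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Tests a tester could draft at sprint start, from acceptance criteria:
def test_full_price_when_no_discount():
    assert apply_discount(80.0, 0) == 80.0

def test_half_off():
    assert apply_discount(80.0, 50) == 40.0

def test_rejects_invalid_percent():
    try:
        apply_discount(80.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

The point is the ordering, not the framework: the failing tests are the hand-off artifact, so the tester is engaged at the start of the work rather than the end.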
All good suggestions. With respect to limiting work in progress, assuming there is only one work item, we still have the same issue, where most of the work happens at the end - for nontrivial coding tasks, at least.
We don't have an issue with developers running out of work. Did I give that impression? We also don't lack testing capacity. The issue is one of a chronically late hand-off to testers during the sprint.
TDD/BDD doesn't replace an actual QA tester.
> With respect to limiting work in progress, assuming there is only one work
> item, we still have the same issue, where most of the work happens at the end
Yes, you will have that problem if your Sprint Backlog represents an irreducible batch of work. On the other hand, if your Sprint Backlog includes multiple granular items selected from the Product Backlog, there will be the potential for throughput to increase. The more fine-grained the backlog items, the more fluidity there can be in the flow of work.
Yes, it all depends on the refinement status of your backlog. Also, don't assume that testers are for black-box or functional testing only; involve them in writing unit tests. The test specialist should work in close collaboration with the dev specialist while the latter is writing the code. Along with CI, make sure that at least one story is getting completed every second or third day.
I am working with my current team around a similar issue/impediment.
It is symptomatic of an approach built around large stories. One very good solution is to split stories into smaller items that can be completed much more quickly, and therefore engage your "testers" much earlier in the sprint.
To Ian's point, you can also alleviate this impediment by eliminating the roles you are currently maintaining in your sprint team (developers, testers). Resist the temptation to support "hand-offs" between development and testing, and support "Team" ownership of the items accepted each sprint.
In an ideal organization, this would be wonderful. However, we work with black-box testers 12 time zones away. The work effectively goes over the wall. They also lack the skills to write coded unit tests.
Very few stories are simple enough to see completion (dev, code review, test, documentation) on day 2 or 3 of any sprint.
Mark, your response seems to indicate that your stories are too large, in light of your current organizational impediments around testing (hand-off 12 time zones away), and a lack of prioritization around cross-training.
I am assuming that your team is working from a Definition of Done that includes thorough acceptance criteria. You will continue to struggle if you are not grooming your stories effectively.
If you have complete and thorough acceptance criteria, try creating a story around each acceptance criterion. While you may initially perceive this as unneeded "overhead", give the experiment a try for a sprint and analyze the results. I think this approach will improve on some of the issues you are raising.
Most acceptance criteria I see here are just the PBI title pasted into the AC field. I don't know why the QA folks don't refuse to test anything like this, but they do. It could be because everyone is an hourly contractor, and it's in their best interest for things to move slowly.
The Scrum Master should be enforcing these things, but she doesn't. I get up on my soapbox every week about these issues, yet nobody changes...lack of consequences?
Mark, you are now identifying a number of impediments to your organization's Scrum adoption:
- Lack of ownership for sprint work (no repercussions for incomplete sprints)
- Poor acceptance criteria
- Scrum Master not enforcing Scrum
- Team members incentivized to NOT improve. Continuous improvement is a KEY Agile principle.
- Sprints resemble mini-waterfalls
- No Definition of Done (probably no Definition of Ready either)
All of this is in addition to your original issue around optimizing your "testing" team members, which actually should not be your main concern here. Instead of worrying about making sure everyone is staying busy, place your focus on the sprint work and getting those items completed before the end of the sprint.
The customer doesn't care if all of the team members were kept busy. Their focus is on the end work product, and making sure it is useful to them. That is what you and your team should be concerned with as well.
Unfortunately, I have no influence to get these resolved. Raising these issues with executive management would be the only next step. However, I'm already branded a "boat rocker" with all my "process Nazi" stuff. Not kidding. This place is clearly committed to Scrum in Name Only (SINO?)
Mark, I think in this case it comes down to you:
How invested are you in this endeavor?
Are you willing to ride it out and endure the ineffectiveness or are you willing to risk being thrown out by creating a ruckus...
Hard choices. There are better workplaces.
And as my old Agile mentor taught me: every project gets the employees it deserves.
Are there ways to do a stealth improvement? Are there other project members on your side?
Ultimately, this team is effective because it ignores the process' shortcomings and just gets things done through brute force. However, they work at an unsustainable pace, all on multiple projects simultaneously, along with nonstop production support tasks. I don't understand how they aren't all burned out.
I feel your pain.
If it is any consolation, I have worked in similar environments, and I am fairly certain many contributors here at Scrum.org have also faced similar situations.
It is a good thing that you are evangelizing proper Scrum and Agile in your workplace. However, I would not favor a discussion of these impediments with Executive Management if there isn't a clearly visible commitment to Scrum from top management. If that commitment is there, try to engage those individuals (Community of Practice?) around bringing change into your area.
A key skill for Scrum Masters (and it really comes from experience) is learning how to influence others without having any authority.
It can be a very hard sell to point out to others that they are doing something "wrong", and what the "right" way to do it is. This puts people immediately on the defensive, and any help you are willing to offer will often fall flat.
From what you've stated, there are a multitude of "opportunities" for improvement in how your organization practices Scrum. You can't change everything at once. Start by not raising every dysfunctional item you see. Identify one area at a time (come up with your own prioritized Scrum improvement backlog), and research it so that you can discuss not only the poor results of that practice, but also the potential benefits from alternatives to that practice.
Then, pick and choose the right moments to introduce your "observations". You are not calling anyone or anything out here. You're trying to help. And in helping, you're trying to be an agent of change.
Does your team engage in Retrospective meetings at the end of each sprint? These meetings are actually the most critical Scrum ceremony there is, as it supports Scrum and Agile principles around learning and continuous improvement. These meetings are an excellent venue to introduce your observations.
If you're not having these meetings, then this is where you need to start! Raise this serious impediment, and continue raising it until these Retrospective meetings are occurring regularly. An organization/team basically has blinders on if they believe there is nothing they can get better at, or that improvement happens in a vacuum.
I like the analogy "leading a horse to water". If they drink, great! If they don't drink, that is ok, at least you raised the issue, and tried to make it as visible as possible. Start work on your next improvement item, and look for opportunities to discuss it. Also, you never know when the "pain" of a current practice can get to be too much, and your team/organization will want to revisit an issue you brought up earlier.
Hope this helps. The soft approach is usually more effective than the "Process Nazi" approach. Good luck!
Thanks very much for all your insights! I agree the retrospective would be a good place to start. They occur very rarely, and that's the Scrum Master's fault.
I work in a culture where people don't speak up or out. That is, you don't get in trouble for what you don't do.
If I were to point out failings, I would only be able to point to the Scrum Master not doing her job well, and my manager circumventing the process to get his personal goals satisfied. I don't think I can win here.
Perhaps one approach that might apply in your situation would be to not necessarily point out failings, but to simply ask power questions that start with "Why", "What", and "How"?
- Why don't we have more frequent retrospectives?
- How can our 2-week sprints be more effective/productive?
- What criteria tell us that a story is groomed (DoR) or complete (DoD)?
What, Why, and How questions do not challenge the recipient, but instead promote dialogue. Again, I would question much of what takes place in your environment (pick and choose carefully!), but for now, refrain from providing answers.
The best value you can provide right now is just helping others see the impediments and constraints in their current processes.
Great approach. That's a lot like how a therapist would work. Don't give answers, just lead the client with questions, and help them discover the answers, even if the answers might be obvious to you.
I am a tester myself, and the way we get involved early is by testing the feature in the developer's branch itself. This can be before the code is even code-reviewed. This way we identify any issues early and don't need much time to test after the code is integrated.
Timothy, I really like your suggestions. Kudos!
I hear what you are saying about QA running out of things to do early on, after putting together the test plans. As Scrum Master for our team, one of the things I also have QA doing at the moment is "grooming" the acceptance criteria of the PBIs we get, with a view to moving towards automated acceptance testing.
You mentioned that the acceptance criteria for your stories are not ideal, so one option might be to have your QA do the same: look at them, tweak them, and propose a new AC list of the type they would ideally like to see. Of course, you'd need to get the whole team on board and involved, and it's a good retrospective topic, but from the sound of things, with the SINO this may be tricky.
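To make the "groomed AC becomes an automated acceptance test" idea concrete, here is one possible sketch (the story, the `AuthService` class, and all names are hypothetical, with plain Python standing in for whatever test framework a team actually uses): the criterion is first rewritten in Given/When/Then form, then automated almost line for line.

```python
# Hypothetical story: "As a user, I can reset my password."
# Groomed acceptance criterion, phrased so it can be automated directly:
#
#   Given a registered user with a known email
#   When they request a password reset
#   Then a reset token is issued for that email

class AuthService:
    """Toy stand-in for the system under test."""
    def __init__(self):
        self.users = set()
        self.tokens = {}

    def register(self, email):
        self.users.add(email)

    def request_reset(self, email):
        if email not in self.users:
            raise KeyError("unknown user")
        token = f"token-for-{email}"  # a real system would issue a random token
        self.tokens[email] = token
        return token

def test_reset_token_issued_for_registered_user():
    # Given a registered user with a known email
    svc = AuthService()
    svc.register("mark@example.com")
    # When they request a password reset
    token = svc.request_reset("mark@example.com")
    # Then a reset token is issued for that email
    assert svc.tokens["mark@example.com"] == token
```

Once the AC reads like this, QA can draft the test skeletons while development is still in progress, which also attacks the original end-loading problem.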