Ongoing long-term testing that needs to run for several sprints
Kenny McCartney
13 Jan 2014 04:37 AM
    Hi,

    We're in the process of adopting Scrum in our group, and we're hitting an issue with ongoing testing of our software. I'd appreciate any advice or thoughts anyone has.

    The group I'm in develops compilers, and one of our ongoing activities is testing - all the developers take part in it. As programmers will know, compilers have dozens of different switches and modes to control language settings, optimizations, etc., and we have *zillions* of test cases (commercial, in-house, regressions). A full test run that exercises all our tests across a good range of options takes many weeks. We also find that, as testing progresses, we uncover all sorts of "gotchas" - badly configured tests, obscure regressions, newly discovered bugs, etc. - so ongoing maintenance is needed.
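
    To give a toy illustration of the scale (the switch names and counts below are invented, but the arithmetic is the point):

        import itertools

        # Invented switches, just to show the combinatorics: 3 optimization
        # levels x 2 language modes x 2 debug settings = 12 configurations.
        opt_levels = ["-O0", "-O2", "-O3"]
        lang_modes = ["-std=c99", "-std=c11"]
        debug_opts = ["-g", ""]

        configs = list(itertools.product(opt_levels, lang_modes, debug_opts))
        print(len(configs))             # 12 configurations

        # With, say, 50,000 test cases that's 600,000 individual runs -
        # at one second each, roughly a week of serial machine time.
        print(len(configs) * 50_000)    # 600000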

    When we started with Scrum, we had a bunch of unchecked results, and our first test-related story was along the lines of "Check the following sets of results, and log defects as required." This seemed sensible (it was a clearly defined task, we could define "Done", and we could estimate the time it would take), but it didn't really work, for a couple of reasons:

    1) Sometimes a single regression can mean that several test runs are invalid - you end up with lots of failures, and the only solution is to throw away those results. In the old days, we'd stop the testing, fix the bug right there and then, and start testing again. Our belief now is that we should log the bug in our bug-tracking system and carry on as best we can for the remainder of the sprint - we shouldn't be diverted into fixing the bug, even though our instincts tell us otherwise. The bug should be prioritized and dealt with in the normal way (probably next sprint).
    2) It can be very difficult to say in advance which results should be checked, and since we generate results all the time, there's always the risk that a recent set of results needs to be looked at straight away. But our current method of deciding in advance which results we'll check means this isn't possible: results generated during the current sprint won't be checked until the next sprint.

    So I'm trying to work out how to write a story that gives us freedom to look at whatever looks most important *during the sprint*, and also gives us freedom to log *and fix* bugs if they're really holding us up (although I expect that to be unusual). I wondered about:

    - “Spend <X> hours on testing activities, whatever they may be”
    ---- The Definition of Done is that the requisite hours are logged. But I know this is wrong. Unfortunately, it represents what we actually do, and what the team want to continue doing.
    - “Ensure testing is ongoing and productive”
    ---- Very vague: no DoD, and no way to estimate story points or hours.

    So neither of those is really "Scrum". And I know what you'll say:
    - "Better engineering practices to ensure that no regressions are introduced and tests are written properly!"
    - "Come up with a way to run regression testing within a sprint! If you can’t do that, you’re not producing shippable software!"

    They’re fine ideas, and we have ongoing process improvement activities that will help in the longer term, but it's just not going to happen right now. Testing will always be hard and a bit unpredictable.

    So, any ideas for how we deal with this testing issue in Scrum? Anyone faced this sort of thing before?



    Thanks,


    Kenny
    Ian Mitchell
    13 Jan 2014 07:51 AM
    > In the old days, we'd stop the testing, fix the
    > bug right there and then, then start testing again

    Yep, and the moment you stop doing that you start incurring technical debt. Why did you stop? Are these tests all manual QA tests, rather than automated unit or BDD tests? Is that how your situation became unmanageable, and is that why regression testing takes weeks rather than minutes?

    If so, I'd suggest getting some TDD unit tests in place, gradually paying off some of the technical debt and improving code coverage sprint by sprint.
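
    For instance, a first TDD-style unit test for one small compiler component might look something like this (a minimal sketch in Python; fold_constants is a hypothetical stand-in for whatever your optimizer's real entry point is):

        import unittest

        def fold_constants(expr):
            # Hypothetical stand-in for a real constant-folding pass:
            # collapses ("+", 2, 3) to 5; leaves anything else untouched.
            op, lhs, rhs = expr
            if op == "+" and isinstance(lhs, int) and isinstance(rhs, int):
                return lhs + rhs
            return expr

        class TestConstantFolding(unittest.TestCase):
            def test_folds_integer_addition(self):
                self.assertEqual(fold_constants(("+", 2, 3)), 5)

            def test_leaves_symbolic_expressions_alone(self):
                self.assertEqual(fold_constants(("+", "x", 3)), ("+", "x", 3))

        if __name__ == "__main__":
            unittest.main()

    Tests at that level run in milliseconds, so thousands of them can run on every commit - which is what lets regression feedback shrink from weeks to minutes.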



    Charles Bradley
    13 Jan 2014 10:43 AM
    Kenny,

    Let's dig deeper for a minute and try to tease apart the rest of the context needed to help you. Think of this as you presenting a symptom or two to a doctor: we, as the doctors, need to run some further labs in order to diagnose accurately and suggest treatment. Just realize that diagnosing accurately over the internet is a bit risky, however well intentioned we are. :-)

    1. Are these tests you speak of fully automated?
    2. Are tests for new code/functionality fully automated?
    3. Does your team apply the Automated Testing Pyramid approach (see the sketch after this list)? http://www.mountaingoatsoftware.com...on-pyramid
    4. How often is your software released?
    5. Is releasing your software with these bugs acceptable to the PO? If so, why? If not, why not?
    6. What is your role at your company? What is your Scrum role?
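
    To make question 3 concrete: the pyramid calls for a broad base of fast unit tests, a smaller middle layer of service-level tests, and only a thin top layer of slow end-to-end runs. Here is a rough sketch of how that tiering might look in pytest (the marker names are my assumption and would need registering in pytest.ini):

        import pytest

        @pytest.mark.unit       # broad base: thousands, run in seconds
        def test_folds_integer_addition():
            assert 2 + 3 == 5   # placeholder for a real optimizer check

        @pytest.mark.service    # middle layer: fewer, run in minutes
        def test_frontend_output_feeds_optimizer():
            pytest.skip("service-level harness not shown in this sketch")

        @pytest.mark.ui         # thin top: full compile-and-run, nightly
        def test_compile_and_execute_sample_program():
            pytest.skip("end-to-end harness not shown in this sketch")

    Running "pytest -m unit" on every commit, and the slower tiers nightly or once per sprint, keeps fast feedback inside the sprint without pretending the whole multi-week matrix fits there.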