Technical vs. Business ready
Last Post 11 Nov 2013 04:51 PM by Xebord. 6 Replies.
P.Ross
New Member
Posts:87
08 Nov 2013 09:31 PM
    Dear all,

    I’m currently facing the following Scrum “problem”.
    At the end of each Sprint, the Scrum team holds a Sprint Review with the PO and stakeholders. In Scrum, after the Sprint Review there is usually a clear view of which items have been accepted and which haven’t. The incomplete items go back to the backlog.

    We’re a company that constantly works on a large and complex web application. This means that once a story is done, it isn’t completely done, because manuals need to be created, wholesalers need to be informed, etc. In other words, a whole different process starts after development.

    Because of this, the PO wants the stories that are done during the Sprint (demonstrated and approved) to be pushed into some kind of “acceptance environment” where a team of Product Managers can assess the changes, write the manuals, inform clients, do more testing etc.
    This approach somehow doesn’t feel very agile, because in the end it isn’t a potentially shippable product. Technically it’s shippable, but not business-wise…

    How do you deal with this kind of problem?

    Nitin Khanna
    New Member
    Posts:47
    08 Nov 2013 09:55 PM
    Hi P. Ross,

    Allow me to share some of my thoughts....

    I think this really gets into the Definition of Done (DoD) and what the PO considers the shippable increment. Also remember, the person responsible for maximizing ROI and delivering business value is the PO.
    E.g. "Ready to ship to Product Managers, within the Company"

    This may be a blessing in disguise and a nice way to reduce any external dependencies.

    That said, however, there are still some questions --
    A) Are the Product Managers, Manual writers, those engaging with the Clients part of the Review?
    B) Is this an internal product or an external one?
    C) Are there a lot of bugs discovered on further testing?

    Does the Dev Team feel they can do more? If it makes sense, it may be worth asking the rest of the team whether we can take on more (e.g. writing manuals), or adding some other members on the Dev team.

    The DoD may also evolve, accordingly.

    I think the "problem" isn't really a challenge, but an opportunity to reflect and improve wherever possible. Hope some of this helps -- I would also be curious to see how others respond, or are in a similar situation.
    Ganesh Doddi
    New Member
    Posts:1
    08 Nov 2013 10:05 PM
    There are two options.

    Option 1 - Include writing manuals and informing clients in the Sprint
    This is the preferred option, as the information (especially about increment changes) is fresh and the work can be done efficiently. It is important that these people are part of the team, as they play a key role in delivery and also do some of the testing.

    Obviously there are some problems you are encountering with this approach. Alternative solutions can be considered based on the actual problems; it would help if you could specify them.

    Some obvious issues that I think you may be facing:
    * The Product Managers would only be involved part-time in the sprint.
    * The rest of the development team may not have much of the sprint backlog to work on while this activity is being done.
    * The testing done typically does not require changes by the development team.

    For the above issues, I would suggest the following option.
    Option 2 - Create a small second Scrum team, a "Business stream", with the people responsible for the final tasks.
    * A product backlog item from the first team is Done once it is approved by the PO and moved to acceptance testing.
    * Product backlog items 'Done' by the first team become the Sprint backlog for the Business stream.
    * The Sprint for the Business stream could be of a shorter duration than the first team's.
    * The Sprint goal would still be the same for both teams.

    There could be some issues with this approach, but using typical inspect/adapt Scrum practices, I am sure they could be addressed.

    As mentioned earlier, further alternative solutions can be considered if you can highlight the actual problems encountered when using the first option.
    P.Ross
    New Member
    Posts:87
    09 Nov 2013 01:01 PM

    > That said, however, there are still some questions --
    > A) Are the Product Managers, Manual writers, those engaging with the Clients part of the Review?
    > B) Is this an internal product or an external one?
    > C) Are there a lot of bugs discovered on further testing?
    >
    > Does the Dev Team feel they can do more? If it makes sense, it may be worth asking the rest of the team whether we can take on more (e.g. writing manuals), or adding some other members on the Dev team.


    A) Yes
    B) Both; our product is used by ourselves and by our clients.
    C) Yes. Our product is so big and complex, and has such questionable design choices, that when you introduce a feature, something elsewhere can break. Despite that, the Development team is continuously improving itself. Unit testing, pair programming and CI are practices that will be introduced shortly.

    And no, the Development team is not capable of writing these functional manuals. Our Product Managers are hired and skilled to write them.

    I guess the big question is: what would you do if stories never get completely approved during the Sprint Review? There are various (and in my opinion valid) reasons, such as:
    1) What the Development team built is still error-prone. It's not the fault of the current Development team but of bad legacy code.
    2) The software is for external usage, so the manual needs to be created before launching it.
    3) The software is for external usage, so clients need to be notified.
    4) What we usually hear from the PO is that the Sprint Review is too short to do good User Acceptance Testing.


    Robert du Toit
    New Member
    Posts:38
    09 Nov 2013 01:32 PM
    Stories are approved based on the Definition of Done and the Acceptance Criteria. So reasons 2 and 3 that you list are not reasons for not approving a story in the Sprint Review unless they are listed in DoD or Acceptance Criteria.

    This is potentially shippable software. But you still need to do more work (but outside of the team) to deploy it. I agree with Ganesh, another scrum team for the process after development would be good. But really, for the dev team, that shouldn't matter, because they're Done. If you're getting a lot of bugs reported by the Product Managers after you've handed them the stories that's indicative of a Quality problem within the team (or possibly a requirements problem).

    As for 1, your stories need to pass all of the Acceptance Criteria. If that means they have to fix the legacy code, well, that's technical debt that you need to pay when developing the story. If the things that are error-prone are outside of the story's acceptance criteria, then the story should still be accepted (and another story/bug created to fix those errors based on priority). But the dev team should have enough test coverage to be sure that what they build is as error-free as possible.

    For 4, there's nothing to stop the PO from inspecting the stories during development, and arguably the earlier you get input from those close to the Customer the better. Also, I can see how a Sprint Review may not be enough for full User Acceptance Testing. In our organization, after the story is Done and has been approved in the review it is given to the customer for UAT. At that point, it is still potentially shippable, in that our team is confident enough in it that they would deploy it to production whether or not the customer actually does any UAT testing. (This is not to say that there are no bugs, just that our dev team has completed and inspected their work to a reasonable degree and is not aware of any bugs).
    Ian Mitchell
    Advanced Member
    Posts:575
    11 Nov 2013 10:11 AM
    > Because of this, the PO wants the stories that are done during the Sprint
    > (demonstrated and approved) to be pushed into some kind of “acceptance
    > environment” where a team of Product Managers can assess the changes,
    > write the manuals, inform clients, do more testing etc.
    > This approach somehow doesn’t feel very agile because at the end it isn’t
    > a potential shippable product.

    <devils_advocacy>

    This is interesting because in the narrowest and strictest of Scrum terms you don't have a problem. The Product Owner is asking the Development Team for a "push" to acceptance testing. This has become the Development Team's horizon of shippability...incremental delivery to a higher level of staged deployment. That's the limit of the release. It's a push-to-test model, not a release into production. The Definition of Done is correspondingly constrained, and it's likely that technical debt will be incurred until there is eventually a release into live. But since the PO determines product value, and seems to believe that deployment to a staging environment is useful and provides good value, can you really say that the position taken is wrong?

    </devils_advocacy>

    Scrum is agnostic about whether or not "Done" has to mean a delivery into production. This is to allow for complex situations where it is considered pragmatic to have multiple Definitions of Done at various points in the value stream. In your situation, your current level of Done might be appropriate for a sprint, but may be supplemented with a Definition of "Release Done" that provides for manuals and SIT testing.

    In Scrum it is possible to have Definitions of Done that are comparatively weak and far removed from production, since every team has to start somewhere and process ownership may initially be limited. What matters is that, at any given point, a DoD is clearly defined and its "quality" continually challenged and improved. Good quality means reducing batch sizes and enqueued work, so that value does not depreciate before entering production. What was once "push to test" should (eventually) become "pull from live". These improvements often happen only gradually, and are one of the outputs of a Sprint Retrospective. So if I was you, I'd have a word with the PO about whether he or she is really "satisfied" about what Done means.
    Xebord
    New Member
    Posts:13
    11 Nov 2013 04:51 PM
    Hi P.Ross

    It would be best to have clearly defined acceptance criteria for every US and a PO who speaks with one voice. But it sounds like, in your case, that might be hard to achieve. Instead of fighting it, try to tailor your process.

    You mentioned that the team delivers the fully finished US and then someone else gives feedback on whether it's really done. You can treat the US as the initial part of the functionality; the feedback then becomes an additional requirement for another US (or a bug) that you put in the backlog. This is the same way you deal with a huge piece of functionality divided into several USs. For example, if you need to provide a web interface to a database, you can create a first US with a console UI and then a second one with a webpage.

    Let us know your choice and how it works out.
