Trying to transition from ad-hoc coding to Scrum - what is in a fully refined user story?
I'm a BA/PO. Most of our devs have been doing ad-hoc coding: talk to the user, code whatever they ask, repeat. The pitfalls are obvious - we can't predict when the product will be releasable in any form, users use the dev team to "try out" design ideas and then discard them, which wastes time and resources, and so on.
I have been trying to convince the devs that a refined user story for building a page in an application contains specific information about the fields on that page: whether each field is optional or required, whether it can hold numbers only or text, its max size, etc. I also believe that for displayed fields, the story should say where the field is sourced from or how it is calculated. This information should be in the user story before we start coding - otherwise, how can anyone test it? I'm not asking the devs to do the refining - I'm happy to do it myself or to work with them.
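For what it's worth, the kind of field-level detail described above maps directly onto testable rules. As a sketch only (the "quantity" field and its limits here are hypothetical, not from any real story), acceptance criteria like "required, digits only, max 5 characters" can each become one check:

```python
# Hypothetical acceptance criteria for a "quantity" field, as they might
# appear in a refined story: required, digits only, max length 5.
def validate_quantity(value: str) -> list[str]:
    """Return a list of validation errors (an empty list means valid)."""
    errors = []
    if not value:
        errors.append("quantity is required")
    elif not value.isdigit():
        errors.append("quantity must contain digits only")
    elif len(value) > 5:
        errors.append("quantity must be at most 5 digits")
    return errors

# Each acceptance criterion becomes one test case:
assert validate_quantity("123") == []
assert validate_quantity("") == ["quantity is required"]
assert validate_quantity("12a") == ["quantity must contain digits only"]
assert validate_quantity("123456") == ["quantity must be at most 5 digits"]
```

The point is only that a story refined to this level of detail gives a tester (or a unit test) something concrete to verify; without it, "done" is a matter of opinion.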
I get pushback that I'm just trying to make everyone do waterfall and that I'm wasting time. I don't think I am. Am I crazy? Am I wrong? I've been looking for literature about refining user stories to see if anyone addresses this, but all I can find is advice about running refinement meetings, when to refine (two days prior), putting stories in the correct order, and so on. Can anyone point me to references or authoritative examples of fully refined, ready user stories?
Or at least reassure me that I'm on the right track?
Experimentation with users is good, even if many ideas are discarded - it is better to validate assumptions early than late. Remember that a user story is a placeholder for a conversation about a requirement; the value lies in the conversation itself.
Do you have at least one Done increment of usable quality every Sprint? If not, the Developers are spending their time refining and not really accomplishing much else.
How much risk are you willing to tolerate?
Spending more time on up-front specification and design can reduce risk, to an extent: more detail helps you consider test coverage, edge cases, and user expectations. However, that same up-front design lengthens the time from a stakeholder making a request to the team delivering and receiving feedback for the next iteration.
I agree with @Ian and @Thomas. But I'd like to add to the risk idea that @Thomas had.
There's another side of the risk equation: doing too much pre-work increases the risk that what you build is delivered too late. That was one of the problems waterfall created. In today's economy, things change quickly. If a user request comes in on February 1st, and discovery doesn't start until February 15th and doesn't finish until February 28th, there is a risk that the original requirement has already changed. Think about how a new law is passed and the guidelines that clarify it then keep evolving.
The trick is to balance the effort so you deliver what is needed when it is needed. That's where @Ian's experimentation comes in. Keeping stakeholders involved throughout the process is important and ensures the delivery stays valuable.