
Some basic questions

Last post 01:58 am August 5, 2019 by David Wolff
5 replies
11:38 pm August 2, 2019

Hi All,

I need your perspective/suggestions/opinions on the following questions. Thanks for your time.

1. How do you decide whether there should be a single Product Backlog or several when a business unit is launching a new product (not a feature) that spans multiple teams? SAFe/LeSS/Nexus are alien terms to management.

2. Our monthly release consists of two two-week Sprints. Due to intense internal politics there are certain unresolved business and technical dependencies. In most releases the first Sprint is a development Sprint and the next one is for QA. Management determines the success of the release in terms of rollback percentage and L1/L2 tickets. This is a ScrumBut model, but it is serving its purpose. My question is: what value are we missing by following this approach?

3. The dev team is full of rockstar developers (they are real nerds) who only care about the 'cool' stuff (I mean how cool their code is). They have zero respect for process (we are using the Scrum framework). Management is highly dependent on metrics, and without the dev team's support it is very difficult to capture real data and generate those metrics. Management is expecting a miracle from the SM in terms of educating the team. Even an Agile coach was there for 3 months, but nothing changed. How should the SM approach this situation without quitting?

4. How do you enforce/adopt important engineering practices like automated unit testing, peer code review, and reviewing test cases with the business/end users when the team is not sufficiently encouraged to adopt them?

Thanks

Raja


12:16 am August 3, 2019

How do you decide whether there should be a single Product Backlog or several when a business unit is launching a new product (not a feature) that spans multiple teams? SAFe/LeSS/Nexus are alien terms to management.

It's called a Product Backlog for a reason. If you have one product, you should have one Product Backlog, regardless of the number of teams. However, there are practical limits: at a certain point it becomes difficult to coordinate a large number of teams around a single backlog. At that point, you may want to look at developing a portfolio of several related products, or otherwise align your product and technical architecture.

Our monthly release consists of two two-week Sprints. Due to intense internal politics there are certain unresolved business and technical dependencies. In most releases the first Sprint is a development Sprint and the next one is for QA. Management determines the success of the release in terms of rollback percentage and L1/L2 tickets. This is a ScrumBut model, but it is serving its purpose. My question is: what value are we missing by following this approach?

The thing that strikes me the most is the lack of a good, robust feedback loop. One of the purposes of iterations (Sprints) with cross-functional teams that get work to a Done, potentially shippable state is that the work can be inspected and the plan adjusted very rapidly. The Principles behind the Agile Manifesto call for delivery of working software on a timescale "from a couple of weeks to a couple of months, with a preference to the shorter timescale", while Scrum calls for a maximum Sprint length of 4 weeks. Right now, it takes you 4 weeks to have something that gets you feedback, and you don't want feedback from external stakeholders on software that doesn't work, since the feedback will probably be to fix the bugs.

There's a lot of waste in your current process. Your developers work for 2 weeks on a batch of work and then hand it off for testing for another 2 weeks. During those 2 weeks, a few things may be happening. One possibility is that developers are being asked questions or even asked to fix bugs; context switching is waste. Another is that the QA cycle is producing a list of bugs to fix; a growing queue of defects is waste. Neither of these accounts for the fact that development is continuing to change the underlying system. Yet another possibility is that a show-stopping bug found by QA forces the entire QA process to wait until it's resolved before testing can proceed.

Build a cross-functional team. As you build up the knowledge and produce working software on a regular cadence, you can start to tighten that cadence. Right now, it takes 4 weeks for work to get feedback from stakeholders; if there are significant issues, that may be 8 weeks. Wouldn't it be nice to have working software and feedback on it every 4 weeks? How about every 3 weeks, every 2 weeks, or even faster?

The dev team is full of rockstar developers (they are real nerds) who only care about the 'cool' stuff (I mean how cool their code is). They have zero respect for process (we are using the Scrum framework). Management is highly dependent on metrics, and without the dev team's support it is very difficult to capture real data and generate those metrics. Management is expecting a miracle from the SM in terms of educating the team. Even an Agile coach was there for 3 months, but nothing changed. How should the SM approach this situation without quitting?

I'm curious what metrics management is expecting and what they intend to do with them. Outcome-based software development is much more closely aligned with good software development practices. Coupled with rapid feedback from external stakeholders by putting working software in their hands, you can focus on looking at (or even measuring) stakeholder and user-centric values instead of measuring the development team directly.

There's plenty of discussion on various sites and blogs about "outcome-based" software development, "outcome-driven development", and "outcomes over outputs" or "outcomes not outputs" that go into a whole lot more depth on this type of question. But the gist is that you should measure outcomes experienced by your stakeholders, rather than outputs of your development teams.

How do you enforce/adopt important engineering practices like automated unit testing, peer code review, and reviewing test cases with the business/end users when the team is not sufficiently encouraged to adopt them?

The team needs to be shown the value in these and given the opportunity to adopt them.

Automated unit testing on its own is insufficient. The emphasis needs to be on automated testing at all levels, from unit to integration, to give the team confidence that the work they have done is correct and that, as they modify the system with additional features or changes, they do not introduce errors. There are different ways to slice testing, often centered on the risk of failure in certain aspects of the system and ensuring the riskiest cases are covered.
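To give a concrete starting point, here is a minimal sketch of what "testing at more than one level" can look like. Python and pytest are assumed purely for illustration, and the pricing rule (apply_discount) is a hypothetical example, not something from your system. The first two tests check a single rule in isolation (unit level); the last one exercises the rule the way a caller would, as a small step towards integration-level confidence.

# Hypothetical example (Python/pytest assumed for illustration only).
# The apply_discount function stands in for any small business rule.

import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit level: fast, isolated checks of a single rule.
def test_apply_discount_basic():
    assert apply_discount(100.0, 20) == 80.0

def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)

# Integration-style level: use the rule as a caller would,
# e.g. computing an order total from several discounted line items.
def test_order_total_combines_discounted_items():
    items = [(100.0, 20), (50.0, 0)]
    total = sum(apply_discount(price, pct) for price, pct in items)
    assert total == 130.0

Tests like these only pay off once they run automatically on every commit; wiring them into the build is what turns them from a chore into the safety net that lets the team change the system with confidence.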

Peer reviews not only help to find errors, mistakes, or oversights earlier; they are also an opportunity to share knowledge. If the knowledge of part of the system is locked away in one person's head (or maybe written in a few cryptic notes somewhere), other people are going to struggle if they need to interact with that part of the system in the future. Reviews also help to build cross-functionality among the members of the team by getting back-end-centric developers looking at, asking about, and learning from front-end code (and vice versa), or getting developers who mostly write product code looking at the automated test code.

Reviewing acceptance test cases with the end user helps to make sure that the tests are right, so there can be some level of confidence that the system is not only behaving as it was designed, but that the design of the system can actually fulfill the needs of the end users in an appropriate manner.

The business needs to understand the value in these, and other, software engineering practices. So do the members of the development team. And both must encourage and support their use, to the extent that it balances the technical and business needs.


02:53 am August 3, 2019

@Thomas - I appreciate your time and valuable feedback.

 


05:20 am August 3, 2019

Does the organization have a release-quality Definition of Done through which focus and discipline are cultivated, each and every Sprint? If not, why not?


08:10 pm August 3, 2019

Hi Ian,

I was actually waiting for your comments :)

To answer your question: the answer is both 'Yes' and 'No'. The Agile CoE pays serious attention to quality (of both process and product), but the message gets diluted as it travels from the CoE to the Business Unit (BU). The BU's argument is that 'the process should serve the project', not vice versa. Projects get funded by the BU, and the CoE grills the projects when they don't align with its recommended process. As an SM, my pay band doesn't allow me to put up an argument with BU people, and the same goes for the other SMs. These F500 companies were born in the waterfall age. They have to unlearn first in order to adopt Agile. Until then, we'll have to wait.

Thanks for your time.


09:38 am August 4, 2019

Raja. That's correct. Makes perfect sense.

