Have you heard the quote by Deming, “Without data, you’re just another person with an opinion”? If you have heard it and taken it seriously, you have probably already started your journey with Evidence Based Management. Measuring value to enable improvement and agility sounds like a good plan, but as usual it is easier said than done.
Measuring itself might be a very tricky process. Let’s narrow our considerations to the aspect of satisfaction measurement. Many companies ask for feedback through satisfaction surveys to understand how their customers perceive the quality of the services or products they offer, e.g. shops, banks, or private medical care providers. Here are some of the challenges one can face when running such a survey:
- You can drown in lots of data and miss relevant information that could help you improve.
- Too little data may make it difficult to differentiate what is a noise in your data and what is the real information.
- What is even worse, too little data can give you a false, cosy feeling that you understand reality well enough, while your sample is too small to be taken seriously.
- Appearances might be deceptive - even if gathering data is practically cost-free for your product, you can face some non-obvious costs related to it. If you ask users to submit feedback that is never acted upon, they might get frustrated and reconsider using your product.
- On top of that, let’s add some other human-related problems:
- Do you ask your customer or user the right questions?
- Do you and your user understand multiple choice answers in the same way?
And while we’re on the subject of differences in human perception, let me share a story with you. If you are hungry, grab something to eat before you continue, because we are going to talk about… sandwiches.
Satisfaction or feedback survey
It all started with a simple need - to measure participants’ satisfaction with a meeting and use that measurement to assess whether we were going in the right direction. One example worth mentioning here is the Sprint Review Feedback Survey.
Fist to five seemed good enough to gather data quickly. We used a 1-5 scale to have an odd number of possible answers (assuming 5 means “I love it” and 1 means “It was a terrible meeting, I wish I was not there”). After several meetings we started to compare the results, trying to understand what we could learn from them. While we were able to understand the differences between answers 1, 3 and 5, the situation was not so easy when we looked at answers next to each other on the scale. We asked ourselves - what is the difference between 4 and 5? Do we all understand the survey points in the same way? Is there a chance that Samantha’s 4 equals Mary’s 5?
And then Julia, who is an expert in the fields of data and cooking, came up with a relatively simple explanation which she called “the Sandwich Effect”:
When there is no clear explanation of the scale in a satisfaction survey with a 1-5 scale, the difference between 4 and 5 in the answers might be as follows:
- 4 - it was a very good (***) e.g. meeting, training, presentation,
- 5 - it was a very good (***) AND I HAD A DELICIOUS SANDWICH for breakfast.
In other words, without clarification someone’s answer might to some extent depend on external factors not related to the meeting itself, such as general well-being, current satisfaction with private life, or random situations that happened before the meeting and affected someone’s mood.
Why is it happening?
We found two possible explanations for that effect:
- Lack of clarity - if you do not give a clear explanation of the scale, you leave it up to someone’s interpretation, which may be very different from yours.
- Personal bias - we all have biases and whether we like it or not they affect the way we perceive the world. What is more, those biases could encompass both favorable and unfavorable assessments.
What did we change?
We decided to introduce some changes:
- Answers calibration - we decided to describe clearly, in words, what each answer means for us, so as not to leave much room for personal interpretation.
- Shorter scale - as we realized that we do not need to differentiate several shades of satisfaction, we limited the number of possible answers to 3: (1) To improve - below expectations, (2) Our standard - meets expectations, (3) Like - above expectations!
- Optional open questions - we always nudge the survey participants to provide written answers through open questions - free-text fields to share feedback, suggestions and ideas. Words carry more than numbers alone!
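If you collect responses in this calibrated three-point format, summarising them is straightforward. Here is a minimal sketch in Python; the `summarize` helper, the response format, and the sample data are illustrative assumptions, not part of any actual survey tooling:

```python
from collections import Counter

# Hypothetical labels for the calibrated 3-point scale described above.
SCALE = {
    1: "To improve - below expectations",
    2: "Our standard - meets expectations",
    3: "Like - above expectations",
}

def summarize(responses):
    """Tally numeric answers and collect optional free-text comments.

    `responses` is a list of (score, comment) tuples; comment may be None
    when the respondent skipped the open question.
    """
    counts = Counter(score for score, _ in responses)
    comments = [c for _, c in responses if c]
    tally = {SCALE[s]: counts.get(s, 0) for s in SCALE}
    return tally, comments

# Example with made-up responses from one Sprint Review.
tally, comments = summarize([
    (3, "Great demo!"),
    (2, None),
    (3, None),
    (1, "Too long, please timebox."),
])
```

The point of keeping the labels next to the numbers in code is the same as in the survey itself: the tally is reported in the calibrated words, not in bare digits.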
Words, not only numbers
We’ve learned that it’s great to use numbers in your measurement of satisfaction but they might not be enough, especially when you ask about someone’s opinion or perception. Using words in a description of the survey may help you make sure that you and your respondents understand the answers in the same way.
Simplicity is the key
Having more possible answers to select from does not necessarily mean getting more information. We realized that 3 is just fine for us.
Not sure what is right for you? Improve by experimenting and ask yourself:
- What information is useful for me?
- What kind of value do I get thanks to using my scale?
- What is not relevant to me? Less choice can also mean easier choice for your survey respondents.
Powerful open questions
Do not forget about open questions, because one crisp feedback sentence might give you a tailwind that pushes you to make significant changes.
Constantly seek feedback
Ask for feedback regularly to create a habit and emphasize that feedback is a normal thing and it is expected. Throughout 2021 we sent feedback surveys after every single Sprint Review to meeting participants. Did every feedback session have a big impact? For sure - no. However, frequent small steps allowed us to climb quite a high summit and the climbing adventure is not over.
If you want to learn more about measurements, explore the idea of Evidence Based Management. There are some valuable resources to be found on, e.g., the Scrum.org pages.
Writing this post would not have been possible without my wonderful colleague Julia Karpińska who created the name “sandwich effect”. We met while working at Nomagic (if you are interested, see Nomagic LinkedIn profile).