
Measure what matters - beware of the Sandwich Effect!

December 27, 2021

Have you heard the quote attributed to Deming: “Without data, you’re just another person with an opinion”? If you have heard it and taken it seriously, you have probably already started your journey with Evidence Based Management. Measuring value to enable improvement and agility sounds like a good plan, but as usual it is easier said than done.

Measurement challenges

Measuring itself can be a tricky process. Let’s narrow our considerations to one aspect: satisfaction measurement. Many companies ask for feedback through satisfaction surveys to understand how customers perceive the quality of the services or products they offer, e.g. shops, banks, private medical care. Here are some of the challenges you can face when running such a survey:

  • You can drown in lots of data and miss relevant information that could help you improve.
  • Too little data may make it difficult to differentiate between noise in your data and real information.
  • What is even worse, too little data can give you a false, cosy feeling that you have a good enough understanding of reality while your sample is too small to be taken seriously.
  • Appearances might be deceptive - even if gathering data is practically cost-free for your product, you can face some non-obvious costs. If you require users to submit feedback that is never acted upon, they may get frustrated and reconsider using your product.
  • On top of that, let’s add some other human-type problems:
    • Do you ask your customer or user the right questions?
    • Do you and your users understand multiple-choice answers in the same way?
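To make the too-little-data point concrete, here is a toy simulation (not from the original survey; the 1-5 scale is the only thing taken from the post, everything else is an illustrative assumption). It shows how the average score of a small group of respondents jitters far more than that of a large one:

```python
import random

random.seed(42)

def average_score(n_respondents, true_mean=3.5):
    """Draw n scores on a 1-5 scale around a fixed 'true' satisfaction."""
    scores = [min(5, max(1, round(random.gauss(true_mean, 1.0))))
              for _ in range(n_respondents)]
    return sum(scores) / len(scores)

# With few respondents the measured average swings widely;
# with many, it settles near the true value.
for n in (5, 50, 500):
    samples = [average_score(n) for _ in range(1000)]
    spread = max(samples) - min(samples)
    print(f"n={n:3d}: averages ranged over {spread:.2f} points")
```

With five respondents the measured average can easily drift a full point away from the “true” satisfaction - exactly the false cosy feeling described above.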

And while we’re on the subject of differences in human perception, let me share a story with you. If you are hungry, grab something to eat before you continue, because we are going to talk about… sandwiches.

Photo by Monika Grabkowska on Unsplash

Satisfaction or feedback survey

It all started with a simple need - to measure participants’ satisfaction with a meeting and use that measurement to assess whether we are going in the right direction. One example worth mentioning here is the Sprint Review Feedback Survey.

Fist to Five seemed good enough to quickly gather data. We used the 1-5 scale to have an odd number of possible answers (assuming 5 means “I love it” and 1 means “It was a terrible meeting, I wish I was not there”). After several meetings we started to compare the results, trying to understand what we could learn from them. While we were able to understand the differences between answers 1, 3 and 5, things were less clear for answers next to each other on the scale. We asked ourselves: what is the difference between 4 and 5? Do we all understand the survey points in the same way? Is there a chance that Samantha’s 4 equals Mary’s 5?
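The Samantha-versus-Mary question can be sketched as two people applying different personal cut-offs to the same internal impression. To be clear, this is a hypothetical model, not something we measured; the threshold values are invented purely for illustration:

```python
def rate(impression, thresholds):
    """Map an internal impression (0-100) to a 1-5 answer.

    thresholds holds each person's cut-offs between adjacent answers.
    """
    score = 1
    for cutoff in thresholds:
        if impression >= cutoff:
            score += 1
    return score

# Same meeting, same internal impression -- different personal calibrations.
samantha = [20, 40, 60, 90]   # reserves a 5 for near-perfect meetings
mary     = [20, 40, 60, 75]   # hands out a 5 more readily

impression = 80
print(rate(impression, samantha))  # 4
print(rate(impression, mary))      # 5
```

Under this model, Samantha’s 4 and Mary’s 5 describe exactly the same experience - the scale hasn’t been calibrated between them.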

Brilliant explanation!

And then Julia, who is an expert in the fields of data and cooking, came up with a relatively simple explanation, which she called “the Sandwich Effect”:

When a 1-5 satisfaction survey comes with no clear explanation of the scale, the difference between a 4 and a 5 in the answers might be as follows:

  • 4 - it was a very good (***) e.g. meeting, training, presentation,
  • 5 - it was a very good (***)  AND I HAD A DELICIOUS SANDWICH for breakfast.

In other words, without clarification, someone’s answer might to some extent depend on external factors unrelated to the meeting itself, such as general well-being, current satisfaction with private life, or random events that happened before the meeting and affected their mood.

Why is it happening?

We found two possible explanations for that effect:

  • Lack of clarity - if you do not give a clear explanation of the scale, you leave it up to someone’s interpretation, which may differ significantly from yours.
  • Personal bias - we all have biases, and whether we like it or not, they affect the way we perceive the world. What is more, those biases can push assessments in both favourable and unfavourable directions.

What did we change?

We decided to introduce some changes:

  1. Answer calibration - we described clearly, in words, what each answer means for us, so as not to leave much room for personal interpretation.
  2. Shorter scale - as we realized that we do not need to differentiate several shades of satisfaction, we limited the number of possible answers to 3: (1) To improve - below expectations, (2) Our standard - meets expectations, (3) Like - above expectations!
  3. Optional open questions - we always nudge survey participants to provide written answers through open questions: fields to share feedback, suggestions and ideas. Words carry more than numbers alone!
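As a minimal sketch, tallying such a calibrated three-answer survey could look like this (the answer labels come from our scale above; the data structures and function names are an assumption for illustration, not our actual tooling):

```python
from collections import Counter

# Calibrated answers, each with an explicit written meaning.
LABELS = {
    1: "To improve - below expectations",
    2: "Our standard - meets expectations",
    3: "Like - above expectations",
}

def summarize(responses):
    """responses: list of (score, optional free-text comment) tuples."""
    tally = Counter(score for score, _ in responses)
    comments = [text for _, text in responses if text]
    for score in sorted(LABELS):
        print(f"{LABELS[score]}: {tally[score]}")
    return tally, comments

tally, comments = summarize([
    (2, ""),
    (3, "Loved the live demo!"),
    (2, "Agenda was unclear at the start."),
])
```

Keeping the free-text comments alongside the tally matters: as the takeaways below argue, the numbers only signal that something happened, while the words say what.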

Takeaways

Words, not only numbers

We’ve learned that it’s great to use numbers in your measurement of satisfaction but they might not be enough, especially when you ask about someone’s opinion or perception. Using words in a description of the survey may help you make sure that you and your respondents understand the answers in the same way.

Simplicity is the key

More possible answers to select from does not necessarily mean more information. We realized that 3 is just fine for us.

Not sure what is right for you? Improve by experimenting and ask yourself:

  • What information is useful for me?
  • What kind of value do I get thanks to using my scale?
  • What is not relevant to me? Fewer choices can also mean easier choices for your survey respondents.

Powerful open questions 

Do not forget about open questions - one crisp feedback sentence might give you the tailwind that pushes you to make significant changes.

Constantly seek feedback

Ask for feedback regularly to create a habit and to emphasize that feedback is normal and expected. Throughout 2021 we sent feedback surveys to meeting participants after every single Sprint Review. Did every feedback session have a big impact? Certainly not. However, frequent small steps allowed us to climb quite a high summit, and the climbing adventure is not over.


If you want to learn more about measurement, explore the idea of Evidence Based Management. Valuable resources can be found on, e.g., the Scrum.org pages.

Writing this post would not have been possible without my wonderful colleague Julia Karpińska, who coined the name “the Sandwich Effect”. We met while working at Nomagic (if you are interested, see the Nomagic LinkedIn profile).



Comments (6)


Santh
01:54 pm December 27, 2021

Wonderful insights, thank you very much for sharing. I loved the takeaways, especially "Simplicity" & "Open questions" really make the difference in terms of getting your users to respond with real and insightful feedback.
I would also add one more thing to the list: respond to your users’ feedback with a thank-you note first, and share the actions taken to incorporate their feedback - especially if it was critical (read: negative) feedback.


Rico Trevisan
02:35 pm December 27, 2021

Now I'm curious. Did the feedback for the Sprint Review improve? Are you getting better feedback? Are you able to come up with actions?

Or was this whole story fictitious and I missed the point?


Joanna Płaskonka
03:42 pm January 4, 2022

Thank you! I fully support the thank-you note idea :) Appreciating any kind of feedback is a very good pattern.


Joanna Płaskonka
05:03 pm January 4, 2022

I love questions and I am happy that my blog post sparked your curiosity :)

I will start by answering the last question - the story is real. We ask for feedback regularly. We also use this idea for other meetings and it works for us. Perhaps it will be useful for others to give it a try.

And when it comes to feedback and actions - the goal of our feedback surveys is to have better Sprint Reviews. Closed-ended questions give us only some basic information - for example, if there are a lot of "1"s (below our standard) in the answers or a lot of "3"s (wow! above expectations), we have a signal that something was particularly bad or good. This is just a starting point for the discussion, but numbers without specific things to improve or things worth keeping are usually not enough.

Our experience showed us that the most valuable things to consider appear in the open questions - sometimes we get very few answers, but from time to time we get a lot of feedback there. When you get feedback about one particular aspect from several people, you know where to focus and look for improvements. It is usually much simpler to come up with specific actions based on open questions. For example, when someone says "I was not able to understand that part because of <reasons>", you know that you have to pay more attention to it in the next Sprint Review. It might also be valuable, after receiving such feedback, to send more information, e.g. via e-mail, to Sprint Review participants to help them understand the topic mentioned in the feedback. You show them: "I hear you, I am trying to learn and improve based on your feedback".

I hope this answers your questions.


Bart
07:34 am January 26, 2022

Maybe it's just me, but I feel that a part about the attendees is missing.
You can ask the best-formulated questions in the world, but if the audience attending the review is not the 'target' audience, you will get the 'wrong' answers to those beautiful questions (speaking from experience).
For example: when you deliver a new feature that is not used by the biggest part of the audience, they will probably not be very happy - and thus provide more noise than usable feedback.

I hope I'm making sense


Joanna Płaskonka
04:29 pm January 30, 2022

Thank you for your voice! I agree up to a point with your comment about the right audience. If you get a low score from your survey participants, it's good to understand why they were not happy with the meeting. When you realize that they were not the right audience (e.g. they are not going to use the presented features), maybe it's high time to ask why they were invited to this particular meeting in the first place? Perhaps sharing a brief meeting agenda beforehand and referencing the "two feet rule" is worth experimenting with.

If I were you, I would also ask myself another question: what do I expect from the survey I am going to conduct? Personally, I am happier with ideas for improvements than with consistently perfect scores. Continuous improvement never stops :)