Whilst many Customer Experience professionals, by virtue of their jobs, spend a lot of time looking into the future, we are going to take a step back in time to when businesses first decided single metrics were an important way to define CX.
Around that time, someone in the research industry thought it would be a great idea to ask people “how satisfied” they were with the experiences they were having. This question was built into research questionnaires, surveys and in-depth interview forms. A wide variety of formats for the Satisfaction question were cooked up, including Yes/No answer choices, Likert-style scaled responses, worded scales and myriad other variations, where respondents could rate themselves anywhere from “Highly unsatisfied” all the way up to “Highly satisfied”.
However, with numerous versions of the Satisfaction question in circulation, no common rule set for comparing or benchmarking the results, and a great deal of time having passed since, this approach to researching customer experience now amounts to an unintuitive question built on an outdated premise.
Let me explain. Dictionary definitions of “Satisfaction” give “happy” or “pleased” as alternatives, but we don’t think in terms of being “Highly happy” or “Very un-pleased”. We’re just happy or sad, pleased or displeased. So being “Highly satisfied” or “Highly unsatisfied” doesn’t make sense grammatically or conceptually. Respondents and customers answering a Satisfaction research question are either satisfied or unsatisfied. Adding scales, granularity or complexity merely adds grey to black and white, not any worthwhile or useable data intelligence.
Which is why, ultimately, businesses leading CX with a Satisfaction metric are measuring almost nothing worthwhile. Their scores always flatline, and will stay that way.
The reason for this comes down to what “satisfaction” really means. Being “satisfied” is having your groceries delivered within the time slot you booked. Or your cable box working properly so you can watch streaming TV. Or your car starting in the morning so you can get to work. Or getting a coffee served up in less than 5 minutes. Whoop-tee-doo.
None of that remotely represents delight, joy or any other positive emotion. Or disgust, disappointment or any other negative one. Satisfaction measurement reveals nothing that is genuinely interesting, controversial or addressable.
With this comes a secondary problem: boredom. In some organisations, researchers have explained to me that “We measure C-SAT every month and always get around 80-90%”, typically with some sense of duty or pride. But how can you be proud of something that never changes and reveals no fresh insight? Or, having been measured for so long, offers no new opportunities to change and adapt? Colleagues lose interest, seeing the same recurring results every time, and, worse, if they are remunerated on the metric, come to expect to always be paid for it. That can breed a culture of number-fudging in order to keep hitting a consistent target.
If you are not already convinced, here’s a third angle. Organisations focussed on measuring Satisfaction tend to obsess equally over response rates; that is, the C-SAT research must be accompanied by a high response rate for the numbers to be viable. Whilst pure research practice would tell you this is absolutely correct, as a CX professional the last thing I want to be obsessing about is numbers. I am interested in stories. Even a small handful of C-SAT survey responses, with a decent “Why did you give that score?” question built in, gives me enough customer sentiment to figure out what to explore and address. I don’t need a minimum viable sample before I switch my ears on for customer listening.
So, whilst my ultimate advice is to ditch the ageing Satisfaction approach to measuring CX, I’m not going to leave you hanging without better alternatives. You’ll get much more up-to-date and useable intelligence from the following techniques:
- Ease – which captures how easy an interaction was for the end user or customer. Outcomes using this measure can guide businesses as to which interactions and touchpoints during experience journeys are working well, and what customers really like
- Effort – which captures how much effort an end user or customer had to expend to achieve their intended outcome. Outcomes from this measure can identify which elements of experiences are causing issues and too much friction, and where effort, time, resources and money must be invested to improve experiences
- Net Promoter Score (NPS) – with the “Why?” question as the insights driver, not the number. NPS was originally devised to overcome the issues caused by the diverse variety of formats used to measure Satisfaction. Whilst many organisations don’t like NPS at all, even in the worst cases the respondent verbatim that accompanies an NPS survey is typically detail-rich, offering opportunities to close loops and learn more about customer preferences (see the worked sketch after this list)
- Sentiment – which uses Text Analytics software to mine customer verbatim for topics and themes, grouping these together using specific taxonomy models. The sentiment model can be custom-tuned to the unique language and customer base of a business, driving insights that are more qualitative than quantitative. That is a distinct advantage when attempting to drive actions from CX rather than spending effort tracking and chasing numbers (a toy illustration follows after the NPS sketch below)
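To make the NPS mechanics concrete, here is a minimal sketch in Python of how the score is conventionally computed, keeping the “Why?” verbatims attached so the stories travel with the number. The sample responses and field names are hypothetical; the banding (9-10 promoters, 7-8 passives, 0-6 detractors) is the standard NPS convention.

```python
# Minimal sketch: computing NPS from 0-10 survey responses.
# The sample data and field names are hypothetical.

responses = [
    {"score": 10, "why": "Driver called ahead and carried the bags in"},
    {"score": 9,  "why": "Everything arrived in the slot I booked"},
    {"score": 7,  "why": "Fine, but the app kept logging me out"},
    {"score": 3,  "why": "Waited 40 minutes on hold over a billing error"},
]

def classify(score: int) -> str:
    """Standard NPS banding: 9-10 promoter, 7-8 passive, 0-6 detractor."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

promoters = sum(classify(r["score"]) == "promoter" for r in responses)
detractors = sum(classify(r["score"]) == "detractor" for r in responses)

# NPS = % promoters minus % detractors, quoted as a whole number (-100 to +100)
nps = round(100 * (promoters - detractors) / len(responses))
print(f"NPS: {nps}")  # 25 for the sample above

# The score is one line of arithmetic; the actionable part is the verbatim.
for r in responses:
    if classify(r["score"]) == "detractor":
        print("Close the loop on:", r["why"])
```

The point of the sketch is the final loop, not the arithmetic: the number alone flatlines just like C-SAT does, whilst the verbatim tells you what to fix.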
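Similarly for Sentiment, here is a deliberately toy illustration of taxonomy-based theme tagging. Commercial Text Analytics tools use trained language models rather than keyword lists, so treat this purely as the shape of the technique; the taxonomy and verbatims are hypothetical stand-ins.

```python
# Toy sketch of taxonomy-based theme tagging over customer verbatim.
# Real Text Analytics software uses trained models, not keyword lists;
# the taxonomy below is hypothetical and would be tuned to a business's
# own language and customer set.

from collections import Counter

TAXONOMY = {
    "delivery": ["slot", "driver", "late", "arrived"],
    "billing":  ["invoice", "charge", "refund", "billing"],
    "app":      ["app", "login", "crash", "logging"],
}

verbatims = [
    "Driver called ahead and carried the bags in",
    "Waited 40 minutes on hold over a billing error",
    "Fine, but the app kept logging me out",
]

def tag_themes(text: str) -> set[str]:
    """Return every taxonomy theme whose keywords appear in the text."""
    lowered = text.lower()
    return {theme for theme, words in TAXONOMY.items()
            if any(word in lowered for word in words)}

# Count how often each theme surfaces across the whole verbatim set
theme_counts = Counter(t for v in verbatims for t in tag_themes(v))
print(theme_counts.most_common())
```

In practice the value comes from pairing each theme with a sentiment polarity and trending the themes over time, which is exactly the qualitative-over-quantitative intelligence described in the bullet above.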
Overall, Satisfaction is an outdated concept as a sole measure of Customer Experience in today’s CX Management marketplace. Time to say R.I.P…