It’s part of human nature to recall those experiences – good and bad – that provoked an emotional response. As a CX consultant, a typical ice-breaker I use when opening a workshop with a group of participants I haven’t worked with before is to ask them to share the best and worst experience they have had dealing with a contact center. This brings great energy to the room, as the stories generate enthusiasm (for the good) and incredulity (for the bad). It’s how an experience makes us feel that we actually remember.
Fast forward to 2021, and we are now five years into our industry adopting self-service automation using AI. Chatbots are now common practice, as CX practitioners deploy conversational messaging and automated speech-based interfaces in an attempt to take some of the functional and routine inquiries away from our contact center agents. The business case for automation is clear – reduce the cost to serve and provide access to services 24×7 in the customer’s channel of choice.
It should be something that every operation adopts, surely?
Remember the hype?
There was a lot of industry hype back in 2016. I remember sitting in the audience at an industry event where Gartner projected that, by now, 1M contact center jobs would have been impacted.
The latest research data suggests that the take-up has actually been on a much lower trajectory. The 2020 NICE inContact Customer Experience (CX) Transformation Benchmark – created from an annual research survey of 1,000 global businesses – shows that today, 61% of interactions are (still) agent-assisted vs. 39% self-service. And importantly, this proportion of self-service hasn’t increased over the previous year’s 2019 survey. Part of the reason for this is that 90% of businesses believe that chatbots and virtual assistants must get smarter before consumers are willing to use them regularly.
Creating positive chatbot experiences
So let’s start by asking ourselves: how does a chatbot experience make us feel? I wouldn’t mind betting that more negative stories emerge than positive ones – it didn’t understand me, it didn’t get me to the right place for help, it was a waste of time…
But that is actually doing many CX practitioners a disservice – particularly those who design the chatbot implementation around the most appropriate steps in the customer journey. Think about the classic two-by-two matrix of value to the customer versus cost to serve for the organization, and look for those transactions that are of high value to the customer and where automation can reduce the cost to serve. Think processing orders and returns, resolving billing inquiries through refunds, etc. Some good examples are:
- Domino’s ‘Easy Order’ bot on Facebook Messenger – a bot called ‘Dom’ designed to “help superfans get their #1 fix of cheesy food heaven simply by messaging the word “PIZZA” to ‘Dom’ via Messenger” (please note that these are their words rather than mine!)
- Amazon’s chatbot for automated returns handling, where you can get near-instant refunds or schedule returns
- Vodafone’s in-app digital assistant – if you have a question, you can ask TOBi about your bill and phone usage from inside the MyVodafone app
Usage is increasing for these solutions, with as much of the focus on branding and building user engagement as on the functional aspects of resolving the enquiry. The point is that the more memorable the experience (in a good way), the higher the probability that consumers will choose that automated channel again in the future. A real win-win!
Use analytics to learn
The other design aspect to bear in mind is the role that AI and analytics can play in chatbot deployment. Compared to developing a ‘static’ form-based user interface to process these enquiries, the beauty of a conversational interface is that it is easy to test and learn:
- Start by identifying the highest-volume contact reasons where a known response process exists
- Instead of ‘hard coding’ that process into website pages, have it handled by a chatbot
- Use analytics – on the customer journey and the resulting containment rate (a simple calculation is sketched below) – to enable continuous improvement
- Exploit AI / Machine Learning technology so that new use cases can be adopted for more ‘varied’ processes, extending the reach of chatbot automation for second- and third-time users
What you now have is a CX design and development process that can build out a roadmap for automation.
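To make the analytics step concrete, here is a minimal sketch – in Python, with purely illustrative field names rather than any particular vendor’s reporting API – of how containment rate might be calculated per contact reason from chatbot session logs: the kind of figure you would track release over release to drive continuous improvement.

```python
# Minimal sketch: containment rate per contact reason from chatbot session logs.
# The session structure and field names are hypothetical, for illustration only.
from collections import defaultdict

def containment_by_reason(sessions):
    """Return the share of sessions resolved without escalation, per contact reason."""
    totals = defaultdict(int)
    contained = defaultdict(int)
    for s in sessions:
        reason = s["contact_reason"]
        totals[reason] += 1
        if not s["escalated_to_agent"]:
            contained[reason] += 1
    return {reason: contained[reason] / totals[reason] for reason in totals}

# Example with made-up data: billing is contained two times out of three,
# returns every time.
sessions = [
    {"contact_reason": "billing", "escalated_to_agent": False},
    {"contact_reason": "billing", "escalated_to_agent": True},
    {"contact_reason": "billing", "escalated_to_agent": False},
    {"contact_reason": "returns", "escalated_to_agent": False},
]
print(containment_by_reason(sessions))
```

Run per release (or per week), a breakdown like this quickly shows which contact reasons the chatbot is ready to own and which still need the process, or the bot’s understanding, to improve.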
Don’t forget to measure effectiveness
And there’s one final consideration – which is to track the customer effectiveness of the processes handled by the chatbots. Containment rate is the right metric to confirm that the customer journey step has been completed – i.e., the chatbot handled the enquiry without the need for escalation – but has that actually provided contact resolution?
Returning to the NICE CX Transformation Benchmark survey: whilst businesses estimate satisfaction to be up significantly in 2020, first contact resolution is down across all channels. And tucked away in the detail of the report is the most telling statistic of all – First Contact Resolution (FCR) is only 22% for the automated artificial intelligence channel (down 3 percentage points from the previous year), significantly underperforming the traditional agent-assisted channels.
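To illustrate how the two metrics can diverge, here is a hedged sketch – again in Python, with a hypothetical contact-history format rather than anything taken from the benchmark itself – of an FCR calculation in which a journey counts as ‘contained’ by the chatbot yet still fails first contact resolution because the customer has to come back on the same issue.

```python
# Hedged sketch of why containment and first contact resolution (FCR) diverge.
# Assumes a hypothetical contact history where a repeat contact on the same
# issue within a time window counts against FCR.
from datetime import datetime, timedelta

def fcr_rate(contacts, window_days=7):
    """Share of issues that did NOT need a repeat contact within the window."""
    contacts = sorted(contacts, key=lambda c: c["timestamp"])
    first_seen = {}   # (customer_id, issue) -> timestamp of first contact
    repeated = set()  # issues that came back within the window
    for c in contacts:
        key = (c["customer_id"], c["issue"])
        if key not in first_seen:
            first_seen[key] = c["timestamp"]
        elif c["timestamp"] - first_seen[key] <= timedelta(days=window_days):
            repeated.add(key)
    return 1 - len(repeated) / len(first_seen)

# Made-up example: the chatbot 'contains' the refund request, but the customer
# phones back days later on the same issue – contained, yet not resolved first time.
contacts = [
    {"customer_id": 1, "issue": "refund", "timestamp": datetime(2021, 3, 1)},
    {"customer_id": 1, "issue": "refund", "timestamp": datetime(2021, 3, 5)},
    {"customer_id": 2, "issue": "billing", "timestamp": datetime(2021, 3, 2)},
]
print(fcr_rate(contacts))  # 0.5 – half of the issues needed a repeat contact
```

It is exactly this gap – contained but not resolved – that the mystery shopping example below falls into.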
Mystery Shopping
Let me give you a real-world example of this – a mystery shopping exercise using the highly rated Amazon chatbot.
A few months back, my wife ordered a mirror from Amazon. It didn’t turn up, and the business seller then emailed to say that they could no longer supply the goods and would provide a refund. She logged onto the Amazon site to check the order status and used the automated chatbot process to escalate the request for a refund. When no monies were received, she went back to the Amazon site, where all customer journey flows led her back to the chatbot (to request a refund), with no easy way to get in contact by any other means.
In frustration, she ended up using a third-party complaints resolution site (Resolver) to find a phone number to call. She rang Amazon and spoke to an agent who had full access to the account, could see the repeated requests for a refund, and could see that the seller was yet to respond. The agent immediately authorised a credit payment, with an email confirmation received instantly, whilst she was still on the phone.
The assisted channel had saved the day from an experience perspective, and whilst the chatbot will have ticked the box for containment, the inability to track the end-to-end experience and then proactively resolve the customer’s issue remained a process gap.
Lessons learned
As a consultant, let me take you back to the ice-breaker exercise that I use in my workshops. After hearing all the stories of good and bad experiences – and getting engagement from the participants in the room – we take the examples provided and probe to understand what made the experience ‘good’ and how those attributes could be applied to the ‘bad’.
Taking the Amazon chatbot experience, what lessons can be learned?
In short, the chatbot correctly captured and logged the refund request, but the payment handling process failed to return the monies within an appropriate timescale.
Even worse, there was no workflow trigger to escalate internally that action needed to be taken. When this was pointed out to the agent, the acceptance of what needed to happen next was instant – the situation was effectively recovered – but I would surmise that the focus on automation had created an internal blind spot as to what was really important to the customer.
The moral of the tale is that the end-to-end customer experience is the real determinant of how a customer feels about using an automated service. When the ‘happy path’ use case works, it’s great, but unless you manage all of the outlying reasons why the process might fail, the positive, effortless customer outcome (as quoted in the business case) will not come to pass.