What High-Profile AI Blunders Can Teach Us

Customer service automation can deliver the results customers want, unless your AI goes rogue and attracts embarrassing media coverage. Here are some lessons we can learn from the bots that blundered.

It’s probably a good thing that AIs can’t feel embarrassment, or some of them might never show up to work again.

When ChatGPT launched, it kick-started the race to build the technology into apps, websites and comms channels before anyone else.

This means a lot of new AI builds have gone live at the same time, which is fascinating because patterns are already emerging.

Bad AI Is Visible From Space, Even If It Used to Work

Parcel delivery company DPD made the headlines with, if you can believe it, a swearing, haiku-writing chatbot.

Now – an important point of distinction. It didn’t just flip out and start ‘doing poetry’ at people. This chatbot was led astray by a disgruntled end-user.

Ashley Beauchamp, a classical musician from London, needed to locate a missing parcel. And when the bot couldn’t help, he decided to get his own back with some…creative prompts.

The whole exchange ended up on X, including the bot’s haiku, which went:

“DPD is a useless
Chatbot that can’t help you.
Don’t bother calling them.”

Most of the UK’s major newspapers ran with the story, and most of them barely mentioned something important.

That the chatbot had worked well for years, before an update broke it.

A Little Automation Goes a Long Way

DPD had “successfully operated an AI element within the chat for a number of years” according to their own statement.

Before its unfortunate moment in the spotlight, DPD’s chatbot had automated 55% of customer queries for at least six years.

There were four or five ‘event types’ the old bot knew to look out for: the kinds of queries you’d expect for a delivery company, like ‘where is my parcel?’

It could respond with maps, images and chat responses. No poetry.

You might think five event types sounds restrictive, but in our experience this kind of focus is what gets the outcomes your customers want.

Remember, the new chatbot’s misadventures only began after it failed to help with a standard enquiry.

This was an effective automation that got less effective when it gained a bunch of new capabilities that it didn’t need.

What we’re seeing is the AI version of ‘jack of all trades, master of none’.

Don’t believe me? Go back and look at that haiku. None of the lines have the right number of syllables. And Ashley Beauchamp didn’t find his parcel.

Give AI the Right Jobs

Have you seen the footage of Digit the warehouse robot falling over?

Digit is 175 cm tall, more or less human-shaped, and can carry around 15 kg. The idea is that it can work in spaces designed for humans, and help out with shelf stacking and conveyor belt loading.

Which didn’t stop it spectacularly falling over at a supply chain exhibition, in front of a crowd of spectators.

Roboticists are still teaching robots how to walk properly, 27 years after IBM’s Deep Blue beat world chess champion Garry Kasparov.

This is an example of Moravec’s paradox, which states that “it is easy to train computers to do things that humans find hard, like mathematics and logic, but it is hard to train them to do things humans find easy, like walking and image recognition”.

Contact centres don’t use physical robots, but it’s a good reminder that AIs and humans have different skills.

To get the best results, you have to let each of them play to their strengths.

Not too long ago I was involved in an automation project for Ratioparts, a German retailer.

They had a singular focus: automating data entry so agents could move the conversation along more easily.

That’s what computers are best at: repetitive, predictable tasks. The AI isn’t going to handle anything unexpected, but it’s also not going to get bored and enter a reference number wrong.
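As a purely illustrative sketch (the ORD-style reference format and the CRM call below are invented, not Ratioparts’ actual system), the predictable part of that job looks something like this:

```python
# Illustrative sketch of the repetitive data-entry step an AI handles well.
# The ORD-###### reference format and log_to_crm() helper are invented
# examples, not Ratioparts' real integration.

import re
from typing import Optional

REFERENCE_PATTERN = re.compile(r"\b(ORD-\d{6})\b")  # assumed order-number format

def extract_order_reference(message: str) -> Optional[str]:
    """Pull a well-formed order reference out of a customer message."""
    match = REFERENCE_PATTERN.search(message)
    return match.group(1) if match else None

def log_to_crm(ticket_id: str, reference: str) -> None:
    """Placeholder for writing the captured reference into the CRM record."""
    print(f"ticket {ticket_id}: order reference {reference} recorded")

message = "Hi, I'm still waiting on order ORD-004217, can you check it?"
reference = extract_order_reference(message)
if reference:
    log_to_crm("T-1001", reference)  # predictable case: handled automatically
else:
    print("no reference found - leave this one with the human agent")
```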

In this case, Ratioparts reallocated the time they were saving to outbound sales work that’s on track to generate €25,000 daily.

Make Sure the AI Knows What It’s Talking About

A lot of customer service AIs are wildly overconfident. You’ve probably encountered this if you’ve used ChatGPT.

The most shocking example is the Air Canada chatbot, which wrongly convinced a passenger that they could apply for a bereavement discount after purchasing a full-price plane ticket.

Air Canada initially refused to honour this made-up refund policy, until British Columbia’s Civil Resolution Tribunal ordered the airline to pay compensation to the passenger.

The lesson here is obvious. Whatever AI you end up deploying has to know what it’s talking about.

In a customer service setting like Air Canada’s, where the customer is looking for information, the only good response is an *exact response*. There’s no room for creativity here.

So three things to bear in mind if you’re going to automate that kind of information request:

  • Any generative component has to have a tight set of constraints.
  • The AI should transfer the issue to a human agent if it can’t answer with a required level of certainty (there’s a rough sketch of this kind of gating after this list).
  • An AI in a locked box is clueless, so integrating it with other systems gives it enough information for useful responses. We do this all the time at babelforce.
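To make the second point concrete, here’s a minimal sketch of confidence-gated responses with a human handover. Everything in it (the BotAnswer fields, the respond() helper, the 0.85 threshold) is hypothetical, not babelforce’s actual platform or any specific vendor’s API.

```python
# Minimal sketch of confidence-gated answering with human handover.
# All names and the threshold value are hypothetical, not taken from
# babelforce's platform or any real chatbot vendor's API.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # assumed cut-off for "certain enough to answer"

@dataclass
class BotAnswer:
    text: str
    confidence: float  # 0.0-1.0 score from the intent/retrieval model
    source: str        # policy document the answer was grounded in ("" if none)

def respond(answer: BotAnswer) -> dict:
    """Reply only when the answer is grounded and confident; otherwise escalate."""
    if answer.source and answer.confidence >= CONFIDENCE_THRESHOLD:
        return {"action": "reply", "text": answer.text, "cite": answer.source}
    # Ungrounded or low-confidence answers never reach the customer.
    return {"action": "handover", "reason": "low_confidence"}

# An invented, ungrounded answer gets escalated instead of sent:
print(respond(BotAnswer("You can claim a bereavement refund later.", 0.41, "")))
# -> {'action': 'handover', 'reason': 'low_confidence'}
```

The key design choice is that the fallback path is the default: the bot has to earn the right to answer, rather than the customer having to catch its mistakes.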

What Can We Learn From IKEA’s Automation Strategy?

One of the best AI stories I’ve seen in the past year came from IKEA.

Furniture giant IKEA has been able to train 8,500 contact centre workers as remote interior design advisers, because their customer service AI is handling 47% of routine queries.

This is a big deal, because IKEA is aiming to generate 10% of their revenue from the “remote interior design channel” by 2028.

It’s not just ‘how can we make things efficient with AI’ but also ‘how can AI create opportunities for revenue’.

They haven’t tried to automate everything, and they’ve clearly thought about which jobs to give their AI, and which jobs to give their staff.

  • Repetitive tasks for the AI. Run-of-the-mill stuff that it can do reliably without getting bored (once it’s trained and given the right information).
  • Creative work for the call centre agents, informed not by algorithms but by life experience.

And most impressively of all: they made the news with a positive AI story.

Fantastiskt jobb IKEA!

This article was written by Pierce Buckley, CEO & Co-Founder at babelforce.

For more information about babelforce, visit the babelforce website.

About babelforce

babelforce is the composable customer experience platform uniting agents and automation. Our platform gives you the power to create the customer experiences you’ve always wanted, with tools anyone can use.

Call Centre Helper is not responsible for the content of these guest blog posts. The opinions expressed in this article are those of the author, and do not necessarily reflect those of Call Centre Helper.

