Last week in our podcast, host Jim Freeze interviewed Gabriel Skantze, a professor at KTH Royal Institute of Technology, about social robots and making AI approachable for humans.
While the discussion focused mainly on robots, we decided to write about how the same principles can apply to AI automation.
It’s no secret that AI can be an unsettling topic for humans. We often hear stories about AI technology knowing creepy details about us, like what we’ve searched on the internet or information that hasn’t been explicitly shared.
Or there are times when we are chatting on a website or on the phone with a company and can't quite tell whether it's a person or a machine speaking with us.
As technology gets more advanced, we predict that these situations will become more common.
There is a fine line between what is helpful and what is creepy, and as technology advances, businesses need practices in place to keep AI on the helpful side of that line.
Otherwise, the productivity gains that AI enables will be drowned out by its negative implications.
We’ve put together three ways that businesses can ensure their conversational AI technology stays approachable and not invasive.
Am I speaking with a… human?
While conversational AI technology should be as helpful as a human, it should never attempt to mislead a consumer into thinking it’s an actual person.
In fact, in a Harris Poll survey, 40% of respondents said they find it more creepy than helpful when an AI-powered customer service agent sounds or interacts like a human but doesn't notify the caller that it's a virtual agent.
We believe that customers should always know who they are talking to, whether it’s a live agent or a conversational AI application.
This makes customers feel more comfortable during the conversation and allows for a better customer experience.
How did you know that?
We know that customers do not like to repeat themselves. It wastes time and energy and can turn a simple customer service call into a frustrating experience.
But recent advances mean that most conversational AI technology can now integrate with CRM systems and ensure that customers don’t have to repeat information that they have already provided.
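To make this concrete, here is a minimal sketch of the idea: before the virtual agent asks any questions, it looks up what the customer has already shared. The CRM store, field names, and greeting logic here are hypothetical illustrations, not a real integration API.

```python
# Hypothetical in-memory stand-in for a CRM lookup, keyed by caller ID.
# In a real deployment this would be a query to the company's CRM system.
CRM_RECORDS = {
    "+15551234567": {
        "name": "Alex",
        "open_order": "ORD-1042",
    },
}

def greet_caller(phone_number: str) -> str:
    """Build an opening prompt that reuses context the customer has
    already provided, instead of asking them to repeat it."""
    record = CRM_RECORDS.get(phone_number)
    if record is None:
        # Unknown caller: fall back to asking for details.
        return "Hi! Could you tell me your name and order number?"
    # Known caller: reference only data the customer gave us directly.
    return (
        f"Hi {record['name']}, are you calling about "
        f"order {record['open_order']}?"
    )
```

The key design point, in line with the advice below, is that the lookup only reuses information the customer explicitly gave the company, rather than data gathered from elsewhere.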
So where can it go wrong? It turns out that companies can surface far more than just previous customer conversations.
Because of ever-expanding social media, web networks, and distributed consumer data, companies can easily have access to more than just the information that they have been explicitly given by the customer.
According to Harris Poll, when contacting a company about a recent item that’s been ordered, 40% of respondents found it helpful when the company uses order history to determine why they’re contacting them.
What's not OK? When a company knows certain types of information the customer never provided (40% of respondents didn't like this), and when companies use purchase history from a different company (42% of respondents found this pretty weird).
To stay helpful instead of creepy, we believe that companies should only reference customer data that has been explicitly shared. Otherwise, the invasiveness may cross a line and lead to a lost customer due to lack of trust.
Did I ask you?
Whether it comes from a human or a machine, unsolicited advice can be pretty annoying.
In conversational AI, it usually arrives as proactive notifications and alerts. While these are usually well intentioned, the line into invasiveness is easily crossed.
According to Harris Poll, while most respondents weren't thrilled with the invasive qualities of AI, nearly three-quarters (72%) are willing to tolerate its meddling when it alerts them to a potential issue, helps them resolve a problem quickly, or solves a complex problem.
Our recommendation? Always be upfront with customers about the notifications they will receive, and give them the option to opt out.
Transparency is key
In the past, we’ve talked about the importance of education around AI so that customers feel more comfortable using technology.
This is important not only for customers but also for the companies who are implementing it. Understanding and respecting the line between helpful and creepy is possible, and it’s essential in order to create positive customer experiences.
We believe that any company that implements an AI solution, especially one that comes into direct contact with a customer, should be open and transparent to avoid any issues.
This blog post has been re-published by kind permission of Interactions. View the original post.
To find out more about Interactions, visit their website.
Call Centre Helper is not responsible for the content of these guest blog posts. The opinions expressed in this article are those of the author, and do not necessarily reflect those of Call Centre Helper.