F*** This! How to Make Sure Your Chatbots Don’t Swear at Customers

Person scolding robots

Deploying AI in customer service seems like a no-brainer – using chatbots to answer simple questions and reduce incoming call volumes, all whilst ensuring customers get the best service possible. At least in theory…

Unfortunately, the reality can be quite the opposite, and customers may not get the answers they want. Worse still, some have an experience so bad that it negatively impacts CX and brand reputation.

Take DPD for instance, whose chatbot gained notoriety with its bad behaviour, hitting news headlines and even getting fired (well disabled) as a result.

So how do you make sure this doesn’t happen to you?

To find out, we asked our panel of experts for their best advice on how to make sure your chatbots don’t swear at customers.

These can be summarized in 3 key steps:

  1. Build solid foundations
  2. Set limitations
  3. Implement ongoing monitoring and supervision

Read on to find out more…

Build Solid Foundations

There are a number of common sayings that are completely true when looking to implement a smart AI chatbot – “look before you leap”, “measure twice, cut once”, “better safe than sorry” – to name a few.

With the rise of new technology, it is easy to get caught up in the hype and jump into implementation, but as we have seen, this can be disastrous.

This is why the first step to making sure your chatbot doesn’t swear at customers (or engage in other unwanted behaviours) is to make sure you build your chatbot on solid foundations – as our experts explain:

Make Sure You Start With Large, Clean Pools of Data

Steve Morrell, Managing Director, ContactBabel

Businesses’ data assets must be in place before implementation of AI, as this is a technology that relies upon having large, clean pools of data that it can be trained on and learn from.

Without this in place, it will be virtually impossible for any AI implementation to get close to its potential.

The preparation of data will involve having an organized, non-siloed data architecture, a consistent data vocabulary, the means of accessing this data securely and quickly, and the ability to access other pieces of relevant information (e.g. customer-related metadata) to include greater context.

Without this, it will be more difficult for a machine learning process to train itself effectively, or for a chatbot to be able to use all the relevant data in order to reach a correct conclusion.

Contributed by: Steve Morrell, Managing Director, ContactBabel

Incorporate Filters That Recognize Rudeness or Swearing

Include and incorporate filters that recognize rudeness or swearing and teach the AI how to deal with it. Over time, this element of fine-tuning ensures that the AI creates content and responses that are respectful and appropriate.

It’s not just swearing that’s an issue, you might also want to block off other material surrounding topical affairs and issues that are not related to the company’s core business.

The one area that teams need to watch out for is context. Certain words or phrases may be acceptable in one context but offensive in another, so it’s important for the chatbot to understand the nuances of language and how, sometimes, the words that make up a swear word can be used innocently in another context.

It’s also worth bearing in mind that it’s not just swearing that’s an issue, you might also want to block off other material surrounding topical affairs and issues that are plain and simple not related to the company’s core business.

Contributed by: Jonathan Mckenzie, AI Contact Centre Product Manager, 8×8

Put Customer Behaviour Policies in Place to Limit Exposure to Bad Language

Finlay Macmillen at Odigo

Despite training, fail-safes, and supervision, customer outbursts or intentional provocation of chatbots can lead to response conflicts.

In such scenarios, addressing irrational customer needs or requests may result in undesirable responses, so managing expectations for these touchpoints is crucial.

This should involve carefully scripting responses for these types of situations, implementing robust escalation protocols, and establishing feedback mechanisms.

Additionally, it’s essential to consider that there is still ambiguity for some customers who don’t know when they are interacting with bots or human agents.

This raises the question of whether policies should be in place to ensure consistent behaviour expectations for customers across channels regardless of whether they are AI or agent-led.

Inconsistencies in behaviour and language tolerance across channels could well undermine professionalism and positive experiences.

Contributed by: Finlay Macmillen, Junior Sales Executive, Odigo

Any Chatbot Must Be Able to Follow Brand Guidelines

Elizabeth Tobey at NICE

Organizations must ensure that any chatbot follows brand guidelines and generates appropriate and accurate responses. For enterprise-grade AI, this means chatbots should only use chatbots leveraging AI purpose-built for CX.

This means the AI is built from billions of historical customer interactions that consider brand language and other crucial guardrails. This should then be layered with generative AI to produce conversational responses that respond just as a human would.

If organizations don’t follow this formula and use chatbots trained using generic AI, or that generate responses from the open internet, they risk their reputation. These chatbots can generate inappropriate and inaccurate responses that can damage a brand.

Contributed by: Elizabeth Tobey, Head of Marketing, Digital Solutions, NICE

Don’t Leave Your Chatbots in the Hands of Well-Meaning Amateurs

In the AI world, knowledge management is not something that is a part-time job or that can be handled by amateurs.

AI experts must understand both data and also the real-life business/customer issues

Consider developing more full-time, expert roles to support knowledge bases and to enable understanding of data models and flows across the entire enterprise.

AI experts must understand both data and also the real-life business/customer issues, and this resource can be difficult to find.

Some businesses use ‘super-user’ teams of experienced agents who understand which requests are most suited to automation, and the process steps that are required for successful outcomes.

Contributed by: Steve Morrell, Managing Director, ContactBabel

Teach AI How to Respond to the People Just Wanting to Push Boundaries

Some people will come to your chatbot just to try and game it and get it to do something it isn’t meant to.

You can mitigate this with some creative thinking and testing. For others, it’s about teaching the AI how to respond to queries and what it can and can’t say.

Contributed by: Jonathan Mckenzie, AI Contact Centre Product Manager, 8×8


Set Limitations

Once you’ve set the basic foundations, you need to take a look at what your chatbots can’t and shouldn’t do.

Limiting your chatbot is a great way to ensure that it doesn’t swear at customers and is a useful tool for your contact centre – as our panel explores:

Your Chatbot Needs the Ability to Transfer to a Human Agent

There are three things to bear in mind when deploying AI chatbots:

  • Generative AI components need a tight set of constraints.
  • Your chatbot needs the ability to transfer to a human agent if it can’t answer with a high degree of certainty.
  • A chatbot in a locked box is useless; it’s crucial to integrate it with surrounding systems so it has the data to inform its outputs.
Pierce Buckley at babelforce.

When chatbots misfire and deliver outputs like swearing, it’s because the deployment has skipped one of the above.

Constraints (guardrails) are needed to ensure that the chatbot simply cannot swear.

That’s obvious. But the integration – both with systems and with agents – is also fundamental.

Bad generative AI deployments are an attempt to make automation quick and cheap. But it’s a false economy, just like skipping agent training would be.

Contributed by: Pierce Buckley, CEO & Co-Founder, babelforce

Keep in Mind There’s More Than 100 Swear Words

Work out what words and phrases you want to be left out. Now, bear in mind that there’s more than 100 swear words in English, but you also need to think of other languages that customers may interact with and also words or phrases that aren’t officially recognized as swear words but can be interpreted as swearing or rude.

With all of this, neural language processing or predefined lists can help gatekeeping considerably. A decent starting point can be using swear lists that are used to block profanities on companies’ social media feeds and pages.

Contributed by: Jonathan Mckenzie, AI Contact Centre Product Manager, 8×8

Develop Testing Scenarios to Account for a Broad Range of Prompts

Neville Doughty, Partnership Director

It’s impossible to overplay testing. Testing scenarios must account for a broad range of prompts, so you know how the GenAI will react even if a user is intent on mischief.

And don’t forget, just because you can doesn’t mean you should. Use data-driven insights to guide you in identifying and prioritizing use cases.

Be clear about what the bot can deal with and what needs to be handed off to an advisor.

Customers always need a ‘get out’ from an automated flow, otherwise their frustration could contribute to agent attrition challenges, or they may just go elsewhere.

Contributed by: Neville Doughty, Partnership Director, Contact Centre Panel


Implement Ongoing Monitoring and Supervision

Ben Booth at MaxContact

Once your chatbot has been set up, given limitations, and put live, you can just leave it, right? Wrong!

For your chatbot to continue to operate at maximum efficiency and give customers the level of service they deserve, you need to implement a loop of continuous monitoring and supervision – as our industry experts show:

Regularly Review and Update Your Chatbot’s Knowledge Base

AI chatbots can be a real asset for contact centres, but it’s crucial to ensure they remain under control. As they become more sophisticated and autonomous, there’s a risk they could start to deviate from their intended purpose and cause issues.

That’s why it’s important to regularly review and update your chatbot’s knowledge base and responses.

This helps ensure it provides accurate, up-to-date information and handles customer queries effectively.

By maintaining a well-trained, closely monitored chatbot, you can harness its potential to enhance customer service while mitigating any risk.

Contributed by: Ben Booth, CEO, MaxContact

Document Customers Who Repeatedly Provoke Your Chatbots

Jonathan Mckenzie at 8x8

Another aspect that’s worthy of consideration is if someone repeatedly swears or provokes the chatbot, have a support ticket raised against them or have it logged in their customer file.

The chatbot can even point out that the engagement is being recorded and that if sworn at, it will give them a warning and then perhaps even terminate the session, just as a human would.

Equally, you can set up a routine within the AI that if people do swear at it, it transfers the call to a human to resolve – but you could see that being abused as some people may swear just to speak to a person.

Contributed by: Jonathan Mckenzie, AI Contact Centre Product Manager, 8×8

Think MMOG – Monitoring, Moderation, Oversight, and Guidelines

A good framework to use to keep a close eye on your chatbot is MMOG – Monitoring, Moderation, Oversight and Guidelines:

Implement Continuous MONITORING

Regularly monitor chatbot interactions to identify any instances of inappropriate language or behaviour.

This ongoing oversight allows for prompt intervention and resolution of issues as they arise, ensuring that interactions remain professional and align with company standards.

Use Profanity Filters and Content MODERATION

Incorporate profanity filters and content moderation tools provided by the chatbot platform or third-party services.

These tools can automatically flag and block inappropriate content in real time, helping to maintain respectful and appropriate interactions with customers.

Tatiana Polyakova, COO, MiaRec

Enable Human OVERSIGHT and Intervention

Implement mechanisms for human oversight, allowing contact centre agents to monitor chatbot interactions and intervene when necessary.

This provides an additional layer of control to ensure that chatbot interactions align with company guidelines and values.

Establish Clear GUIDELINES and Policies

Develop and communicate clear guidelines and policies for chatbot usage, including acceptable language and behaviour standards.

Ensure that contact centre agents are trained on these guidelines and equipped to enforce them effectively during customer interactions.

Contributed by: Tatiana Polyakova, COO, MiaRec

Use Tried-and-Tested ‘Safe’ Responses

Generally, the easiest way to ensure that a bot doesn’t swear is to use scripted responses.

While these can be extremely effective for resolving predictable queries, the pursuit of expanded functionality by many organizations and pressure from frustrated customers is driving the application of more advanced AI models.

There is no way around the fundamental fact that getting the best out of AI takes time.

However, as many have seen in the news, the benefits of these innovative technologies have the potential to be outweighed by the risks associated with the poor application of their ‘intelligence’, or lack of it, as has been demonstrated at times.

There is no way around the fundamental fact that getting the best out of AI takes time.

This process involves elements of scripting (or reviewing AI-generated suggestions) for a comprehensive range of tried-and-tested ‘safe’ responses which can be provided based on advanced AI analysis of customers’ queries.

Contributed by: Finlay Macmillen, Junior Sales Executive, Odigo


The Use of Chatbots Is on the Rise

If you follow this advice, your chatbot can excel, giving customers the service they deserve without unwanted outcomes. And why is it so important to get it right?

According to our recent survey, 67.2% of contact centres are either thinking about, or have already implemented, Generative AI, as shown by the graph below:

2023 Survey Graph 718px What Are Your Thoughts on Generative AI for Your Organization?

Not only this, but fully automated AI-enabled webchat has increased very significantly in recent years, as shown in the latest ContactBabel research below:

Contact babel research graph

Which means avoiding future issues and ensuring your chatbots are always on their best behaviour is now more important than ever.

For more great insights and advice from our panel of experts, read these articles next:

Follow Us on LinkedIn

Recommended Articles

A photo of a robot representing a chatbot
Chatbots: How Your Business SHOULD Be Using Them – With Examples
Customer and chatbot dialog on smartphone screen.
Best Examples of Chatbots and What Makes Them Great
A broken robot on the ground
4 Ways To Make Sure Your Chatbots Reach Their Full Potential
A man stands in front of an arrow pointing towards the sky
The Rise Of Chatbots: How AI Is Changing Customer Service