Nikolett Török at Cyara explains commonly confused chatbot terms.
When you stop to think about it, it seems the world we’re living in isn’t all that different from what science fiction writers imagined in the 1960s.
We don’t have flying cars or time travel, but a lot of the pieces are there. Computers have opened doors that once only seemed possible in a novel or film.
In fact, computers have come so far that, on some level, we relate to them like other humans.
We don’t merely use our devices, we interact with them — even talk to them — almost every day. Whether it’s virtual assistants or chatbots, these human-like devices and applications are now ingrained in our day-to-day lives.
Yet, behind each of those interactions is a complex infrastructure of code and artificial intelligence (AI) — technology that has been developed over decades. And when you start to peek underneath the hood, you realize it’s much more complicated than it looks on the surface.
For companies looking to leverage this technology and use chatbots to improve customer experiences, understanding these tools can seem like a daunting task.
Terms like AI, natural language understanding, intents, and entities might make you feel like you’re lost in a Philip K. Dick novel with no way out.
Chatbot, Conversational AI, or Virtual Assistant?
First, it’s important to distinguish between a few basic terms that often get thrown together and used incorrectly. Chatbots, conversational AI, and virtual assistants may all play a role in your business, but they aren’t the same thing.
Virtual assistants may share some similarities with chatbots or other forms of conversational AI (more on that next), but they serve a completely different purpose.
A virtual assistant functions in the same way a human personal assistant would — it helps you get tasks done, stay organized, or manage daily workflows.
These are the Siris and Alexas of the world, and although they can help you run your business, they’re not designed for customer service in the way that chatbots are.
A chatbot is an automated, text-based program that interacts directly with your customers to answer questions, address issues, and (if needed) connect them to the correct human agents.
Its job is to collect information from users, interpret what they need, and attempt to solve their problems in the most efficient way possible. In other words, customer service is job number one for a chatbot.
Conversational AI, on the other hand, uses Natural Language Processing in conjunction with chatbots and virtual assistants. This allows customers to interact with the AI as if they were speaking to another human being rather than a computer.
AI vs. Machine Learning
To truly understand today’s chatbot technology, you need a solid grasp of artificial intelligence and some of the underlying technology that supports it. And one of the first ways people get lost is by confusing AI with machine learning (ML). These are related but distinct concepts.
In some form, AI has been around since the 1950s. Computers of this era applied early forms of this technology, which uses software and programming to mimic human intelligence. Over the years, it’s gotten more complex and impressive, but this basic goal has remained the same.
Machine learning is a subset and more advanced form of AI technology. Rather than merely relying on preprogrammed instructions to imitate human thought and responses, computers equipped with ML can actually learn from their interactions.
This is, of course, much more human-like than a preprogrammed chatbot, as it allows computers and bots to have a degree of independence in how they think and react.
Closely related to ML is the concept of “deep learning.” This is machine learning at its most advanced. Because bots with this technology are built on multilayered neural networks, they can analyze data and inputs with greater depth and accuracy.
Why do these distinctions matter for chatbots? It all comes down to what you need your bots to do. Most of today’s bots are built on AI, though some older ones simply rely on basic rules-based coding to handle only the simplest, most predictable customer issues.
And even if a bot has AI, it may not be equipped with ML capabilities, much less deep learning ones. The more of these abilities a chatbot has, the more it can handle in terms of customer service without handing over control to a human agent.
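To make the contrast concrete, here is a minimal sketch of what “basic rules-based coding” looks like in practice. The keywords and canned responses below are hypothetical examples, not a real product’s logic — the point is that such a bot can only match predefined patterns and has no ability to learn or interpret anything outside them.

```python
# A toy rules-based chatbot: hard-coded keyword matching, no AI or learning.
# All rules and responses here are hypothetical examples.
RULES = {
    "hours": "We're open 9am-5pm, Monday through Friday.",
    "refund": "Refunds are processed within 5 business days.",
    "agent": "Connecting you to a human agent now.",
}

def rules_based_reply(utterance: str) -> str:
    text = utterance.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    # Anything outside the predefined rules falls through to a human.
    return "Sorry, I didn't understand. Let me find a human agent."

print(rules_based_reply("What are your hours?"))
```

Notice that a slight rephrasing the rules don’t anticipate (“When do you open?”) would fall straight through to the fallback — exactly the limitation that ML-equipped bots are designed to overcome.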
NLU vs. NLP
Delving further beneath the hood of AI, we come to some of the key technologies that drive it, especially when it comes to the conversational AI that chatbots use.
Natural Language Understanding (NLU) and Natural Language Processing (NLP) are two tools that give a bot the ability to interpret and comprehend human language.
“Natural” is a key word here, as it draws a line between human languages, which develop a life of their own and evolve naturally over time, and constructed languages that computers use to communicate according to stricter, more rigid rules.
Getting a computer to understand the nuances and personal quirks of natural human language is a difficult task.
For a chatbot to do its job effectively, it needs both NLP and NLU skills. NLP helps it process the words, semantics, and structure of human language to sort out the basic information being communicated.
NLU, which is really a subset of NLP, enables the bot to dig deeper into context clues to derive the true meaning and intent behind the words.
The two technologies rely on each other to accurately interpret human language. Without NLP, a chatbot wouldn’t be able to analyze the basics of syntax and grammar to gain a surface understanding of customer questions or inputs.
But it requires NLU to take that understanding a step further and do the complex work of following the twists and turns of a conversation, something we humans take for granted.
For the bot to contribute to the conversation, it needs a third technology, Natural Language Generation (NLG). This last piece allows the bot to translate its responses from zeros and ones into human-like words of its own.
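The three stages described above can be sketched as a simple pipeline. In a real bot each stage would be a trained model; here, hypothetical hand-written rules stand in for them purely to show how the pieces hand off to one another.

```python
# A toy illustration of the NLP -> NLU -> NLG pipeline described above.
# The rules and intent names are hypothetical stand-ins for real models.

def nlp_tokenize(utterance: str) -> list:
    # NLP stage: break the raw text into basic tokens (words).
    return utterance.lower().strip("?!.").split()

def nlu_intent(tokens: list) -> str:
    # NLU stage: infer what the user actually wants from the tokens.
    if "hours" in tokens:
        return "ask_store_hours"
    return "unknown"

def nlg_respond(intent: str) -> str:
    # NLG stage: turn the bot's internal result back into human words.
    responses = {
        "ask_store_hours": "Which location would you like hours for?",
        "unknown": "Sorry, could you rephrase that?",
    }
    return responses[intent]

tokens = nlp_tokenize("What are your hours?")
print(nlg_respond(nlu_intent(tokens)))  # asks a follow-up about location
```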
Intents, Entities, and Utterances: A Complex Relationship
Now that we’ve explored some of the key cogs of conversational AI, we come to three of the most important concepts for how chatbots apply NLU and NLP: intents, entities, and utterances.
In simplest terms, intent represents what the user is aiming for when they ask a question or submit a request. These user inputs are known as “utterances,” and it’s the chatbot’s job to derive user intent from them.
Discerning intent from an utterance, however, is rarely a simple task. For instance, a customer might submit the utterance, “What are your hours?”
But what location are they asking about? And what day? What department? Even a simple question may carry many layers of hidden intent. And, to make matters more confusing, there are countless different utterances a user could input to ask for store hours.
Entities play a key role in clarifying the meaning behind user utterances. These are the modifiers people use to clarify intent when they communicate. In the above utterance, “hours” is an entity, but a more precise utterance would include additional entities such as “today” and “downtown.”
Provided only with “hours,” though, a well-programmed bot would readily ask follow-up questions to extract more entities and drill down toward an accurate interpretation of intent.
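This follow-up behavior is often implemented as “slot filling”: each intent requires certain entities, and the bot keeps asking until every slot is filled. The sketch below uses the store-hours example from above; the entity lists and slot names are hypothetical, chosen only to illustrate the pattern.

```python
# A sketch of intent/entity slot filling, assuming a hypothetical
# "store_hours" intent that requires "location" and "day" entities.
REQUIRED_ENTITIES = {"store_hours": ["location", "day"]}

KNOWN_LOCATIONS = {"downtown", "uptown"}
KNOWN_DAYS = {"today", "tomorrow", "monday", "sunday"}

def extract_entities(utterance: str) -> dict:
    # Pull any recognized entity words out of the utterance.
    entities = {}
    for word in utterance.lower().strip("?!.").split():
        if word in KNOWN_LOCATIONS:
            entities["location"] = word
        elif word in KNOWN_DAYS:
            entities["day"] = word
    return entities

def next_question(intent: str, entities: dict):
    # Ask a follow-up for the first required entity that is still missing.
    for slot in REQUIRED_ENTITIES[intent]:
        if slot not in entities:
            return f"Which {slot} are you asking about?"
    return None  # all slots filled; the bot can answer directly

entities = extract_entities("What are your hours today?")
print(next_question("store_hours", entities))  # asks for the missing location
```

Production NLU engines do the same thing with statistical entity recognition rather than word lists, but the loop — derive intent, extract entities, prompt for whatever is missing — is the same.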
This relationship between utterances, intents, and entities is foundational to properly understanding meaning in communication.
That’s why, ultimately, evaluating a chatbot’s performance largely revolves around this relationship. Many of the terms that relate to bot performance connect to it in some way — from entity deviation risks to mixed intents to entity confidence.
Find Your Way in the World of Chatbots
Chatbots are only growing more critical for customer service, but getting used to the technology can feel intimidating. If you still feel like you’re living in a science fiction movie, don’t worry. We can help you navigate this new world of conversational AI to deliver better CX.