Why Aren’t Chatbots Delivering on ROI?

With a near-half adoption rate in our readers’ contact centres, why are chatbots still ranking so low for ROI when it comes to improving customer experience?


To find out, we put the spotlight on what’s going wrong – bringing our latest Call Centre Helper research together with expert insights from Shep Hyken and Vinay Parmar to explore whether chatbots are simply being judged more harshly than humans, and how to set everyone up for long-term success.

With extensive research into the latest ways that contact centres are looking to improve their people, processes, and technology, what better resource is there to benchmark your operation?

Download the 2025 report to dive deeper into the data and see how your team compares.

What’s Going Wrong with Chatbots Right Now?

A central issue runs through today’s chatbot deployments: efficiency over richness. As Vinay Parmar points out, too many bots are built for speed and tidy, linear flows rather than the layered, adaptive conversations customers actually have.

“We have to design chatbots that reflect how humans actually interact,” he argues. “A chatbot often handles one linear task at a time. But in real conversations, there’s more of a flow. One question might spark another, and customers might not even know exactly what they need until they start talking.

“Right now, a lot of bots fall apart the moment the conversation stops being a neat, straightforward transaction.” And those awkward hand‑offs show up in our findings as widespread “mediocre” experiences.

The ROI debate also turns on resolution, not deflection: readers tell us that what customers value most is First Contact Resolution and knowledgeable advisors – two areas where poorly managed bots can struggle to deliver.

After all, containment without closure simply shifts cost downstream and dents trust in the channel. This mismatch explains why leaders still back broader self‑service and personalization, while chatbots draw just 8% of the “best value for money” vote.

OUR DATA AT A GLANCE

Chatbots rank low for ROI focus: Only 8% of leaders picked chatbots as where they’d get the “maximum value for money” in CX, far behind self‑service (38%) and personalization (18%).

Quality is the sticking point: 71% of respondents rated other organizations’ chatbots mediocre or worse.

Adoption ≠ satisfaction: Chatbot usage is up to 49% of contact centres (from 43% YoY), but leaders still flag consistency and effectiveness issues.

Have Chatbot Expectations Shifted?

Brands are also faced with ever-higher customer expectations as new and improved chatbots hit the market, as Shep Hyken explains, “The problem is if they haven’t recently invested (in their chatbots), they’re dealing with old technology, which is not meeting the bar that the customer has set.

If their best experience was with a new large language model, high level ChatGPT type of chatbot, then that’s their new benchmark for what’s great. If companies say ‘well, we invested just three years ago’, it’s basically antique compared to now”.

It’s not just old vs. new tech that’s driving customers’ higher expectations. Chatbots and other elements of self-service have been coming under increased scrutiny, as customers no longer compare you only to your direct competitors – they compare you to the best experiences they’ve had from any company.

As Shep continues, “Today’s consumers are growing accustomed to instant gratification through rockstar brands like Amazon, which give them answers quickly, efficiently, and intuitively. Customers now expect or hope for similar experiences with all brands they interact with”.

These kinds of standards heap further pressure on both leaders and their chatbots to constantly be at their best, and magnify every instance of failure.

How To Get Chatbots Right

When looking to get it right, it’s important to note that there’s also a KPI bias at work here. Organizations often judge chatbots on contact reduction while judging people on resolution and quality.

Frame success that way and chatbots will always look second‑best, even when they shave minutes off journeys. Change the viewpoint to resolved first‑time, low‑effort, high‑quality outcomes, and good conversational AI starts to look like a genuine performance lever, not just a volume‑shifter.

Not only that, but if chatbots are to climb the ROI table in future, they must be designed and run like products whose main goal is resolution.

Start with use‑cases where the chatbot can genuinely complete the job (simple account actions, policy look‑ups with live data, order and delivery updates), and set outcomes to what customers tell us they value: first‑time answers and knowledgeable help.

That means fixing the knowledge layer before adding more intents. Govern your knowledge base, connect it cleanly to the chatbot (e.g. using retrieval augmented generation, or RAG, to limit hallucinations), and keep it fresh with weekly updates driven by analytics on ‘couldn’t understand’, policy mismatches and forced escalations.
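To make that grounding principle concrete, here is a minimal Python sketch – the knowledge articles, function names and keyword-overlap scoring are illustrative assumptions, not any specific vendor’s API. The point it shows is simple: the bot answers only from approved knowledge, and escalates rather than guesses when nothing relevant is found.

```python
# Minimal sketch of "ground answers in governed knowledge".
# The articles, scoring method and thresholds are illustrative assumptions;
# a real deployment would use a proper retriever and an LLM, but the
# principle is the same: answer only from approved knowledge, escalate otherwise.

KNOWLEDGE_BASE = [
    {"id": "KB-101", "title": "Delivery timescales",
     "text": "Standard delivery takes 3-5 working days. Next-day delivery cut-off is 8pm."},
    {"id": "KB-204", "title": "Changing a direct debit date",
     "text": "Customers can move their direct debit date once per billing cycle via My Account."},
]

def retrieve(question: str, min_overlap: int = 3):
    """Return the best-matching article, or None if nothing clears the bar."""
    words = set(question.lower().split())
    best, best_score = None, 0
    for article in KNOWLEDGE_BASE:
        score = len(words & set(article["text"].lower().split()))
        if score > best_score:
            best, best_score = article, score
    return best if best_score >= min_overlap else None

def answer(question: str) -> str:
    article = retrieve(question)
    if article is None:
        # No grounded source: hand off rather than guess (limits hallucinations).
        return "ESCALATE: no knowledge article found for this question."
    return f"Based on {article['id']} ({article['title']}): {article['text']}"

print(answer("How many working days does standard delivery take?"))
print(answer("Can I get a refund on my gift card?"))  # escalates – not in the knowledge base
```

The same analytics that drive the weekly updates (‘couldn’t understand’ rates, forced escalations) tell you which articles to add or fix next.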

When the conversation needs a person, the bot should improve the hand‑off, passing the transcript, detected intent and customer state so the advisor picks up mid‑stride.
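To illustrate what that context might look like, here is a hypothetical hand‑off payload in Python – the field names and values are assumptions for the sake of example, not a particular platform’s schema – showing the kind of detail that lets an advisor pick up without asking the customer to repeat themselves.

```python
# Illustrative only: a hypothetical hand-off payload a bot could pass to the
# advisor's desktop so the conversation resumes mid-stride. Field names are assumptions.
from dataclasses import dataclass, field, asdict
from typing import List
import json

@dataclass
class HandoffContext:
    conversation_id: str
    detected_intent: str          # e.g. "change_delivery_address"
    customer_state: str           # e.g. "authenticated", "frustrated", "new"
    transcript: List[str] = field(default_factory=list)
    attempted_actions: List[str] = field(default_factory=list)  # what the bot already tried

handoff = HandoffContext(
    conversation_id="chat-48213",
    detected_intent="change_delivery_address",
    customer_state="authenticated",
    transcript=[
        "Customer: I need to change where my order is being sent.",
        "Bot: I can help with that. What's the new address?",
        "Customer: It's a business address, does that matter?",
    ],
    attempted_actions=["address_validation_failed"],
)

# However the payload travels (CRM note, CTI screen-pop, API call), the goal
# is the same: the advisor never asks the customer to start again.
print(json.dumps(asdict(handoff), indent=2))
```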

Operationally, move beyond ‘set and forget’. Treat conversation design, prompts, guardrails and training data as living assets to be reviewed at regular intervals alongside QA and data analytics insights.

Report success on a balanced scorecard too, focusing on resolution rate (not just containment), effort/ease (e.g. NetEasy), time‑to‑resolve across the blended bot‑to‑human path, and the quality of escalations.

Tie that to a simple ROI line: chatbot‑resolved contacts × cost‑per‑contact avoided × a quality factor minus the build, tuning and knowledge upkeep.
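To show the shape of that calculation, here is a small worked example in Python using made-up numbers – swap in your own volumes, costs and upkeep figures.

```python
# A worked example of the ROI line above, with illustrative numbers only.

chatbot_resolved_contacts = 12_000   # contacts fully resolved by the bot per month
cost_per_contact_avoided = 4.50      # average cost of the human contact it replaced (£)
quality_factor = 0.85                # discount for re-contacts / lower-quality resolutions
monthly_upkeep = 9_000.00            # build amortisation, tuning and knowledge upkeep (£)

gross_saving = chatbot_resolved_contacts * cost_per_contact_avoided * quality_factor
net_value = gross_saving - monthly_upkeep

print(f"Gross saving: £{gross_saving:,.2f}")  # £45,900.00
print(f"Net value:    £{net_value:,.2f}")     # £36,900.00
```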

With that framing, leaders stop asking whether chatbots reduce contacts and start asking whether they reduce effort and improve outcomes – the shift that moves conversational AI from ‘mediocre’ to meaningful.

This is the practical expression of Vinay’s richness principle and Shep’s balance: digital for what it does best, humans for the rest.

Stop Deploying Chatbots in the Wrong Places, for the Wrong Tasks

Chatbots aren’t failing – immature chatbot design is. Too many are optimized for a tidy script rather than a human conversation, and deployed in the wrong places for the wrong tasks.

Combine this with doubt from leadership about their ROI relative to other channels, and it’s no wonder chatbots get the reputation they have.

So, if you only do one thing today, stop and think if your chatbots are really failing your customers, or if in fact you are failing your chatbots!

Author: Xander Freeman
Reviewed by: Robyn Coppell


Recommended Articles

8 Ways to Improve Chatbots and Boost Customer Satisfaction
Are Chatbots the Tech We All Love to Hate?
Best Examples of Chatbots and What Makes Them Great
Customer Service Chatbots: Benefits and Examples