Every company wants to be seen as customer-first, but few are ready to have the real conversation: “How do you build customer trust when your systems are powered by AI?”
This was the question at the heart of a recent Call Centre Helper webinar on Responsible AI and Customer Interactions: Compliance Without Compromise.
Hosted by our very own Xander Freeman and joined by James Edmonds (Duty CX) and Jonathan Rosenberg (Five9), the session stripped AI back to its essentials: not how to use it faster, but how to use it well – exploring the hard questions leaders can no longer afford to dodge, including:
- What happens when AI gets it wrong?
- How do you protect vulnerable customers in an automated journey?
- Who’s responsible when bias creeps into your AI systems?
- How do you prove you’re doing the right thing to regulators, to stakeholders, and to the people who actually use your products?
The stakes are high, so let’s explore exactly why this conversation needs to be on every leader’s radar right now…
Trust Still Rules (Especially When Regulators Are Watching)

James Edmonds came out swinging with one simple truth: AI isn’t a thought experiment any more. It’s here, it’s live, and it’s already shaping customer experiences – for better or worse.
In heavily regulated sectors like insurance or utilities, “experience” isn’t just about convenience. It’s about confidence. Customers want to feel understood, treated fairly, and safe if something goes wrong.
That’s where the tension lies. He shared an example from an insurer that automated part of its claims process. Operationally brilliant.
But customers said it felt cold, like no one really cared. That’s efficiency versus empathy – and if you’re in a vulnerable customer situation, that’s not just bad optics, it’s a compliance risk.
So how do you fix that? Here are 5 simple-but-powerful principles that help AI feel human and stay legal:
1. Be Transparent
Don’t hide the bot. Literally tell customers they’re talking to AI. One insurer added a simple line at the start of its chatbot conversation: “You’re speaking to an AI assistant, but you can ask for a human anytime.”
James called this “just a good idea” – a small statement that instantly builds trust and gives customers permission to engage more openly.
2. Design for Empathy
AI doesn’t “feel”, but it can detect emotion. You can train it to recognize tone, urgency, or distress and route accordingly. Empathy is now a design choice, not a personality trait.
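To make that concrete, here’s a rough sketch (ours, not the webinar’s) of how routing on detected distress could work. The threshold and cue words are illustrative, and the sentiment score would come from whatever analytics your platform already provides:

```python
# Illustrative thresholds and cue words – not from the webinar.
DISTRESS_CUES = ("complaint", "bereaved", "can't pay", "urgent")

def needs_human(message: str, sentiment_score: float) -> bool:
    """Return True when tone or wording suggests the customer should reach a person."""
    sounds_distressed = sentiment_score < -0.5  # strongly negative tone
    mentions_distress = any(cue in message.lower() for cue in DISTRESS_CUES)
    return sounds_distressed or mentions_distress
```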
For advice on staying fast and efficient in business without losing the human touch, read our article: Is It Really Possible to Balance Efficiency With Empathy?
3. Always Include an Escape Route
AI should never lead to a dead end. If the conversation goes beyond the bot’s depth, escalation to a human should be seamless – not something the customer has to fight for.
4. Leave a Paper Trail
Always ask yourself, “If something goes wrong, can we reconstruct what happened?” and build in auditability from day one.
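As a purely illustrative example (the field names below aren’t a standard), “auditability from day one” can be as simple as writing one record per AI decision with enough context to reconstruct it later:

```python
import json
import uuid
from datetime import datetime, timezone

def log_ai_decision(logfile, *, customer_id, model_version, prompt, response, action_taken):
    """Append one audit record per AI decision so the interaction can be reconstructed later."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "customer_id": customer_id,
        "model_version": model_version,
        "prompt": prompt,
        "response": response,
        "action_taken": action_taken,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")  # one JSON line per decision
```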
5. Use AI to Spot Vulnerability, Not Create It
Language, tone, and behavioural patterns can tell you a lot about a customer’s state of mind. Use AI to flag potential vulnerability early and feed that insight straight to your CRM.
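Here’s a hedged sketch of what that could look like in practice – the cue list and the CRM update call are placeholders rather than any particular vendor’s API:

```python
# Illustrative only: the cue list and the `update_crm` callable are placeholders.
VULNERABILITY_CUES = ("recently bereaved", "struggling to pay", "carer", "confused")

def flag_vulnerability(transcript: str, customer_id: str, update_crm) -> list[str]:
    """Flag potential vulnerability early and pass the insight to the CRM for human follow-up."""
    matched = [cue for cue in VULNERABILITY_CUES if cue in transcript.lower()]
    if matched:
        update_crm(customer_id, {"vulnerability_flag": True, "signals": matched})
    return matched
```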
When Compliance Is Built In at the Start, It Drives Growth – Not Bureaucracy
Next, James zoomed out to the regulatory landscape, from the UK’s Consumer Duty to Ofcom and ICO data ethics, and reframed compliance as a commercial advantage. Whilst most companies see compliance as red tape, smart ones use it as a differentiator.
He told the story of a motor insurer that redesigned its quote process with AI-driven suitability checks. Complaints dropped, conversions rose, and customer satisfaction skyrocketed.
After all, when compliance is built in at the start – not bolted on at the end – it drives growth, not bureaucracy.
Responsible AI Is a Feature, Not a Checkbox

Next up, Jonathan Rosenberg – who has been building AI systems for over six years at Five9 – brought the engineer’s perspective.
He opened with a stat that hit home: 60% of businesses say it’ll take them at least a year to create proper governance frameworks for AI, and two years to earn back customer trust.
It’s a worrying thought, but there is plenty of scope to do things well. So, to define what success looks like, Jonathan broke it down into three pillars:
1. Handle Customer Data Like It’s Gold Dust
“No one wants to see customer transcripts on a laptop screen,” said Jonathan, as he explained why Five9 moved all experimentation into a walled garden where sensitive data never leaves secure environments.
He also shared more about the hidden supply chain of AI: vendors using other vendors’ models, which use other vendors’ infrastructure.
He went on to explain Five9’s “one-hop consent” rule: if clients give permission to use data for product improvement, it doesn’t get passed down the chain. Ever.
For more advice on using the data within your contact centre, read our article: Are You Embracing the Potential of Unstructured VoC Data?
2. Respect Each Customer’s Personal Preferences
If a customer asks for a human, give them one. The idea of replacing your contact centre with bots is a fantasy. AI should support people, not erase them.
And internally? Scale your oversight to the impact of the AI decision. If a bot’s error could affect someone’s pay or performance (say, in quality scoring), you need human review built in.
3. Build Guardrails and Measure Everything
The headline message: AI trust is a dial, not a switch.
Jonathan calls it The Dial of Trust. It’s about setting the right level of autonomy for each use case – balancing reward and risk.
Here’s what that looks like in practice:
- Low Autonomy – Classic rule-based bots. Safe, predictable, boring.
- Medium Autonomy – AI listens but doesn’t speak: analysing tone, intent, or next steps while humans control the words.
- Selective Autonomy – AI handles small sections, like confirming an address, without touching sensitive data.
- High Autonomy – AI speaks freely from a trusted knowledge base.
- Full Autonomy – AI can act (calling APIs, fulfilling tasks) but only within tightly controlled boundaries.
The point isn’t to max it out; it’s to know exactly how much control you’ve given away and to keep monitoring it.
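One illustrative way to keep that visible is to write the dial setting down for every use case. The levels map onto Jonathan’s scale, but the use cases and code below are our own sketch, not Five9’s implementation:

```python
from enum import IntEnum

class Autonomy(IntEnum):
    LOW = 1        # classic rule-based bot
    MEDIUM = 2     # AI analyses, humans control the words
    SELECTIVE = 3  # AI handles small, low-risk sections
    HIGH = 4       # AI speaks freely from a trusted knowledge base
    FULL = 5       # AI can act (call APIs) within tight boundaries

# Hypothetical register of how far the dial has been turned for each use case
USE_CASE_AUTONOMY = {
    "faq_chatbot": Autonomy.HIGH,
    "address_confirmation": Autonomy.SELECTIVE,
    "quality_scoring": Autonomy.MEDIUM,  # errors here affect people, so humans keep control
}

def dial_setting(use_case: str) -> Autonomy:
    """Default to the lowest setting for anything that hasn't been explicitly reviewed."""
    return USE_CASE_AUTONOMY.get(use_case, Autonomy.LOW)
```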
He also explained how to spot (and stop) hallucinations. In RAG (Retrieval-Augmented Generation) systems, a watchdog model checks whether every AI answer is actually grounded in the retrieved data. If it’s not, it gets flagged for review.
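The webinar didn’t go into implementation detail, but a minimal sketch of that watchdog idea might look like the snippet below, with a naive word-overlap check standing in for whatever grounding model is actually used:

```python
def is_grounded(answer: str, retrieved_passages: list[str], threshold: float = 0.6) -> bool:
    """Naive check: what fraction of the answer's words appear in the retrieved data?"""
    answer_words = set(answer.lower().split())
    source_words = set(" ".join(retrieved_passages).lower().split())
    if not answer_words:
        return True
    overlap = len(answer_words & source_words) / len(answer_words)
    return overlap >= threshold

def watchdog(answer, retrieved_passages, flag_for_review):
    """Pass grounded answers through; send anything else for human review."""
    if not is_grounded(answer, retrieved_passages):
        flag_for_review(answer, retrieved_passages)  # possible hallucination
        return None
    return answer
```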
Responsible AI Builds Trust, Confidence, and Commercial Advantage
When done right, responsible AI builds trust, confidence, and commercial advantage. Quite simply, compliance becomes the reason customers will continue to choose you!
So…
- Start with one small, low-risk use case and design it like a regulator’s watching
- Bake in James’s five principles from the start
- Give humans proper oversight
- Measure everything
- Review your dashboards weekly with compliance and CX in the same room
Then, only when the data proves it’s working, turn the dial up!
If you want to find out what else was discussed in the webinar, simply follow this link to watch: Responsible AI and Customer Interactions: Compliance Without Compromise.
For more information on managing and using technology for customer service, read these articles next:
- Create a “Win–Win” Self-Service Strategy
- What’s Next for Voice of the Customer (VoC)?
- Can AI Really Handle Customer Complaints?
Author: Stephanie Lennox
Reviewed by: Jo Robinson
Published On: 12th Feb 2026
