The EU Artificial Intelligence Act (AI Act), formally adopted in 2024, is the first comprehensive AI regulation in the world.
Yet the ripple effects for contact centres and customer experience (CX) operations are only just beginning to be understood.
To find out more, we asked CX thought leader Charlie Adams to explore what it means for deploying AI in day-to-day operations, and how to stay ahead of compliance while still embracing AI-driven transformation.
The AI Act in Brief
The AI Act introduces a risk-based framework for classifying and regulating AI systems across the EU. AI systems are divided into four categories:
- Unacceptable Risk – Prohibited (e.g. social scoring, manipulative AI)
- High Risk – Subject to strict compliance (e.g. AI for worker management, performance assessment)
- Limited Risk – Requires transparency (e.g. chatbots, generative AI content)
- Minimal Risk – Unregulated (e.g. spam filters, AI in video games)
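To make the framework concrete, here is a minimal sketch (in Python) of how a CX team might triage its AI inventory against these four tiers. The use cases and their mappings are illustrative assumptions, not a legal classification – Annex III and Article 5 of the Act govern the real answer:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict compliance obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Illustrative mapping of common contact centre use cases to tiers.
# Real classification requires legal review, not a lookup table.
USE_CASE_TIERS = {
    "emotion_recognition_of_agents": RiskTier.UNACCEPTABLE,
    "agent_performance_scoring": RiskTier.HIGH,
    "shift_and_task_allocation": RiskTier.HIGH,
    "customer_facing_chatbot": RiskTier.LIMITED,
    "spam_filtering": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Look up a use case; default to HIGH so unknowns get reviewed, not ignored."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

for name, tier in USE_CASE_TIERS.items():
    print(f"{name}: {tier.name} ({tier.value})")
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative choice: it forces a human review rather than silently assuming a system is unregulated.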
What makes this Act globally significant is its extraterritorial scope: it applies not just to EU-based companies but also to non-EU providers and deployers whose AI systems affect people in the EU.
If your contact centre or technology provider serves the EU market, you’re in scope – regardless of where your operations are based.
High-Risk AI in Contact Centres
The use of AI in employment and workforce management is classified as high risk under the Act.
This includes AI used for:
- Recruitment and task allocation
- Monitoring agent behaviour
- Evaluating job performance
- Informing decisions on promotion or termination
In the contact centre context, many AI-based quality assurance (QA) and performance monitoring tools – especially those that assess call content, sentiment, or behavioural traits – could fall under this classification.
Under the Act, providers and deployers of high-risk AI must meet strict requirements around:
- Data Governance (to ensure fairness and accuracy)
- Transparency (clear documentation of how the AI works)
- Human Oversight (ensuring decisions are reviewable and not fully automated)
- Traceability (event logging and auditability)
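On the traceability requirement in particular, the sketch below shows the kind of event record a QA platform might log for every AI-influenced decision. The schema and field names are illustrative assumptions, not a format prescribed by the Act:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionEvent:
    """One auditable record per AI-influenced decision (illustrative schema)."""
    system: str              # which AI system produced the output
    model_version: str       # exact version, so results can be reproduced
    subject_id: str          # pseudonymous agent/interaction identifier
    inputs_summary: str      # what data the model saw (not raw content)
    output: str              # the recommendation or score produced
    human_reviewer: str | None = None   # filled in when a person reviews it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

event = AIDecisionEvent(
    system="qa-scoring",
    model_version="2.3.1",
    subject_id="agent-0042",
    inputs_summary="call transcript + adherence checklist",
    output="coaching recommended: call-control techniques",
)
print(json.dumps(asdict(event), indent=2))  # append to a tamper-evident log
```

Capturing the model version and a summary of the inputs is what later makes a decision reproducible – and therefore contestable.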
This is not just a checklist – it’s a fundamental shift in how AI systems are designed, tested, and implemented in professional environments.
Inferring Emotions in Workplace Environments Is Banned
One of the most discussed elements of the AI Act for our industry is the prohibition of emotion recognition in the workplace (Chapter II, Article 5).
Unless it’s used for medical or safety purposes, inferring emotions from voice, facial expressions, or behaviour in workplace environments (including contact centres) is banned.
This has major implications for tools that promise to detect:
- Agent frustration
- Customer satisfaction
- Empathy scores
- Tone of voice assessments
These types of features, often integrated into QA systems or real-time agent assistance tools, must be re-evaluated to ensure they don’t cross regulatory lines.
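As a first pass at that re-evaluation, a team could audit its enabled vendor features against emotion-related capabilities. The sketch below uses hypothetical feature names and a crude keyword match – it flags candidates for legal review; it does not decide legality:

```python
# Illustrative audit: surface features that may fall under the Article 5
# workplace emotion-recognition prohibition. Feature names are hypothetical.
PROHIBITED_KEYWORDS = {"emotion", "sentiment", "frustration", "empathy", "mood"}

enabled_features = [
    "agent_frustration_alerts",   # infers agent emotion -> likely in scope
    "empathy_score",              # infers agent emotion -> likely in scope
    "keyword_compliance_check",   # content-based, not emotion inference
    "talk_over_ratio",            # behavioural metric, not emotion inference
]

def flag_for_review(features: list[str]) -> list[str]:
    """Return features whose names suggest emotion inference about staff."""
    return [f for f in features
            if any(k in f.lower() for k in PROHIBITED_KEYWORDS)]

for feature in flag_for_review(enabled_features):
    print(f"REVIEW WITH COUNSEL: {feature}")
```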
High-Risk AI Systems Must Be Understood by the User
High-risk AI systems must be explainable too – meaning every insight or recommendation that could influence someone’s job must be understood by the user and open to human review.
This is a direct challenge to black-box AI tools. If your system recommends coaching, flags a performance issue, or suggests task reassignment, you need to:
- Document how that conclusion was reached
- Ensure the data inputs are accurate and appropriate
- Allow managers to override or contest recommendations
- Communicate clearly to employees when and how AI is used
With these principles in mind, solutions must be designed to be:
- Traceable (with event logs and contextual detail)
- Coachable (forming part of a transparent development journey)
- Non-punitive (supporting performance growth, not discipline)
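Put together, those properties suggest a recommendation object that carries its own rationale and evidence, and that a manager can override with a recorded reason. The sketch below is illustrative; the class and field names are assumptions, not a vendor API:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An AI output a manager can accept, override, or contest (illustrative)."""
    agent_id: str
    suggestion: str
    rationale: str            # plain-language explanation shown to the user
    evidence: list[str]       # the specific interactions the model relied on
    status: str = "pending"   # pending -> accepted | overridden
    override_reason: str = ""

    def override(self, reason: str) -> None:
        """A human reverses the AI suggestion; the reason is kept for audit."""
        self.status = "overridden"
        self.override_reason = reason

rec = Recommendation(
    agent_id="agent-0042",
    suggestion="schedule coaching on hold-time handling",
    rationale="hold time exceeded team median on 6 of last 20 calls",
    evidence=["call-1184", "call-1201", "call-1222"],
)
rec.override("Agent covered a short-staffed queue this week; metric not comparable.")
print(rec.status, "-", rec.override_reason)
```

Recording the override reason does double duty: it completes the audit trail, and it keeps the output feeding coaching conversations rather than discipline.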
What If a Tool Uses LLMs?
Many QA and agent-assist tools today are built using general-purpose AI models – such as large language models (LLMs) trained on vast datasets.
The AI Act introduces new obligations for general-purpose AI (GPAI) providers, especially when models present systemic risk – a designation tied in part to the compute used in training, with models trained above 10^25 floating-point operations presumed to qualify.
While most contact centre platforms are downstream users of these models, businesses should still:
- Confirm their provider has published training summaries and complies with the Copyright Directive
- Verify technical documentation is available
- Ensure the AI model has been tested for safety and bias where applicable
If you’re embedding LLMs into your contact centre workflows, you must understand how that model behaves – and whether it might inadvertently expose you to high-risk or prohibited use.
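One lightweight way to stay on top of that due diligence is to track it per model dependency. The record below is a sketch with illustrative field names and a hypothetical vendor, not a compliance template:

```python
from dataclasses import dataclass, fields

@dataclass
class GPAIDueDiligence:
    """Per-model due-diligence record (illustrative fields)."""
    provider: str
    training_data_summary_published: bool   # public summary of training content
    copyright_policy_confirmed: bool        # complies with the Copyright Directive
    technical_documentation_received: bool  # model card / system documentation
    safety_and_bias_testing_evidenced: bool # test results shared where applicable

    def gaps(self) -> list[str]:
        """List unmet items so procurement can chase them before go-live."""
        return [f.name for f in fields(self)
                if isinstance(getattr(self, f.name), bool)
                and not getattr(self, f.name)]

check = GPAIDueDiligence(
    provider="ExampleModelCo",   # hypothetical vendor
    training_data_summary_published=True,
    copyright_policy_confirmed=True,
    technical_documentation_received=False,
    safety_and_bias_testing_evidenced=False,
)
print("Outstanding before deployment:", check.gaps())
```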
What Contact Centres Should Do Now
Whether you’re a BPO, an internal service centre, or a CX tech provider, the AI Act means it’s time to tighten up:
Evaluate Your Use of AI
Start by classifying your AI systems using the risk framework.
Are you using AI to evaluate people, make decisions, or automate processes?
Work With Compliant Providers
Choose vendors who can provide technical documentation, risk assessments, and human-in-the-loop options.
Communicate Transparently With Employees
Let your workforce know how AI is being used, especially if it affects performance reviews, training, or task assignments.
Rethink Emotion Analytics
Avoid or reassess tools that infer emotions unless they’re clearly out of scope or used only for non-decision-making support.
The AI Act Is a Wake-Up Call for CX Leaders
The AI Act is a wake-up call – not just for compliance officers, but for CX leaders, contact centre managers, and technology providers.
AI is no longer the Wild West! If it shapes how we assess, manage, or motivate people, it must do so ethically, transparently, and within a framework of trust.
Written by: Charlie Adams, Director of Customer Experience & Success at Custom Connect