Lauren Maschio at NICE explains how leading organisations are embracing AI to improve agent performance and boost CSAT.
AI is in the news more than ever, thanks to ChatGPT and generative AI. Businesses across all industries are making, or have already made, plans to invest strategically in AI.
In the contact centre space, 99% of companies recently surveyed by NICE say they plan to invest in AI-driven quality management analytics.
This striking figure reflects the fact that organisations are aware their current methods of assessing agent performance are sub-par.
More crucially, some of these outdated sampling practices have led to misinformed decision-making. Other studies agree: according to Aberdeen, 75% of executives want to make better use of their interaction data by applying AI.
Let’s take a closer look at how AI can improve how businesses gather and use data. Many contact centres rely on random sampling of interactions to evaluate the performance of their agents and gain insights from customer interactions.
They use these samples to help identify areas of improvement for agents, ensure that agents are adhering to call scripts or regulatory requirements, and identify common issues that can inform the development of training materials and targeted coaching. In some cases, these results even impact agent compensation.
Random sampling is not without its challenges, however. Chief among them is the problem of inadequate or unrepresentative sampling.
NICE commissioned a survey of 400 senior decision-makers (supervisors, managers, directors, and VPs working in customer care, customer service, or contact centre departments with at least 200 agents, across all industries in the U.S. and the U.K.) to better understand the relationship between agent soft skills, customer satisfaction, and the potential of artificial intelligence (AI) to revolutionise how we evaluate agent performance.
One of the key focuses of the survey was the sampling practices of the contact centres, as well as their perceptions of how AI could improve those practices and support CX goals and outcomes. Here’s what we learned.
Sampling is Inadequate: Contact Centres Rely on Skewed or Random Data to Make Critical Decisions
Contact centres may not sample every interaction, but they often implement strategies to ensure a representative and meaningful sample.
This can include random sampling, stratified sampling based on interaction types or customer segments, or sampling that’s targeted for another specific evaluation purpose.
The goal is to strike a balance between resource constraints, operational efficiency, and the ability to gain reliable insights that can be used to drive continuous improvement in customer service.
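To make the idea concrete, stratified sampling can be sketched in a few lines. This is a minimal, hypothetical illustration (the record format, the channel names, and the 5% sampling rate are assumptions for the example, not NICE's methodology):

```python
import random
from collections import defaultdict

# Hypothetical interaction log: (interaction_id, channel) pairs.
gen = random.Random(0)
interactions = [(i, gen.choice(["voice", "chat", "email"])) for i in range(1000)]

def stratified_sample(records, key, rate=0.05, seed=42):
    """Sample the same fraction from each stratum (e.g. each channel)."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for rec in records:
        strata[key(rec)].append(rec)
    sample = []
    for group in strata.values():
        k = max(1, round(len(group) * rate))  # at least one per stratum
        sample.extend(rng.sample(group, k))
    return sample

picked = stratified_sample(interactions, key=lambda rec: rec[1])
# Every channel is guaranteed representation, unlike a tiny purely random draw.
```

The key design point is that each stratum contributes in proportion to its volume, so a low-traffic channel can never be silently dropped from the evaluation pool.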
In reality, however, sampling performed in most contact centres is far from representative—it encompasses a very small percentage of the overall interactions that are typically handled each month.
According to our survey, the average contact centre measures just 14 voice and digital interactions each month, and more than a quarter of them currently measure fewer than 10 interactions each month.
Given that all of the respondents work for contact centres with at least 200 agents, this is a statistically insignificant sample size and not representative of agent performance.
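To put that in perspective, a back-of-the-envelope calculation using the standard normal-approximation margin of error for a proportion shows how wide the uncertainty is at such sample sizes (the calculation is illustrative and not drawn from the survey):

```python
import math

def moe_95(n, p=0.5):
    """95% margin of error for a proportion estimated from n evaluations."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# 14 evaluated interactions leave roughly a +/-26-point margin of error;
# it takes about 385 to get within +/-5 points.
print(f"n=14:  +/-{moe_95(14):.0%}")
print(f"n=385: +/-{moe_95(385):.0%}")
```

In other words, a score built on 14 evaluations could swing by a quarter of the scale either way purely by chance, which is why agents can reasonably dispute conclusions drawn from it.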
In addition, nearly two-thirds of the contact centre leaders we surveyed choose samples based on post-interaction customer satisfaction surveys, which are known to attract either highly satisfied or highly dissatisfied customers, further skewing the sampling process.
CSAT surveys also tend to have a relatively low response rate, representing a small sample of customers.
Other methods of selecting interactions for evaluations include:
- Targeted based on speech analytics categories (55%)
- An automatically selected random sample (51%)
- Targeted based on specific data points (48%)
- Targeted based on desktop analytics categories (42%)
- Manually selected random samples (30%)
Despite the lack of a statistically significant or holistic view, 85% of stakeholders use this data to make critical business decisions.
Teams Don’t Trust the Process: Agents Dispute Performance Feedback Due to Unrepresentative Samples
The goal of any quality management program is to assess agent performance and provide feedback, but programs that rely on evaluators listening to a small random sample of calls and interpreting the results are inherently biased.
This erodes confidence in the process. Left feeling that their evaluations are unfair, agents are often resistant to the feedback provided.
In fact, 41% of contact centre leaders say one of their top challenges in quality management is that agents don’t buy into their current feedback.
Other top quality management challenges, according to our survey, are that evaluators are using a small sample size that is not representative of overall agent performance (38%) and that random sampling is not representative of agent performance (38%).
When feedback is inconsistent and the sample size is too small, it’s no surprise that agents will not want to accept the results and therefore won’t buy into the program.
A Path Forward
The survey results clearly illustrate that stakeholders are struggling to improve quality management. AI can solve this problem by analysing 100% of interactions to improve operational efficiency and deliver more positive experiences.

This blog post has been re-published by kind permission of NICE – View the Original Article
For more information about NICE - visit the NICE Website
Call Centre Helper is not responsible for the content of these guest blog posts. The opinions expressed in this article are those of the author, and do not necessarily reflect those of Call Centre Helper.