Manual QA in a Conversation Intelligence Program

Filed under - Industry Insights,

Megan Keup at CallMiner explores manual QA in a conversation intelligence program.

Traditional Quality Assurance Has Changed

In recent years, quality assurance (QA) has evolved with the increased sophistication of conversation intelligence technology.

In the past, and even today in organizations without this technology, quality teams would listen to an interaction or read its transcript and manually score the agent on the 10, 20 or even 30 metrics that make up a quality score.

Traditional manual QA can be costly, time-consuming and inaccurate because most organizations can only realistically evaluate 3 to 10 interactions per agent a month, leaving the majority of conversations unanalyzed.

This makes it nearly impossible to spot trends in performance and to coach agents properly, because supervisors don’t have a large enough sample size.
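To put that sample size in perspective, here is a rough back-of-the-envelope coverage calculation. The monthly call volume is an assumption chosen for illustration, not a figure from this article:

```python
# Rough illustration of manual QA coverage under typical sampling.
# The calls-per-agent figure is an assumed example volume; the
# reviews figure is the upper end of the 3-10 range cited above.

calls_per_agent_per_month = 400   # assumed average handled volume
reviews_per_agent_per_month = 10  # manual evaluations per agent

coverage = reviews_per_agent_per_month / calls_per_agent_per_month
print(f"Manual QA coverage: {coverage:.1%}")  # prints "Manual QA coverage: 2.5%"
```

Even at the generous end of the range, well over 95% of an agent's conversations would go unreviewed, which is why trends are so hard to spot with manual sampling alone.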

Why Automate QA?

Inconsistent contact monitoring as a result of a 100% manual process can have downstream customer service impacts and cause immediate issues in the contact centre, such as agent dissatisfaction when agents don't get the coaching they need. Outside the contact centre, poor customer service can impact revenue and sales performance.

Automating QA allows organizations to analyze and score up to 100% of conversations. Increasing the amount of automation an organization uses in its QA program means supervisors can more quickly identify specific language within the transcript, such as a proper greeting, script compliance and closing language.
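The kind of language detection described above can be sketched as a simple phrase-matching check against a scorecard. The scorecard items and phrase lists below are illustrative assumptions, not CallMiner's actual categories or API:

```python
# A minimal sketch of automated scorecard checks on a call transcript.
# Scorecard items and phrases are invented examples; a real conversation
# intelligence platform uses far richer language and acoustic models.

SCORECARD = {
    "proper_greeting": ["thank you for calling", "how can i help"],
    "closing_language": ["anything else", "have a great day"],
}

def score_transcript(transcript: str) -> dict:
    """Mark each scorecard item as passed if any of its phrases appears."""
    text = transcript.lower()
    return {item: any(phrase in text for phrase in phrases)
            for item, phrases in SCORECARD.items()}

call = ("Thank you for calling Acme, how can I help you today? ... "
        "Is there anything else I can do? Have a great day!")
print(score_transcript(call))
# {'proper_greeting': True, 'closing_language': True}
```

Because a check like this runs on every transcript, each automated item gives supervisors a consistent baseline across 100% of conversations rather than a hand-picked sample.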

Supervisors can get a baseline on performance levels and use that information to focus on trends across their team to provide coaching at scale.

In addition, knowing what agents need to be coached on can save supervisors valuable time. State Collections, for example, has saved its quality management team upwards of 4,000 hours per year by using CallMiner.

Automated QA Isn’t an On or Off Switch

Automation doesn’t have to be an all-or-nothing approach. Teams will start to see considerable benefits in terms of cost and time savings when just one element in their scorecard is automated.

From there, teams can continue to automate more of their questions and decide how much of the evaluation should remain subjective.

Automating a scorecard isn't only about reducing human effort; it also frees supervisors to spend time on more important work, like coaching at scale.

Even with a fully automated QA program, many quality teams continue to review a small subset of interactions manually.

For example, they might manually review a subset of calls to identify and investigate a product recall and minimize its impact. Organizations new to conversation intelligence may also need to continue reviewing manually until they complete onboarding and training.

Quality management is a journey that starts with the manual scorecard. Organizations can easily transition to automated QA while maintaining manual QA processes in the short term.

They can see the value immediately, such as decreasing the time it takes supervisors to coach an agent when a performance incident occurs.

While analysts work to build automated scores in Analyze, the quality team can input their manual scorecard into our Coach product. Teams can leverage the workflows in Coach to continue to score manually.

Quality teams will see a number of benefits from this approach, including:

  • The ability to quickly find interactions to review that meet their parameters
  • A streamlined dispute process when an interaction is incorrectly monitored
  • Greater insight into what is being coached and why

Deliver Trusted Results

Organizations can ease the transition from manual to automated QA and start seeing value right away by inputting their manual scorecards. Here are a few results that our customers have seen since deploying CallMiner:

  • The Unlimited was able to reduce QA costs by 40% while increasing coverage across all channels
  • Gant Travel increased call monitoring and coaching to 100% of interactions and increased agent feedback by 400%
  • Qualfon achieved over 95% accuracy on agent scorecards and increased their close rate by nearly 33%
  • DoublePositive reduced training costs by 90%

In addition to these benefits inside the contact centre, teams can also see benefits outside the contact centre with organization-wide impacts. For example, Sitel Group was able to improve NPS by 5% and increase sentiment scores by 9.8% in two months.

Kurt Mosher, COO and Executive Vice President at Gant Travel said: “We knew that CallMiner was going to be a gamechanger for us. It has given us the visibility into our call drivers, allowing us to understand why customers call in the first place.

It enables us to monitor 100% of our calls and provide feedback in near real time. It has also helped us improve training and productivity. Most importantly, it helps us every day to achieve our number one mission, which is to become our customers’ ‘last best experience’.”

This blog post has been re-published by kind permission of CallMiner – View the Original Article

For more information about CallMiner - visit the CallMiner Website

About CallMiner

CallMiner is the leading cloud-based customer interaction analytics solution for extracting business intelligence and improving agent performance across all contact channels.

Find out more about CallMiner

Call Centre Helper is not responsible for the content of these guest blog posts. The opinions expressed in this article are those of the author, and do not necessarily reflect those of Call Centre Helper.

Author: CallMiner

Published On: 17th Feb 2023
