Are You Ready to Embrace the New Dawn for Quality Assurance?

The nature of Quality Assurance (QA) is changing, fast! For decades, it was a manual, retrospective exercise: listening to a few calls, filling in a spreadsheet, ticking boxes on compliance or courtesy, and filing the results for review later.

Today, contact centres are moving towards something much smarter – a world of real-time intelligence, AI-driven scoring, and dynamic measurement frameworks that evolve as quickly as customer expectations.

But are we ready to embrace all that it can offer? Perhaps not! Our 2025 Report: What Contact Centres Are Doing Right Now reveals a profession in transition – ambitious, but not yet fully equipped for what comes next.

The Appetite for Real-Time QA Is Impossible to Ignore

A new entry in this year’s report paints a clear picture: 44.3% of contact centre professionals say that switching to Auto or Real-Time QA is the number one change they’d like to make within their quality monitoring programme.

It’s a striking figure – and one that signals how ready the industry is to move beyond traditional sampling and post-event analysis.

“Quality Assurance has always been the cornerstone for improving customer experience and creating a culture of continuous improvement, but traditionally it has always felt a little like reviewing the black box after the flight has landed.

Now that AI-driven real-time sentiment analysis and behaviour monitoring is more accessible, it’s not surprising that Auto or Real-Time QA is the most desired change.

This is an opportunity to shift from reactive analysis to proactive action – to gain real-time insights and course-correct during the customer interaction itself.” – Adam Boelke, Founder of the Alignment Advantage Group

This Change Is More Than a Technology Upgrade – It’s a Mindset Shift

This change is more than a technology upgrade – it’s a mindset shift. Real-time QA transforms the QA function from a reactive reporting mechanism into a live feedback engine, helping teams influence outcomes in the moment.

Imagine being able to detect customer frustration as it happens, not a week later in a calibration session. Or spotting coaching opportunities instantly, not after a monthly review. It’s really exciting!
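For the technically minded, here is a deliberately simplified sketch of that idea – a toy script that flags possible frustration as transcript segments arrive, rather than a week later. The frustration cues, threshold, and alert_supervisor helper are all hypothetical stand-ins; real solutions would use a trained sentiment model and the contact centre platform's streaming API rather than keyword matching.

```python
# Illustrative sketch only: a toy "real-time" frustration check on streamed
# transcript segments. The cue list, threshold and alert_supervisor() helper
# are hypothetical stand-ins for an AI sentiment model and a live dashboard.

FRUSTRATION_CUES = {"frustrated", "ridiculous", "cancel", "complaint", "third time"}

def frustration_score(segment: str) -> int:
    """Count frustration cues in one transcript segment (toy heuristic)."""
    text = segment.lower()
    return sum(1 for cue in FRUSTRATION_CUES if cue in text)

def alert_supervisor(call_id: str, segment: str) -> None:
    """Hypothetical hook: in practice this would push a live alert to a supervisor dashboard."""
    print(f"[ALERT] Call {call_id}: possible frustration -> {segment!r}")

def monitor_call(call_id: str, segments) -> None:
    """Check each segment as it arrives, not a week later in a calibration session."""
    for segment in segments:
        if frustration_score(segment) >= 2:
            alert_supervisor(call_id, segment)

# Example usage with a simulated live transcript stream
monitor_call("CALL-001", [
    "Hi, I'd like to check my order status.",
    "This is the third time I've called - it's ridiculous, I want to cancel.",
])
```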

For Boelke, the message is clear:

“Even without the real-time component, using speech-to-text to evaluate 100% of interactions (versus the small sample a QA agent could listen to) significantly improves the balanced insights needed to develop agents and improve customer connection.”

In other words, the technology now exists to replace manual sampling with complete visibility, freeing QA professionals to focus on analysis, calibration, and coaching rather than data entry.
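As a rough illustration of what "complete visibility" can mean in practice, the hypothetical sketch below scores every transcript against a simple scorecard instead of a manual sample. The criteria, keyword checks, and transcripts are invented for illustration; a production system would run speech-to-text output through an AI scoring model aligned to the organisation's real QA scorecard.

```python
# Illustrative sketch only: scoring 100% of transcripts against a simple
# scorecard instead of manually sampling a handful. The criteria and the
# keyword checks are hypothetical placeholders for an AI scoring model.

SCORECARD = {
    "greeting": lambda t: "thank you for calling" in t,
    "identity_check": lambda t: "confirm your" in t,
    "next_steps": lambda t: "follow up" in t or "next step" in t,
}

def score_transcript(transcript: str) -> dict:
    """Evaluate one transcript against every scorecard criterion."""
    text = transcript.lower()
    return {criterion: check(text) for criterion, check in SCORECARD.items()}

# With automation, every interaction is scored, not a small sample.
transcripts = {
    "CALL-001": "Thank you for calling. Can I confirm your postcode? I'll follow up tomorrow.",
    "CALL-002": "Hello. Your account is updated now, goodbye.",
}

for call_id, transcript in transcripts.items():
    results = score_transcript(transcript)
    passed = sum(results.values())
    print(f"{call_id}: {passed}/{len(results)} criteria met -> {results}")
```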

Yet Many Teams Are Still Stuck With Outdated Tools

Despite this appetite for innovation, many teams are still stuck with outdated tools.

So, although it’s encouraging to see that reliance on spreadsheets has dropped from 65.9% to 55.2%, over half of contact centres still depend on manual systems for core QA processes.

And spreadsheets just can’t scale to handle thousands of customer interactions, nor can they integrate with real-time data or AI scoring models. They leave QA teams spending more time compiling numbers than interpreting them.

That’s frustrating when the real opportunity lies in shifting QA from a “tick-box” function to one that generates actionable insight. After all, leaders want the detail necessary to develop agents effectively and improve their performance.

Scorecards Must Evolve Alongside Customers, Advisors, and Channels

Even as the tools improve, many organisations are measuring the wrong things, too slowly.

The research shows that 53.8% of contact centres last changed the questions on their QA scorecard within the past year, while 15% haven’t changed theirs in more than five years – and 4.7% have never changed them at all.

That’s a huge risk in an environment where customer expectations can shift in a matter of months, as illuminated in this vivid comparison:

“Changing a quality scorecard is a bit like changing your bedsheets. Everyone agrees it should happen regularly, most people do it eventually, and a brave few will even admit they’ve left it far too long – ew!” – Dara Kiernan, Leadership Development and Contact Centre Consultant

To align with best practice, scorecards must evolve alongside customers, advisors, and channels. Treating them as static annual documents simply doesn’t reflect the pace of modern CX!

After all, customer patience isn’t measured in days any more; it’s measured in swipes. If the experience doesn’t meet expectations instantly, customers swipe away into a competitor’s arms!

And whilst updating scorecards annually might have been acceptable a few years ago, the rise of digital immediacy means that quality frameworks need to be continuously refreshed.

The leading 26.4% who updated theirs in the past month – and the 4.7% who did so in the past week – are treating QA as a living, breathing system, not a laminated checklist.

The payoff? Faster adaptation, stronger alignment, and scorecards that reflect what truly matters now – not what mattered last financial year!

Budget, Integration, and Skills Are Holding Back AI Adoption

The benefits of AI are clear for all to see – with the top-rated use cases including AI-driven scorecards, speech and sentiment analysis, and flagging critical interactions for review.

Yet, despite this enthusiasm, 40% of contact centres still aren’t using AI in QA at all.

Why? The barriers come down to three main issues: budget, integration, and skills.

Budget

Budget remains a perennial concern, especially when AI investment can seem speculative. But the best advice is simply to start small!

“Think about the key use cases that will almost guarantee you some return on your investment. Auto QA is a very attractive one – but it requires work upfront to align scorecards and ensure transcript accuracy.

Smaller use cases like auto-summarization are easier to deploy and showcase ROI quickly in the form of reduced handling times and wrap-up effort.” – Garry Gormley, Founder of FAB Solutions

Integration

Integration is another hurdle, particularly for centres with legacy systems or on-premises infrastructure, and this is where due diligence is of the utmost importance.

That means making integration part of the scoping questions for any supplier, being clear about how data will flow – whether through middleware, APIs, or data lakes – and always keeping data security at the forefront.

Skills

Finally, there’s the skills gap. Many QA and operations teams simply don’t yet have the expertise to manage AI solutions internally, but Gormley suggests that a hybrid approach can help here:

“If you don’t have the skills and don’t have time to upskill internally, then outsource to those who can help. Find a recommended supplier who has experience, and assign internal champions to work alongside them and learn the ropes. We all have to start somewhere.”

In short, the ambition is there – but the operational maturity isn’t. And closing that gap will require investment in both people and process, not just technology.

Author: Megan Jones
Reviewed by: Xander Freeman
