Podcast: How to Extract More Value From Your Contact Centre Quality Programme

The Contact Centre Podcast: Episode Eight

In this episode, fellow contact centre podcaster Martin Teasdale discusses the value of having a well-structured quality programme and shares his advice for helping you get there.

Martin Teasdale

As part of our discussion, we also talk about using quality to increase advisor motivation and maximizing the value of scorecards, as well as sharing tips to improve your quality calibration sessions.

To listen to the podcast directly from this web page, just hit the play button below:

The Contact Centre Podcast – Episode 8:

How to Extract More Value From Your Contact Centre Quality Programme

This podcast was made possible by our sponsor, Genesys. The demo-request link mentioned in the podcast has since been replaced with a new link to their website.

So, to find out more about Genesys, simply visit their website.

Podcast Time Stamps

  • 2:08 – The Benefits of Good QA
  • 5:27 – Improving Motivation with Scorecards
  • 8:45 – Testing Your Scorecard
  • 14:09 – Running Calibration Sessions
  • 19:06 – Overcoming Grey Areas
  • 24:53 – Avoiding Common QA Mistakes

Here Is a Transcript of the Podcast

Charlie Mitchell: Some contact centres just use quality as a means of measuring performance, but what benefits, do you think, can good QA bring to the contact centre?

Martin Teasdale: There are so many. I think we use a phrase called the sphere of influence. There isn’t a better-placed function within the contact centre to have a real positive influence across so many other functions, whether that is training, product, or the more standard and traditional ones that you’d expect QA to interact with, like the compliance and regulatory element.

But if you can get your QA right, you have the ability to evaluate and provide actionable insight into how the business is performing, the challenges customers are facing, and the opportunities to identify and enhance processes. You’ve also got, like I say, the more traditional mitigation of regulatory and business risk, and identification of core contact drivers. That sort of vocab is more and more popular these days: what are the customer contact drivers, what are the customer outcomes? And of course, the other thing is around employee performance. More enlightened organizations are recognizing and realizing the potential of marrying the right QA team with the right technology and processes, and the benefits are massive, from customer experience to employee engagement.

We all like being told when we’re doing something right; this isn’t necessarily always about areas for improvement. QA can catch people doing things right, highlighting and celebrating best practice and bringing the QA team to the fore to enhance employee engagement. I think there’s nothing more dangerous in any business, but specifically in a contact centre, than a consequence-free environment. So, for those people that are doing a good job day in, day out, contact after contact, QA is your vehicle to really praise them and give them great motivation.

So, good QA within the contact centre world can be a superpower, and it’s probably underutilized at the moment. One of the things that we’ll see is people will go, right, how can we really see how a service is being received by our customers? Let’s engage with maybe an external company, or look at a way of capturing customer satisfaction metrics. All of which have their place. But often they will walk straight past the team that has more experience dealing with those contacts day in, day out, both from the frontline team member’s point of view and from the customer’s point of view: how that service is being received. So, if you can really tap into that, you can make a huge difference through getting QA right.

Charlie: Absolutely. One particularly interesting point that you made there was that quality can be a great motivator for advisors. How can we use our quality scorecards to increase contact centre motivation?

Martin: Using QA through getting a scorecard really well set up is a great motivator because you can catch people doing something right at the right time and then use that to share best practice, to help train new starters in being able to deal with certain parts of the customer journey. So, in that sense, it’s invaluable.

Charlie: You were talking earlier about the importance of using the scorecard. How do you think we can ensure that quality scorecards are measuring what’s most important to our customers?

Martin: It’s a good question. I think the key part of that is what processes you’ve developed to identify what is important to your customers. So, are customers involved in identifying the key measures of the performance being evaluated? Because I think it’s fair to say that a large percentage of scorecards have been developed organically over time. Maybe they were given to somebody way back as a task to develop and, with the right intentions, and right for the moment, they built a scorecard.

Those are often added to with new services or maybe new regulations; another line item is added, but very rarely are they reviewed holistically at that time. They grow arms and legs without, maybe, looking at how these still align with what we are trying to achieve for our customers. Does the scorecard drive performance that’s in line with our company values and strategy? What is it telling our team to do, holistically? So, having some kind of regular cadence of reviews is critically important because, when you think about it, scorecards and measurement systems have been developed over time.

How often does that involve factoring in what customers perceive good quality to look like? Because the danger is that we all fixate on the output: what’s the score? Oh great, it’s over whatever line we’ve set as the target, so everything must be good. But maybe we’re not delivering against what customers want from us at that particular time. So, we would always suggest you factor in a view from the customer, whether that’s through customer satisfaction metrics, customer focus groups or utilizing social media to input into the scorecard development. In terms of, what are the key things you expect to receive during your interaction with the business, and how can the scorecard consistently help people deliver that?

Charlie: Yeah, I think it’s interesting when you bring in C-SAT as well. One technique that I’ve heard for improving scorecard effectiveness is to plot the C-SAT of a contact against the quality score of that contact on a graph and look for a correlation. That will help you to tell whether you’re measuring what’s important to your customers or not. Because, as you say, a lot of these scorecards were great a long time ago, and I’ve even heard of call centres using the same scorecards on different channels. So, that’s an issue that people don’t foresee. Have you ever seen a contact centre doing that?

Martin: Well, we’ve seen that for sure. We also see people questioning: our C-SAT tells us one thing, but our internal quality score tells us another, so why is there this difference? And then you actually look at who’s being asked, when and, critically, what they are being asked. If there’s no correlation between the line items in your scorecard and the questions the customer is receiving at a certain point in their journey, it’s actually going to be more surprising if there is a correlation than if there isn’t. So, this kind of aligning of scorecards with customer satisfaction metrics, we would suggest, is definitely best practice.
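As a rough illustration of the correlation check Charlie describes, here is a minimal sketch in Python. The scores are made-up sample data, not figures from the podcast; the idea is simply that a weak correlation between internal quality scores and C-SAT is a prompt to review what the scorecard measures.

```python
import numpy as np

# One (quality score, C-SAT) pair per evaluated contact, both as percentages.
# These values are hypothetical sample data.
quality_scores = np.array([92, 85, 78, 95, 60, 88, 70, 99, 81, 74])
csat_scores = np.array([90, 80, 65, 97, 55, 85, 72, 95, 78, 70])

# Pearson's r: close to +1 suggests the scorecard tracks what customers value;
# close to 0 is the misalignment Martin describes.
r = np.corrcoef(quality_scores, csat_scores)[0, 1]
print(f"Correlation between quality score and C-SAT: r = {r:.2f}")
```

Plotting the same pairs on a scatter graph, as Charlie suggests, also makes visible the outliers that a single coefficient hides.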

Charlie: Yeah. I think another thing that many contact centres struggle with, beyond aligning satisfaction metrics and quality scores and getting that balance right, is finding the right contacts to pick for quality assessment. Do you have any tips to help do this?

Martin: Yeah, it’s a common question, a common challenge. I think flexibility is key. So, regardless of your set-up, whether it’s humans or a combination of humans with technology, you know you have a set number of contacts that you’re able to monitor, and then you determine how you use that in terms of what are the most valuable contacts. Even that in itself is an interesting question that I think people need to address: what constitutes a valuable contact? You can use technology or human intervention to identify trends, drivers, red flags, compliance risks, etc. But it’s also really important to identify and promote positive contacts and behaviours. What I mean by that is, with your resource, you know you can monitor a set amount. And if you are just repeating that over and over again, and that resource is purely dedicated to monitoring a set number of contacts against a set number of frontline team members per month, whether it’s four per team member per month, and you just repeat that ad infinitum without any variety, question whether you are really getting the most out of your QA function.

Because some of that monitoring capacity, taken away from the more standard set number per agent per month and used to look at some outliers or follow some kind of risk-based verification, may provide really valuable information with which you can then make some really significant improvements. So, from a tactical point of view, I’d always recommend you keep your standard monitoring that is doing everything you want it to do, from mitigating risk to driving coaching and performance, but always try and keep some element aside that you can go away and investigate with. So, maybe you’re thinking of changing your service or launching a new service; use those contacts to go and figure out how it might be perceived. The other benefit to this, of course, is it energizes your QA team, because they feel like they can really make a difference when you say, right, 10 to 20% of the monitoring this month we’re going to use to do some real tactical stuff.

Maybe we’re going to focus on new starter populations and, based on what we find, add that into not only our induction but possibly even our recruitment process. So, you’ll be ironing out some of the common kinks that people always go through when they’re new, but also making some really significant change across the contact centre. That’s going to have a long-lasting impact, and it comes purely from starting to think about how you use your resource, over and above monitoring to produce a score: to do some really significant coaching, but also to identify things that are going to benefit your customers as well.
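To make the resourcing idea concrete, here is a minimal sketch of the 10–20% tactical reserve Martin suggests. All of the numbers (team size, quotas, total capacity) are illustrative assumptions, not figures from the podcast.

```python
# Illustrative capacity figures -- not from the podcast.
TOTAL_EVALUATIONS_PER_MONTH = 400  # what the QA team can realistically score
STANDARD_PER_ADVISOR = 4           # routine quota per frontline advisor
TACTICAL_RESERVE = 0.15            # the 10-20% held back for targeted deep dives

advisors = 85
standard_needed = advisors * STANDARD_PER_ADVISOR                      # 340
tactical_budget = int(TOTAL_EVALUATIONS_PER_MONTH * TACTICAL_RESERVE)  # 60

# The reserve only works if routine monitoring leaves room for it.
assert standard_needed + tactical_budget <= TOTAL_EVALUATIONS_PER_MONTH
print(f"Routine evaluations: {standard_needed}, tactical reserve: {tactical_budget}")
```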

Charlie: Yeah. I think it’s interesting that you’ve talked a bit about the experience of the quality analyst there, because we often just focus on the advisor in these things. But hearing how we can mix things up for the quality analysts, to make sure they’re fully engaged and ready to do their job at their best level, is an interesting angle that we don’t really talk much about. And one of the things that we do like to involve analysts in is quality calibration sessions. Do you have any tips for running good quality calibration sessions?

Martin: First of all, if you don’t have calibrations as part of your QA framework, the addition of them can be really positively transformative for so many reasons, one of which you just mentioned: the benefit for the QA team itself. First and foremost, I think there’s the distinction between a joint listening session and a calibration session.

So, both have merits, but a calibration session would be one that we would suggest you define from the outset and document. So, you’re following a defined and documented process, and you are practising exam conditions. Participants blind score prior to the session itself, so there’s no conferring or, critically, no chance to be influenced, because if you are all just being influenced and following the view of the most dominant person, what’s the point of doing the session? So, that’s a really key one. Invite different stakeholders to routinely calibrate as well. And if you can, we’d encourage that to include senior leaders, at least on a quarterly basis.

You’ve also got to consider your contact selection. You need to include a mix of contact types, lengths and complexity for the broadest possible view of your world. Also, don’t be scared of the more contentious or grey areas. Calibrations are a really good way, as your think tank, to give people a steer and to throw into the mix areas where you’ve got conflicting views. To help you manage that process, it’s important to assign a calibration owner to run the session and for other participants to calibrate against. Someone needs to be the point of truth, and that can move around the group, but it’s best practice to have somebody to say, we’re going to calibrate against this person’s view.

One of the things that you would need to do as well, I think, is implement a defined and tracked escalation process for disputes. So, maybe there are some things that can’t be dealt with within the session, but you want to track them and use them going forward, because calibrations can enable you to gain consensus on the best approach to operational change. So, to come back to one of the points around the QA team, an often overlooked group of experts: when you’re thinking of making a change, you can use a calibration session. It doesn’t necessarily need to be about something that’s already happened; the calibration session could be a good place to discuss and agree on the best approach for future changes.

Both for the QA team itself and perhaps for the wider business, there should be consideration as to whether calibration scores themselves are used as a business or personal KPI, because, from an operational point of view, I would like to know that the QA function is highly calibrated. So, I think if QA teams can really get a good calibration process in place and be brave and transparent, then reporting their own calibration performance as a KPI is a really good way to engender trust in and credibility for the team.

Charlie: And that’s a really interesting point on using calibration as its own KPI. How would you go about measuring that?

Martin: So, it can be as simple as an overall ‘how calibrated are we using the scorecard?’, for example.

Charlie: Oh, okay.

Martin: So, you can take specific parts of the scorecard or you can take specific… Predominantly, in our experience, it’s reported as a percentage, where we say, okay, the team is calibrated at 90-plus percent, the same way that people think about quality scores. Again, the danger is that you just look at the number and don’t look at the data and the detail behind it. But, predominantly, it’s through a percentage.
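As a hedged sketch of what ‘calibrated at 90-plus percent’ might mean in practice, the snippet below compares each participant’s blind scores against the calibration owner’s scores, line item by line item. The scorecard items, names and pass/fail marks are hypothetical; real programmes may weight items or score on a scale.

```python
# The calibration owner's scores are the point of truth (1 = pass, 0 = fail).
owner = {"greeting": 1, "verification": 1, "resolution": 0, "closing": 1}

# Each participant's blind scores, recorded before the session to avoid
# anyone being influenced by the most dominant voice in the room.
participants = {
    "analyst_a": {"greeting": 1, "verification": 1, "resolution": 0, "closing": 1},
    "analyst_b": {"greeting": 1, "verification": 0, "resolution": 0, "closing": 1},
}

for name, scores in participants.items():
    matches = sum(scores[item] == owner[item] for item in owner)
    pct = 100 * matches / len(owner)
    print(f"{name} is calibrated at {pct:.0f}%")
```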

Charlie: Yeah, because we know there’s a lot of subjective criteria on any quality scorecard. So, I think the idea there of also having a leader alongside that, to judge any potential disputes, is a very key factor, specifically over those grey areas that you’ve talked about.

Martin: With grey areas, people will often go, okay, well, where do we land when there are multiple acceptable outcomes? And I think this is where, holistically, you can come back to: who are you trying to be as a company? What are your values, and how do you want this area to be received by your customers? If you question the grey areas against that, more often than not it sways you one way or the other.

Charlie: Do you think that in these sessions as well, maybe advisors should be part of the calibration, so they know what they’re being measured on?

Martin: Absolutely. That applies to all elements of the QA framework. You need to dispel myths; people believe that QA is done in a certain way for certain reasons. By being transparent and involving people… Most contact centres will have different groups and committees, or maybe they can utilize their team leader population, but absolutely, for calibration, you should have some kind of cadence that involves representation from all levels that the QA framework affects or interacts with.

And, absolutely, that should include frontline team members, because, whilst I think it’s very common to say that a lot of QA teams are made up of previous frontline team members, that still doesn’t mean you should always just rely on that fact. I think you always need to go and speak to people who are doing the job day in, day out, bring that into the calibration and allow them, through a very organized structure, to help influence the best possible framework for employees to deliver against the company’s strategy for the benefit of customers. That means knowing all the details and all the data, and a key source of that is the people doing the job every single minute of the day.

Charlie: Yeah. I think it also goes back to your first point of using quality as a motivator, and the only way we can do that is to be as transparent as possible with our advisors. One interesting technique I picked up on a recent site visit to the DAS Contact Centre was that they would not only give advisors their quality scores, they would also send over the full scorecards with a recording. So, advisors could listen back and hear where they’d been marked wrong, and that would help them to actively change their behaviours instead of just seeing the score, which I thought was very interesting. Is there anything else that you would recommend doing once you have the quality data at the end of a quality monitoring session?

Martin: I think the point you made then is a really good one that bears repeating. Even if people don’t have the technology to share the media file that’s being monitored against, best practice would be to timestamp and add as much commentary as possible to any of the learning points or scoring points that you want to share. I think generalist outcomes, or a pass/fail metric, don’t enable people. The key function of any quality framework is to provide rich, salient, insightful data in order to effectively coach people to be better against transparently and collectively agreed scorecards and metrics that match the company values. You can’t do that with a general ‘sorry, you failed’. If we are treating people as professionals, then as professionals they want to be the best they can be at their job. Having QA, and having clear, coachable insights and actionable points from a QA framework: there’s nothing more critical.

Charlie: I like that idea of the coachable points from the items of criteria on the quality monitoring scorecards. Because one of the big things you do with quality is identify trends in agent performance. Once you can identify trends, you can then see which kinds of coaching that advisor needs, based on the scorecard. Is that a key point, do you think?

Martin: Yeah, I think any output from your QA function, any data really. Data is your targeted enabler to maximize the benefit of coaching and development activity. Things have come a long way, thankfully, in most cases, from QA being seen as a punitive function. Think about how fast-paced contact centres are, how many contacts people deal with, how many interactions. Having a framework that enables you to take the really key parts of that, to enable your teams to be better at delivering their job: that’s how critical the QA function is.

Charlie: Yeah, and I think you’ve mentioned there that a lot of people are using the same QA processes that they did maybe 10 years ago. Are there still any common mistakes that you notice contact centres making, that perhaps they were making 10 or 15 years ago?

Martin: I think the biggest mistakes, again, are still viewing the QA process as punitive, as purely compliance-driven activity; that maybe it’s a way of catching people doing the wrong things, a business prevention unit. Rather than seeing what is possible: that QA is a rich seam of valuable data and actionable insight that’s going to enhance the experience of people in your contact centre right now and therefore enable them to deliver a great experience for your customers. I also think QA is maybe overlooked on occasion. It would be a mistake if, as a senior leader, you haven’t canvassed and sat down with QA to ask them what they’re hearing and how people could get better.

We’re all fixated on data, and it absolutely has its place. But, more often than not, if you’re talking to QA experts, sometimes it’s going to be the stories and the anecdotal information that is shared that really resonates. So, like I say, if you’re not sitting down with your teams and really picking their brains about what good looks like and what needs to be improved, there’s a huge area there that is under-utilized and that could make a significant difference. And then the other thing that I would say is this: places that absolutely get it, that I would say are really progressive, are ones where the relationship between operations and QA is a really positive one, where they’re able to challenge but they feel like they’re working together. If there’s any conflict at all, it’s never going to be as effective as it can be, and no one benefits.

But where operations and QA are working together, and they have regular routines, regular agreed communication and a clear escalation process, and they calibrate together, approach everything together and each respects and values the other, these are places that are absolutely award-winning, delivering great experiences for their customers and having energized teams, both in QA and in ops.

Charlie: Yeah. I think that’s an interesting point there, the connection between QA and the operations teams, because we all know the nature of call centres now; there’s constant firefighting going on. And when that happens, quality monitoring sessions can be the first thing to be pushed to one side. So, having those regular routines and scheduled times in place is very important, I think. And in terms of just generally going back to the scorecard, are there any mistakes that you still see with businesses creating a scorecard? Maybe one that comes to my mind is not including an N/A (not applicable) option on the scorecards. Is that a problem that you see?

Martin: Yeah, absolutely. And it can make a huge difference to the score. Again, one of the real challenges that people face is, does the scorecard match the values and the external perception of the company? If they’re at odds with each other, you’re only heading towards conflict. Another would just be to look at the number of line items you have. One of the questions we get asked more than any other is, what’s the right number of line items, or how should the scorecard be structured by channel, and all of these different things. And often it needs people to take a step back.

So, if your interaction is relatively transactional, with quite a short time to transaction, whether it’s call or chat, and you have an overly onerous, long-winded scorecard, the negative impact of that is widespread, in that it undermines the credibility of QA and provides unnecessary work. So, I would just take that wider view of your scorecard: when was it last reviewed? You mentioned there that counting non-applicable as a positive outcome is a common error. And I think one of the key ones, though, is: does the scorecard allow you to reflect, holistically, what the interaction was like for the customer? Because too often you can have an interaction that scores really positively on every single line item, but, if you’ve listened to it, you’re left with this nagging sense that it wasn’t a great interaction for the customer. You may have scored it and let the interaction go without the ability to reflect that. And the reverse can also be true: it hasn’t scored highly, but this was a really positive interaction for the customer, where they got the right outcome.

So, one of the things that we’ll see is that people don’t have the ability to reflect that somewhere. And it can be as simple as just asking the question at the end and allowing whoever is scoring the interaction, whether it’s someone in QA or a team leader, to say: this was a positive overall experience for the customer; there are some learning areas that we need to go through, some things you’ve done well and some you haven’t, but, overall, a great experience. Too often scorecards miss that.
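To show why a missing N/A option ‘can make a huge difference to the score’, here is a minimal sketch contrasting two ways of handling a non-applicable line item. The line items and marks are hypothetical.

```python
# One evaluation with a non-applicable line item (marks are hypothetical).
evaluation = {"greeting": "pass", "verification": "n/a",
              "resolution": "fail", "closing": "pass"}

# Common error: counting N/A as a positive outcome -> 3/4 = 75%
naive = sum(mark != "fail" for mark in evaluation.values()) / len(evaluation)

# Better: exclude N/A items from both numerator and denominator -> 2/3 = 67%
applicable = [mark for mark in evaluation.values() if mark != "n/a"]
adjusted = sum(mark == "pass" for mark in applicable) / len(applicable)

print(f"N/A counted as positive: {naive:.0%}; N/A excluded: {adjusted:.0%}")
```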

Charlie: I think that’s a particularly poignant point to end on there. Taking a step back from the over-analysis of it, almost, and just asking yourself: was the customer happy, and was it a generally positive overall interaction?

Author: Robyn Coppell

Published On: 22nd Oct 2019 - Last modified: 22nd Apr 2024
Read more about - Podcasts


Recommended Articles

Call Centre Quality Assurance: How to Create an Excellent QA Programme
How to Create a Contact Centre Quality Scorecard - With a Template Example
Call Centre Quality Parameters: Creating the Ideal Scorecard and Metric
How to Calibrate Quality Scores