How to Calibrate Quality Scores


Here we take you through our recommended process to calibrate quality scores.

Why Should You Calibrate Quality Scores?

While quality scores are often seen simply as a method of measuring advisor performance, calibrating them can also improve the contact centre environment by removing perceived bias and preventing any sense of inequality.

– Making the Process Fair

By refining quality monitoring, you can remove complaints about the process being unfair, such as: “well, … also forgot to repeat the customer’s name and they didn’t get marked down.”

This is because the procedure becomes standardised, so it does not matter who takes charge of the monitoring process.

– Consistency Across the Business

Also, calibration allows for consistent service across the business and validates contact centre procedures as well as performance standards.

Call Calibration Template: Our Step-by-Step Process

Here we present an eight-step process for calibrating quality scores that combines calibration sessions with gathering data and looking for deviations.

STEP 1 – Assign an Overall Leader for the Quality Process

To successfully oversee the calibration process, it is wise to have a leader who can both have the final say on which elements of the call should be assessed and bear the responsibility and consequences of that decision.

Tom Vander Well, Executive Vice President at c wenger group, says: “I think that one mistake in calibration is that there is no clear authority to make the decision.

“Say there are two different views on the same situation, how should we assess one particular behaviour in certain situations? Is this the decision that we’re making because it fits our brand? Or is it part of our strategic plan?

“So, there might need to be someone holding people accountable to that standard. I feel often that calibration, it just becomes an ongoing debate with no resolution.”

It seems important to have someone in charge of the process who can gather everyone’s opinions and make a decision with the company’s best interests in mind.

STEP 2 – Discuss Your Standards for Quality

The next step is determining whether to set a very high standard for quality, so that advisors are challenged to change their behaviours and achieve a high standard of customer service, or set a simpler standard.

Setting a simpler standard – where advisors achieve 100%, 90% of the time – provides advisors with evidence that they are doing a good job and may boost morale.

STEP 3 – Decide on Key Behaviours to Target

Once you have determined your standards for quality, it is time to choose which behaviours to target. Tom Vander Well recommends that most contact centres should target these three in particular:

  • First Contact Resolution (FCR)
  • Courtesy, friendliness and soft skills
  • Efficiency (lowering customer effort)

However, other contact centres, for example those in the financial sector, must also target verification and compliance.

STEP 4 – Ensure That Quality Targets Have a Purpose

Now that certain behaviours have been targeted, Ian Robertson, Customer Contact Specialist at The Forum, suggests asking:

  • What do the behaviours mean for our customers?
  • What do the behaviours mean for our colleagues?
  • What do the behaviours mean for our company/organisation?

By doing so, Ian Robertson says: “you can be objective and focused on the outcome, which should make it much easier to reach agreement.” 

So, if everybody involved in the process can agree on why FCR, for example, is important and why advisors are being targeted on it, it becomes much easier to agree on what counts as FCR and which behaviours to look out for.

STEP 5 – Formulate a Quality Scorecard

Having listed the behaviours to target, it is time now to discuss the elements of each behaviour on which advisors can be scored.

For example, to target the soft skill of “hold etiquette”, Tom recommends the criteria:

  • Does not mute or leave caller in silence instead of placing caller on hold
  • Seeks caller’s permission to place him/her on hold
  • Thanks caller for holding when returning to the line
  • Apologises for wait on hold if length exceeds 30 seconds

Whereas for the vaguer topic of call resolution, Tom recommends:

  • Makes complete effort to provide resolution for caller’s question(s)
  • Offers call-back if wait time will/does reach 3 minutes
  • Provides time frame for call-back or follow-up correspondence
  • Confirms phone number for call-back

But these are only examples – everybody involved in the process should discuss the elements of each behaviour and together create a unique scorecard that the leader agrees aligns with business goals.
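To make the scorecard concrete, here is a minimal sketch of one way to represent it in code – a yes/no checklist per behaviour – using Tom’s hold-etiquette criteria from above. The structure and the percentage scoring are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class Behaviour:
    """One targeted behaviour and the yes/no criteria it is scored on."""
    name: str
    criteria: list[str]
    ticks: dict[str, bool] = field(default_factory=dict)  # analyst's verdict per criterion

    def score(self) -> float:
        """Percentage of criteria the advisor met on this contact."""
        return 100 * sum(self.ticks.values()) / len(self.criteria)

hold_etiquette = Behaviour(
    name="Hold etiquette",
    criteria=[
        "Does not mute or leave caller in silence instead of hold",
        "Seeks caller's permission to place them on hold",
        "Thanks caller for holding when returning to the line",
        "Apologises for wait on hold exceeding 30 seconds",
    ],
)

# An analyst scoring one call: all criteria met except the apology
hold_etiquette.ticks = dict.fromkeys(hold_etiquette.criteria, True)
hold_etiquette.ticks["Apologises for wait on hold exceeding 30 seconds"] = False
print(f"{hold_etiquette.name}: {hold_etiquette.score():.0f}%")  # Hold etiquette: 75%
```

Keeping every criterion as a simple tick also makes the deviation analysis in step eight straightforward, since each box can be counted across contacts.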

STEP 6 – Create a Document With Guiding Principles to Remove Subjectivity

Despite creating a scorecard with the team, some call quality analysts may still have a different opinion of what constitutes a “complete effort to provide resolution for caller’s question(s),” for example.

In Tom Vander Well’s experience, “many calibration sessions turn into a war over a small piece of one call because of this.

“I found myself always asking: ‘what’s the principle we can glean from this discussion that will help us be more consistent in scoring all of our calls?’”

So, Tom Vander Well advises keeping “a ‘Calibration Monitor’ document that tries to summarise the general principles as discussed in the session, which will aid all analysts with future calls and provide guidelines to split the objective from the subjective.”

STEP 7 – Hold Calibration Sessions

Now that the scorecards are primed and analysts know what is expected of them, it is time to hold calibration sessions, to ensure analysts are in tune with one another.

As Tom says, these sessions involve getting “all analysts in a room and taking one phone call. Each analyst scores the call and then everyone comes together and compares the results. This brings out the differences in the scoring, as a debate will ensue about how to align these things correctly.

“It is very healthy and positive, whilst allowing the leader to manage and say, ‘nope, this is the way we’re going to do it. I want everybody to look at it this way moving forward.’”

It can also be a good idea to focus on a different call type during each session – whether inbound, outbound, sales, etc. – to remove any unnecessary confusion.
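As an illustration of the comparison stage of a session, here is a minimal sketch of tallying agreement on a single call. The scorecard items and each analyst’s verdicts are made-up assumptions:

```python
# Each analyst's ticks for the same call (illustrative data)
session_scores = {
    "Analyst 1": {"greets_caller": True, "avoids_dead_air": True,  "offers_callback": False},
    "Analyst 2": {"greets_caller": True, "avoids_dead_air": False, "offers_callback": False},
    "Analyst 3": {"greets_caller": True, "avoids_dead_air": True,  "offers_callback": True},
}

items = next(iter(session_scores.values())).keys()
for item in items:
    ticks = [scores[item] for scores in session_scores.values()]
    majority = max(set(ticks), key=ticks.count)       # the most common verdict
    agreement = 100 * ticks.count(majority) / len(ticks)
    print(f"{item}: {agreement:.0f}% agreement")      # low agreement = debate this item
```

The items with the lowest agreement are the ones worth debating before the leader sets the standard.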

This step is where many contact centres complete their calibration process, yet there is a significant downside to doing so. This is because, as Tom Vander Well has experienced, certain individuals will do two things.

“One, they’ll say one thing in the calibration session because they know that’s what the leader wants to hear, but then when they go back and analyse the calls, they continue to score it the way they believe it should be done.

“The second thing that I see happen is that analysts will score the call that they know is going to be calibrated one way because they know they’re going into a calibration session.

“So, the analyst may think ‘I’m going to score it this way because I know that’s what’s going to be acceptable in the calibration session’. But they may continue to score actual calls, which they don’t think they’ll necessarily be held accountable for, in another way.”

So, don’t ignore step eight…

STEP 8 – Compare Scores From Different Analysts and Look for Deviations

After 50+ calls have been scored by each analyst, it’s time to record the scorecards for each contact analysed in a program such as Microsoft Excel to find a percentage likelihood of the analyst ticking a certain box.

For example, for each scorecard item, calculate:

Likelihood of Analyst Ticking Box (%) = (Number of Times the Analyst Ticked the Box ÷ Number of Contacts Analysed) × 100

Then, compare each analyst’s percentage against the team’s, element by element, and look for deviations. In the example provided by c wenger group, they look for deviations of plus or minus 10%.

In that example, “Analyst 1” was 10.2% less likely than the rest of the team to note that an advisor tried to “Avoid ‘dead air’”.

So, taking this information into consideration, the team leader can take “Analyst 1” aside and ask them to be slightly more lenient when judging whether dead-air time exceeds seven seconds (unexplained) or 15 seconds (explained).

However, be careful when doing this, as Tom Vander Well warns that “sometimes the person who looks like they’re the outlier is actually the one who’s being more accurate than the rest of the team. But it allows us to have the conversation with them and dig into why there is a difference.”
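As a minimal sketch of this step in Python with pandas (rather than Excel) – the analysts, scorecard items, and tick data below are illustrative, not c wenger group’s figures:

```python
import pandas as pd

# One row per analysed contact, one boolean column per scorecard item
scores = pd.DataFrame({
    "analyst":         ["Analyst 1"] * 3 + ["Analyst 2"] * 3 + ["Analyst 3"] * 3,
    "avoids_dead_air": [True, False, False, True, True, True, True, True, False],
    "offers_callback": [True, True, False, True, False, True, False, True, True],
})

# Likelihood (%) of ticking each box, per the formula above:
# times ticked ÷ contacts analysed × 100
likelihood = scores.groupby("analyst").mean() * 100

# Deviation from the team average, flagged beyond plus or minus 10 points
deviation = likelihood - likelihood.mean()
print(deviation[deviation.abs() > 10].dropna(how="all"))
```

In this illustrative data, “Analyst 1” is flagged as roughly 33 points less likely than the team to credit “avoids_dead_air” – the kind of deviation that prompts the conversation Tom describes, rather than proving anyone wrong.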

What Not to Do

Here is a list of five things that you should not be tempted to do during the eight-step calibration process.

Create a Combined Scorecard in Calibration Sessions

Instead of scoring calls separately and then coming together as a group, some contact centres get everyone in one room to listen to a call, going through the scorecard together, item by item, and taking a vote or discussing each point.

Tom Vander Well adds that he has “found this to probably be the least effective method simply because it goes back to sort of the rules of the playground and the people who have the loudest voices and the strongest opinions dictate the conversation, whilst people who have different opinions but are afraid of speaking out keep quiet, and I think it has limited impact. Sometimes, this can even have a negative influence on quality scoring.”

Make Assumptions Based on Small Sample Sizes

As Tom Vander Well says, the drawback to step eight in our process is that “sometimes, depending on your sample sizes, you have to be very careful with the data, because it may look like one person may be scoring completely differently from the rest of the team.

“But, depending on how many calls you and the team are looking at, and the types of calls that they’re scoring or taking, it may be perfectly justified.”

So, when an analyst has only examined ten calls in a week, do not make any quick assumptions based on deviations. If they have analysed 50 or more, however, that will more than likely provide enough data to examine.
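A rough, illustrative calculation shows why. Using the normal-approximation margin of error for a proportion, and assuming a worst-case 50% tick rate, ten calls tell you very little, while 50 start to be meaningful:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error, in percentage points,
    for a tick rate p observed over n calls."""
    return z * math.sqrt(p * (1 - p) / n) * 100

for n in (10, 50):
    print(f"{n} calls: about ±{margin_of_error(0.5, n):.0f} percentage points")
# 10 calls: about ±31 percentage points
# 50 calls: about ±14 percentage points
```

At ten calls, even a 20-point deviation is well within noise; at 50, it starts to deserve a conversation.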

Rush Through Call Analysis

Tom Vander Well believes that most mistakes during the calibration process are honest ones. In fact, he says that “one of the biggest problems I find is that there’s a deadline by which I have to have all my calls analysed, and due to human nature being human nature, I wait to the last minute.

“Then, all of a sudden, I’ve got to have 40 calls analysed by the end of the day and so I go in and I basically do it as quickly as possible, and don’t take the time to really listen and analyse well and I make mistakes.

“So, some people don’t even do this well – they don’t listen to the call again; they just guess at what was said and assume that they must not have heard it. So, that’s probably the biggest mistake that people make – the honest ones.”

Fail to Come up with an Understanding of When Things Apply

According to Tom Vander Well, another problem with calibration “is just understanding when things apply. Making an apology is a good example for when things haven’t met the caller’s expectations.

“So, in one instance, an advisor may say, ‘well, the customer called and left the message that they needed this’. So, now the advisor has to call back, and after they’ve done so, they may say, ‘well, the customer didn’t seem upset at all and they weren’t yelling or screaming at me, so I didn’t see the need to apologise.’

“Yet, we know from research that resolving issues quickly is a key driver of satisfaction and the fact that the contact centre was not there to answer the phone in the first place, and they had to wait for a call-back, seems enough of a reason to apologise.”

Therefore, it should be noted in the guiding-principles document for analysts (from step six) that advisors should always apologise when calling back a customer, and that they will be scored accordingly.

Not Including Advisors in Calibration Sessions

Involving one or two advisors in calibration sessions can be useful, as it also gives them a greater understanding of how to improve their quality scores and of what is expected from them.

Ancelin Jeremy, a Renewal Manager at ServiceSource, recommends: “asking advisors to calibrate themselves and answer a self-evaluation survey before entering a meeting for calibration.

“This way you’d get them prepared for that conversation and ask them to lead you through their performance-measured call.

“Once you’ve reviewed their goals and pros/cons, you’d be able to pinpoint calibration movements.”

Can You Use Technology Instead?

When it comes to calibrating quality scores, speech analytics can also be used to detect dead-air time, greetings, use of the customer’s name, etc. with great accuracy. This would calibrate each of these elements very efficiently, but such tools may not as easily detect advisor etiquette or soft skills.
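For instance, where an analytics platform can export word-level timestamps, flagging dead air against the seven-second threshold from step eight is a simple calculation. This is a minimal sketch under that assumption – the export format shown is hypothetical, not any particular product’s:

```python
# Hypothetical word-level timestamps (in seconds) from a speech analytics export
words = [
    {"word": "hello", "start": 0.0, "end": 0.4},
    {"word": "let",   "start": 9.1, "end": 9.3},  # 8.7s of silence before this word
    {"word": "me",    "start": 9.3, "end": 9.5},
]

DEAD_AIR_THRESHOLD = 7.0  # seconds of unexplained silence (see step eight)

def find_dead_air(words, threshold=DEAD_AIR_THRESHOLD):
    """Return (start, end) pairs for every silence longer than the threshold."""
    return [
        (prev["end"], curr["start"])
        for prev, curr in zip(words, words[1:])
        if curr["start"] - prev["end"] > threshold
    ]

print(find_dead_air(words))  # [(0.4, 9.1)]
```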

In fact, Tom Vander Well warns that “I have yet to see software that gives one the flexibility truly desired. Most companies end up sacrificing their desire to measure, analyse, and/or report things a certain way because of the constraints inherent in the software.

“If your call data, analysis and reporting is not what you want it to be, and if you feel like you’re sacrificing data/reporting quality because the software ‘doesn’t do that’, then I suggest you consider liberating yourself. If the tool isn’t working, then find a way to utilise a different tool.”

For more advice on putting together a great quality programme, read the recommended articles below.

Author: Robyn Coppell

Published On: 7th Jun 2017 - Last modified: 15th Nov 2023
Read more about - Customer Service Strategy

Recommended Articles

Call Centre Quality Parameters: Creating the Ideal Scorecard and Metric
10 Best Practices for Quality Monitoring
Call Center Quality Assurance Calibration Guidelines