Many call centres have a love/hate relationship with quality assurance monitoring: management loves it, but advisors can hate it.
Management has a legitimate interest in monitoring the quality of service their teams are providing, but monitoring can nevertheless be a hard sell to agents. And, if you look at it from their perspective, it’s pretty easy to see why.
The good news is that you will have much greater success getting your call centre agents on board if you address their concerns ahead of time.
Talking with our clients, we have identified some of the primary concerns agents have with being monitored.
Loss of Privacy
How would you feel if someone were looking over your shoulder and judging you during your entire shift? It is hard to be passionate about doing your job if you think someone is listening in on every conversation.
Monitoring can make agents self-conscious, and that’s not conducive to good performance.
Give agents a thorough explanation of how monitoring works. Is someone listening live to every call? Or are calls recorded and spot-checked?
Removing the mystery and giving agents a realistic expectation of the likelihood that any one call is being monitored can go a long way toward reducing anxiety.
Lack of Transparency
This is another situation where it’s helpful to put the shoe on the other foot: How would you like to know that you’re constantly being graded but have no understanding of what you’re being graded on?
Advisors need insight not only into what they’re being measured on, but also who is doing the measuring, how it’s done, and why it matters.
Train advisors on the quality assurance programme: who, what, why, when, and how. Why not run internal focus groups and ask your agents what they think you should measure?
Contradictory Metrics
Scoring advisors on metrics that are by nature contradictory undermines the credibility of the entire programme. A good example is measuring both First Call Resolution (FCR) and number of calls per hour.
Measuring agents on the number of calls they handle per hour encourages them to end calls quickly, which works directly against resolving issues on the first call.
Choose metrics carefully. If it’s necessary to include metrics that may appear contradictory, explain how that apparent conflict will be handled.
Poor Implementation
Even the best quality assurance monitoring can go awry if it’s implemented poorly. Things can go wrong in several ways, ranging from scores being used punitively to metrics that are scored differently by different managers.
Make sure managers and anyone else involved in quality assurance understand why and how monitoring is to be used.
For instance, it shouldn’t be used as an excuse for getting rid of an employee with whom a manager has a personal conflict. Your calibration process should be explained and should support consistent scoring.
In addition, evaluators should receive regular training on how to interpret customer interactions and be encouraged to annotate their evaluations with coaching tips.
Measurement on Things Outside the Advisor’s Control
Sometimes an agent can do everything right, but the call still goes wrong.
- Some customers have unreasonable demands that agents don’t have the authority to grant.
- Some customers don’t follow the anticipated script, so the agent must make things up as they go along.
- Sometimes call volume can be unexpectedly light, driving down the metric of how many calls an agent handles during their shift.
- Sometimes agents call in sick, causing a spike in the time it takes other agents to answer a call.
- Management decisions on things like staffing can have a direct impact on agent effectiveness.
Train managers to consider extenuating circumstances before reacting to agents’ scores — and give them the flexibility to do so.
In addition, give agents a sense of control by granting them direct access to their own scores, so that they see the same information that management sees.
Quality assurance is an important part of any call centre operation: management has every right to expect agents to do the job they’re paid to do in the way they’re paid to do it. But any quality assurance programme needs some built-in flexibility if you want the support of your advisors.
A pattern of behaviour that does not follow established procedures is a cause for increased coaching, but a one-time incident where the agent did the best they could under the circumstances should not be used punitively.
Neither should advisors be punished for failing to meet measurements that pull in opposite directions, e.g. keep talk time to a minimum but fully explore the customer’s problem!
The bottom line is that advisors are a lot more accepting of call monitoring when they trust management, understand the goals and know the programme will be used with a little compassion and common sense.