How Do Other People Quality Monitor Their Calls?


In our contact centre we have been using quality monitoring forms with a scale of 1–5 for each item being marked. The problem is that what one person sees as a 3, someone else might mark as a 5, so scoring was not consistent.

I made up a new form and it seems to be working a bit better although it is not perfect by any means.

I am interested to hear how other people Quality Monitor their calls and if possible it would be great to see some examples.

I am putting together a new form that will be scored as yes/no, a work in progress.

Looking forward to hearing how everyone else does it.

Question asked by thornden4

Calibration Sessions

Do you have calibration sessions? Basically, this is when everyone involved in monitoring scores the same call, then discusses any points of variation and agrees a consensus standpoint.

You can do this in a live environment (more time consuming), i.e. play the call in the meeting room while each monitor scores it, then discuss how each point was scored at the end of the call.

Alternatively, get everyone to monitor the call first and have their scored monitoring forms with them – discuss point by point and ensure that you have the call available to play to support the consensus view. (I find this works best)

The above assumes you have the ability to record calls?

Further, do you (or can you) break down or add ‘criteria’ for each point, describing the behaviours or actions the agent should demonstrate to achieve each score?

For example: score ‘0’ – agent did not use the customer’s name at all during the call; ‘1’ – agent used the customer’s name once; … ‘5’ – agent used the customer’s name at the beginning, middle and end of the call and personalised the closing of the call.

With thanks to Bunnycatz

Do You Need to Change Your Current Form?

We recently changed our quality monitoring form to something less subjective. It has the advantage of greater consistency in scoring, but it can be a little limiting in what you are asking.

Previously we could ask “How friendly was the agent?” and give a mark out of 10. Putting it as a simple yes or no results in either less information or the need for several more specific questions.

In my opinion it’s worth looking at whether you do need to change from your current form. We have 3 people who monitor quality, all trained by the same person, and whilst they might mark a call differently on occasion, they have less than 0.2% variance in their average total scores, so we didn’t have much need to change.

With thanks to KevinP

Banding

Thanks for the replies, guys. We do regular calibration sessions, although people still find it difficult to decide what is a 3 and what is a 2. I am trialling a yes/no system and people seem to like it, as it takes the guesswork out – an advisor either did something or didn’t. Can anyone send me an example of their forms?

Also I am interested in banding. Ours is as follows:

  • 91%–100% – WOW
  • 82%–90% – Spot On
  • 64%–81% – Miss the mark
  • 63% and below – OUCH

I feel the WOW benchmark is very high, as not many people are getting this score. A lot get 89% or 90%, and it is a bit disheartening.

With thanks to thornden4
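
For illustration, here is a minimal sketch of how a banded scheme like the one above could be applied to a percentage score. The band names and thresholds are taken from thornden4’s list; the function name is invented, and scores below 64% are assumed to fall into ‘OUCH’.

```python
def band_for_score(percentage: float) -> str:
    """Map a quality score (0-100%) onto the bands listed above."""
    if percentage >= 91:
        return "WOW"
    if percentage >= 82:
        return "Spot On"
    if percentage >= 64:
        return "Miss the mark"
    return "OUCH"

# An advisor scoring 89% lands just below the WOW band
print(band_for_score(89))  # -> Spot On
```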

What Type of Calls are Your Agents Handling?

What type of calls are your agents handling? Are they giving technical advice, selling, or providing customer service? And what sort of products are they supporting?

I don’t think I can send you an actual QM form for confidentiality reasons but if I have a better idea of the actual role your agents have I may be able to offer something useful.

With thanks to Bunnycatz

I Am Listening for the Empathy and Personality

I’m just looking at this topic again. The hard bit for me is that, without knowing the technical parts of a call, I am listening for empathy and personality. I’d mark a great call highly from that perspective, but a technical scorecard could mark it low.

Getting the weightings right is a big issue. I really like the brackets above – spot on for KISS.

With thanks to gbwindy

The ‘Score the Scorer’ Programme

We are new to the contact centre world and are predominantly an outbound-based business with 9 sales centres in the UK.

We have a QA form with 20 questions which covers the pattern of the sale, data input into the system, and soft skills.

It’s a yes/no and N/A form. A yes is only a yes if the item is completed 100%. If there’s any sense of ‘not quite’, then it’s scored ‘no’, but the manager needs to feed back and coach through why it didn’t get a yes.

It began with low scores but we are seeing them improve. A ‘score the scorer’ programme is being implemented to get parity between the managers, but agreed, there will always be some element of subjectivity.

Happy to send a copy if email addresses are provided… I would like to see others with a weighting, as we avoided this because it appeared to give more room for disparity.

With thanks to schoalesytj

Weighting and Fatal Error

Sounds as if your form is working and you are beginning to see improvements to the scores indicating quality is improving.

In the past we have used weighting when we wanted to focus on a specific issue, for example DPA, opening and closing scripts, call logging, use of the customer’s name and so on.

The weighting is used to generate a Fatal Error where an agent is not demonstrating the desired behaviour for the area of focus. Failing calls then have to be linked in with disciplinary action where improvement is not seen following coaching and guidance – this will drive improvements.

It is, of course, much easier to build a form and Fatal Errors around definite factors such as accurately logging the call (which the person monitoring can physically check) or using an opening or closing script than around the more subjective ‘soft’ areas, such as appropriate tone of voice, which may be the areas of disparity thornden4 is finding?

With thanks to Bunnycatz

Rating Calculation

I would gladly assist you with examples of assessment forms. We are using the yes/no principle and it works well. We do, however, separate questions into ‘winning’ and ‘hygiene’ questions, where the winning questions have to do with customer interaction etc. and the hygiene questions with things like compliance and business process followed.

The rating is calculated on two criteria – a grade (A = pass, B = pass, C = fail, D = fail) as well as a percentage outcome for the call – and is directly linked to our company’s performance development system.

What this entails is that an agent can still achieve a 69% quality rating, but due to hygiene errors they will have a D (fail) for the call.

This helps to build trends across the entire assessment and see the specific questions where agents are getting it wrong, i.e. the agent can be brilliant in the interaction with the client but fail to mention compliance-related information.

With thanks to HASTUR
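
As a rough sketch of the kind of split HASTUR describes, the example below scores ‘winning’ and ‘hygiene’ questions together for the percentage, but forces a D (fail) whenever any hygiene question is missed. The question names and grade boundaries are assumptions for illustration, not HASTUR’s actual form.

```python
def score_call(winning: dict[str, bool], hygiene: dict[str, bool]) -> tuple[float, str]:
    """Return (percentage, grade) for a call scored with yes/no questions."""
    answers = list(winning.values()) + list(hygiene.values())
    percentage = 100 * sum(answers) / len(answers)

    if not all(hygiene.values()):
        grade = "D"          # any hygiene miss is a fail, whatever the percentage
    elif percentage >= 85:   # assumed boundary for an A
        grade = "A"
    elif percentage >= 70:   # assumed boundary for a B
        grade = "B"
    else:
        grade = "C"          # C and D are both fails in this scheme
    return percentage, grade

# A strong interaction can still fail because of a single hygiene (compliance) miss
winning = {"built rapport": True, "used customer name": True, "resolved query": True}
hygiene = {"completed DPA check": False, "logged the call": True}
print(score_call(winning, hygiene))  # -> (80.0, 'D')
```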

Break Down into Categories

I am implementing a quality system for my customer service teams, as the one they currently use is very subjective. What I have done is break the call down into categories from start to end, including everything an operator should cover in that call, and it is as simple as yes, no or N/A. Add up all the yeses and all the nos and this gives you a percentage.

Work out your benchmark and then you have a percentage against it. This can be done individually or you can do an overall department %.

With thanks to sadams
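
A minimal sketch of the arithmetic sadams describes – count the yeses and nos, leave the N/As out of the total, and express the yeses as a percentage. The category names are invented for the example.

```python
def percentage_score(answers: dict[str, str]) -> float:
    """Percentage of 'yes' answers; 'na' items are excluded from the total."""
    counted = [a for a in answers.values() if a in ("yes", "no")]
    return 100 * counted.count("yes") / len(counted) if counted else 0.0

call = {
    "greeting": "yes",
    "verified caller": "yes",
    "offered further help": "na",
    "summarised next steps": "no",
}
print(percentage_score(call))  # 2 yes out of 3 counted items -> roughly 66.7%
```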

Fatal Errors

I am also implementing an assessment based on yes/no, where points are then scored.

Where do fatal errors fit into this? Would anyone have a current programme that includes this in the score?

With thanks to Nymeria

Calibrate Once a Month

I run the Quality Assessment for a large group of estate agents and we regularly calibrate our quality (at least once a month). We use a yes/no/NA form which has points allocated to each answer, giving an overall percentage score.

  • Below 60% is a fail
  • 61%–74% is average
  • 75%–96% is good
  • Above 96% is outstanding.

This way, when quality is reported to each sales team, we can compare it to competing sales teams on the sales floor. Also, when the sales floor report is issued at the end of the working month, teams can see where they stand on quality compared with other teams doing the same job but for a different region, for example.

This particular approach works really well for us here, as it’s a simple yes/no or N/A, so we don’t have the issue where what one person might feel is a 3, another may feel is a 5. Another benefit is that, when we are feeding back to agents, they motivate themselves to beat their percentage score the next month (often because colleagues’ scores are higher than theirs).

With thanks to QALoz

Yes/No Form

I like the idea of a yes/no form – it takes away the ambiguity of it all. I am trying to put one together at present. We are going to have a workshop with our Quality Team and Team Managers to thrash this out, as the form we are using is not working.

With thanks to thornden4

Quality Monitoring Form

Just wanted to thank thornden4 for sending his quality monitoring form; it is now available to download.

The password to unlock the form for making changes is: Example1 (although I seem to be able to edit the form without the password!)

The scores are worked out as follows:

  • The scores in RED are mandatory, and the marker chooses a score from the right-hand column.
  • Scores in Blue can be set to 0 to make that part non-applicable.

With thanks to Rachael

Use of Slang

The form is quite interesting – we did something similar, and the use of slang was an eye-opener.

We were originally really hot on this – ‘yeps’, ‘yeahs’, etc.

Now, though, we are allowing their use as long as it’s not overused (every other word, etc.).

Whenever we discuss slang we are always asked for a definitive list of what is and isn’t slang, and I found this hard, as realistically there are so many terms that if we miss one we open ourselves up for that term to be used.

How does everyone get round this?

With thanks to ERAC123

Mirrors Many Aspects of Various Forms

It is interesting and mirrors many aspects of various forms I have used over the years, which I guess means we all look for the same sort of things, which I find rather reassuring :O).

With thanks to Bunnycatz

Empowering Agents

The whole area of QA really intrigues me, especially when it comes to looking at ways of empowering agents to allow ‘them’ to make their own decisions.

With thanks to ciaranball

Edit the Form

Just found this on the site, and you can edit the form! Free Call Monitoring, Evaluation and Coaching Form

With thanks to ClareJ

Have a Weighting System

When you are creating a quality monitoring form you should always try to put a weighting system in place, so that the most important factors are weighted more heavily to reflect the impact they make on the quality of the call, email or webchat.

It definitely should be reviewed and updated over time!

With thanks to A Byrne
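
As a rough illustration of this weighting idea, the sketch below gives each criterion a weight reflecting its importance and returns the weighted share of criteria marked yes. The criteria and weights are made up for the example.

```python
# Hypothetical criteria, with heavier weights for the factors that matter most
WEIGHTS = {
    "resolved the query": 5,
    "completed security checks": 4,
    "used the customer's name": 2,
    "avoided slang": 1,
}

def weighted_score(answers: dict[str, bool]) -> float:
    """Weighted percentage: missing a heavy criterion costs more than a light one."""
    total = sum(WEIGHTS.values())
    achieved = sum(WEIGHTS[criterion] for criterion, passed in answers.items() if passed)
    return 100 * achieved / total

print(weighted_score({
    "resolved the query": True,
    "completed security checks": True,
    "used the customer's name": False,
    "avoided slang": True,
}))  # roughly 83.3 - only a low-weight criterion was missed
```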

Link with FCR

Linking this with first call resolution is also good.

Call monitoring seems to work with the first indicator being a clear yes or no – easy to understand and to give feedback on, as you either did it or you did not – then how effectively you did it as a grade of performance, followed by the customer outcome, resolved or not resolved for the customer… everyone wins.

With thanks to JudithCC

Rating

I have been working with a form where I use a rating: instead of 0 to 5 with 5 being very effective, I use 0 – 3 – 5, where 5 remains very effective, 3 is average and 0 is poor. It makes it easier to calibrate when a team is grading, even though there are still times when we are off.

This helps me determine how strong the agent is in certain areas and where the areas for improvement are. But reading about the yes/no form is interesting and I would like to view one. I don’t mind sharing mine.

With thanks to sburgess

Identify the Requirements

I have been in the call monitoring process for a long time now, have used various types of forms and have devised a few.

It all depends on how simple or complex the process is and what results you want to derive from the call monitoring process.

I am sure once you have identified these requirements you will have an idea of what you want to include in the form and to what extent you want the form to give you the information you are looking for.

I can help you in this area once I know exactly what you need.

You may use a simple yes or no checklist, or you may also want to use ‘N/A’.

You may use a plain ‘1’ and ‘0’ method, or you may use a weighting for each parameter and derive a weighted average score.

With thanks to Parag

Align with Business Objectives

Our QA scoring forms have evolved continually over the last 2 years until we arrived at one which delivers results. That being said, it is probably time to ‘move the goalposts’ again.

We use a simple YES/NO/NA form with individual weightings for a total of 14 attributes. The form is broken into 3 themes and is aligned with the overall objectives of the wider organisation.

  1. Relationship building
  2. FCR
  3. System use

Our emails are also audited with a similar goal in mind, but based on email etiquette rather than the relationship building of the former.

We have a number of agents working in different teams providing different services: technical advice, customer service issues, and returns and warranty. Each form is adapted to serve that area of the business correctly, and also the different channels.

We have found our most recent form has positively impacted our NPS, FCR and agent performance and confidence in their role. This, combined with effective regular coaching sessions, has resulted in a long-serving, competent workforce with a very low level of staff turnover.

PS: we deal primarily with inbound CS calls for an e-commerce business.

With thanks to Stephen

Allow Combinations

We help clients to build scorecards, and the most effective method is to have a combination of answer types: some questions have yes/no/NA answers, some are 1–3 or 1–5, or even very poor, inadequate, adequate, good or excellent.

Having the capability for additional comments, fail points and star marks or kudos also proves beneficial. Having scorecards that can be easily adapted to fit the need without losing historical data is a key advantage for making rapid changes.

With thanks to A Byrne

Test the Form

I am currently testing a form with some of the same characteristics as you mention – yes/no/NA, areas with poor, average, good and excellent, and an area for comments from the reviewer. It seems to be working on the two campaigns I assigned it to.

I would like to try using it on the bigger campaign, but it feels like there is something crucial missing that I can’t identify as yet. (Our centre is an outbound centre running different sales campaigns all at the same time.)

With thanks to sburgess

Areas to Consider

There are a few areas to consider: Fail All, Sections/Headings, Organisational Goals, and Attaching Calls.

Are you using Fail All sections? These are generally used for high-priority security information, e.g. “Did the agent verify the customer’s account details?” (answered yes/no/NA). If this is a requirement and the answer is no, then this could be a fail all, as this is a major breach of policy.

Have you broken your criteria into sections?

You should also try to incorporate the organisational goals into the scorecard, but this may be a design element you need to tackle with senior managers, working on a similar principle to a balanced scorecard.

Attaching the call/chat/mail to the scorecard is a good idea to increase understanding and transparency… do these make sense?

With thanks to A Byrne
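
To illustrate the Fail All idea described above, this sketch overrides an otherwise normal yes/no percentage with an automatic fail whenever a critical (fail-all) question is answered no. The questions and the 60% pass mark are assumptions for the example.

```python
FAIL_ALL_QUESTIONS = {"verified customer account details"}  # critical policy items

def evaluate(answers: dict[str, bool]) -> tuple[float, bool]:
    """Return (percentage, passed); a 'no' on any fail-all question fails the call."""
    percentage = 100 * sum(answers.values()) / len(answers)
    fail_all_breached = any(not answers[q] for q in FAIL_ALL_QUESTIONS if q in answers)
    passed = (not fail_all_breached) and percentage >= 60  # assumed pass mark
    return percentage, passed

answers = {
    "verified customer account details": False,
    "greeted the caller": True,
    "offered further help": True,
}
print(evaluate(answers))  # a decent percentage, but failed on the fail-all question
```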

Fail All

Yeah, it does make sense. Some of these I do have, like a Fail All based on “not following client requirements” and about three other reasons. We do send off the recordings as well, so the agent can listen to them to justify the grade and for coaching reasons.

Yeah, I have already talked with upper management and I’m incorporating what they are expecting from the form and the agents.

Yes, I have sections on the form, and within those are subsections with specific points to gain or lose.

I have the form in a test phase on two campaigns and it seems to be working, but I’m not sure if it would work the same on a campaign with more client requirements.

With thanks to sburgess

Testing, Testing, Testing

Doing a lot of testing before a major roll-out is always a good idea too.

With thanks to A Byrne

Here is the Latest Quality Monitoring Form

Here is a link to the latest Quality Monitoring Form

With thanks to Jonty

Author: Jonty Pearce

Published On: 12th Apr 2022 - Last modified: 14th Apr 2022