In the penultimate article in the series of Customer Service Strategies, Paul Cooper argues that it is important to address the right metrics.
A short while ago Call Centre Helper published an excellent article on the Top 10 most important contact centre metrics, as currently used by you, and I’d say it was spot on in its listing. Just to remind you, here is the list:
- Quality Scores
- First-Time Call Resolution
- Customer Satisfaction
- Service Level
- Average Handling Time
- Right Party Connects
- Net Promoter
- Forecast Accuracy
- Revenue Per Call
I would also say, however, that in many cases I have observed these metrics be a hindrance, not a help.
I don’t mean any one of them specifically. What I mean is that, in many contact centres, measurement is being done for the sake of it, rather than to learn and improve.
Too many computer-generated metrics
This is being made worse by modern technology, where the ability to analyse things to death can also undermine good, clear decision making and change.
For example, why do organisations carry out customer satisfaction surveys?
- To please the bosses? Well, isn’t that just the fear motive coming through?
- To get self-gratification? In this one I just see conceit.
- To please staff? Well, I find that very patronising.
Actually, there is only one good reason to do customer satisfaction surveys – to learn about what you need to improve, and then to improve it.
Surveys should be done regularly, in the same format, and with questions that matter to the customer, not just the organisation. Too many of these surveys, especially those silly cards in hotels, etc., are about the hygiene factors – was it clean? Did we smile? And the like.
What really matters to the customer?
This isn’t what matters to the customer, who sees these as the superficial side of good service. A clean room/bathroom that is so small that you can’t turn round in it, with a hard bed, not enough pillows and constant noise outside may tick the boxes the hotel wants to hear, but does nothing for customer satisfaction.
Recently I went through Stansted Airport. The security checking was its usual self. There was a lady at the other end who had a list of 10 questions. Nine of them were the hygiene ones – were they polite, did they get you to take your shoes off, etc. I answered each one truthfully, with 8-10 scores. The last question was how had I ‘enjoyed’ the whole experience, to which I answered ‘zero’. She was taken aback and couldn’t understand the response. I’m sure you all do.
Most measurements are dependent on the targets set
Another point I want to make is that monitoring and measuring things is all very well, but most measurements are dependent on the targets set. Here, again, I have a problem with many organisations.
Suppose on a particular measurement – first-time resolution, for example – the team makes 92%. Management, predictably, will then raise the target to, say, 93% and away we go again. Why? Where’s the science in that? Surely the only acceptable target is 100%, but an intelligent organisation will go further, and measure things in such a way as to be able to explain why this isn’t always hit, and then introduce changes to make it more likely. After all, if an airline pilot had any other target than 100% for landings and take-offs, I don’t think we’d be flying with that company!
The golden rules
So, the mantra must be to:
- Measure the right things, not the easiest things.
- Only measure things that will be reviewed/analysed, leading to actions being taken.
- Measure what is relevant to the times and the needs: this should change over time.
- However, measuring and comparing the same things over time is much better than just a one-off view/comparison, as trends can be observed and learned from.
I do believe that it is nearly always essential to measure employee and customer satisfaction/delight, and learn from the results. However, don’t mix this up with measuring loyalty – for that a metric like Net Promoter Score is far more relevant.
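For readers unfamiliar with how Net Promoter Score differs from a satisfaction average, here is a minimal sketch of the standard NPS calculation (the survey numbers are made up for illustration):

```python
# Standard Net Promoter Score calculation.
# Responses are 0-10 answers to "How likely are you to recommend us?":
# 9-10 = promoters, 7-8 = passives, 0-6 = detractors.
# NPS = (% promoters) - (% detractors), giving a score from -100 to +100.

def net_promoter_score(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Illustrative sample: 5 promoters, 3 passives, 2 detractors
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 3, 6]))  # prints 30
```

Note that the passives drop out of the score entirely – which is exactly why NPS tracks loyalty rather than general contentment.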
Benchmarking, also, can be a key part of your measurement programme. Learning from other sectors can often be much more beneficial than same-sector comparisons, as one can see new ways of approaching issues and problems.
Sometimes more is less
When I was doing the judging for awards programmes I saw some excellent measurement systems that were used intelligently to improve organisations.
One of the best quotes I picked up at the time was the following, which is a good lesson for us all to ponder:
“We stopped over-controlling the amount of time advisors spent on the telephone. Average call handling time went up 10 seconds, but overall call volume went down 10% due to improved call resolution!” – A major credit card contact centre.
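The arithmetic behind that quote is worth sketching. The quote gives only the two deltas, so the baseline figures below are hypothetical, but under any baseline a 10% drop in volume outweighs a 10-second rise in handle time:

```python
# Hypothetical baseline (assumed, not from the quote): 10,000 calls/day
# at 300 seconds average handling time. Per the quote, AHT rises 10s
# while volume falls 10% due to improved call resolution.

baseline_calls, baseline_aht = 10_000, 300   # assumed figures
new_calls = baseline_calls * 0.90            # 10% fewer calls
new_aht = baseline_aht + 10                  # 10 seconds longer per call

before = baseline_calls * baseline_aht       # total handling seconds/day
after = new_calls * new_aht

print(f"workload change: {100 * (after - before) / before:.1f}%")  # -7.0%
```

With these assumed numbers, total advisor workload falls 7% even though each call got longer.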
Sometimes, taking something out is better than putting something in.
Paul Cooper is a Director at Customer Plus