We all measure customer satisfaction. So why is it so hard to drive it up?
Mats Rennstam gives a few pointers.
Most companies already measure customer satisfaction at an enterprise level, but the problem is that this feedback rarely gets acted on.
Asking customers what they feel when they hear your brand name will not help you drive change in the contact centre. Instead, you need to be able to link the feedback to a recent call to your centre and break it down to agent level or, as a minimum, team level.
Feedback to the agent
Where automatic post-call surveys are run and the results are fed back to the agents directly, we have seen dramatic improvements in first-call resolution (FCR) and customer satisfaction.
Yes, there is an element of competitiveness but also (and quite often for the first time in their contact centre career) an agent can actually see that they are making a difference.
They can also experiment with the way they handle customers and see what effect that has on their scores. For instance, if they make an effort to sound happier and more upbeat on calls, and that translates into better customer satisfaction scores, they are going to repeat that behaviour.
Sharing the results down to the agent level helps create a self-developing and learning organisation.
General satisfaction develops general behaviours
Asking about general satisfaction is actually a more useful tool for identifying problems somewhere else in the organisation. If customers rate your agents highly and all your performance metrics are good, especially if they hold up when benchmarked, then you know that it is the product or another area of the business that is at fault.
Breaking customer satisfaction down to team and agent level
One of the major bonuses of breaking customer satisfaction down to team and agent level is finding the most effective agents, as opposed to merely the most efficient, and then being able to replicate their behaviour.
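As an illustration of the effective-versus-efficient distinction, here is a minimal sketch (the agent names, scores and call counts are invented; the article does not specify a data format) that ranks agents both ways:

```python
from collections import defaultdict

# Hypothetical post-call survey responses: (agent, satisfaction score 1-5).
surveys = [
    ("alice", 5), ("alice", 4), ("alice", 5),
    ("bob", 3), ("bob", 2),
    ("carol", 4), ("carol", 4), ("carol", 3),
]
# Hypothetical productivity figures: calls handled per month.
calls_handled = {"alice": 60, "bob": 110, "carol": 75}

scores = defaultdict(list)
for agent, score in surveys:
    scores[agent].append(score)

# "Effective" = highest average satisfaction; "efficient" = most calls handled.
avg_sat = {agent: sum(s) / len(s) for agent, s in scores.items()}
most_effective = max(avg_sat, key=avg_sat.get)
most_efficient = max(calls_handled, key=calls_handled.get)
```

With these made-up numbers the most efficient agent (most calls) is not the most effective one (highest satisfaction), which is exactly the different picture the article describes.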
The difference between quantity and quality
Previously, the most productive agent was seen as the best, but when that view is complemented with customer satisfaction data and the behaviours agents demonstrate, a different picture of who is best appears. And this is something we must address quickly, because data from the Bright Index suggests our industry is getting worse at delivering customer satisfaction, not better. Even if this trend is driven by rising customer expectations rather than a deterioration in service, it still shows that the industry must do more.
Employee satisfaction and engagement
Of course we want happy staff but although general satisfaction affects their behaviour, it is how engaged they are with the customers that affects customer satisfaction and loyalty.
First-call resolution (FCR) and hold time used to be seen as the top customer satisfaction drivers.
From our customer satisfaction surveys carried out in parallel with monitoring of delivered service levels, we see that an additional 30 seconds of hold time has little effect on satisfaction, but even a small dip in agent engagement sends it through the floor.
The key metric here is engagement, but to drive it you need to measure the three key drivers of motivation and engagement.
Employee satisfaction metrics
The most relevant employee evaluation metrics are:
Linking the metrics
By measuring these key metrics simultaneously, a new world opens up. As soon as you see a movement in one area you can go back to the other two to see what caused it, because they are intrinsically linked. If, in addition, you break the results down to team or agent level, you will have a very powerful tool helping you, for example:
- find out how far you can turn down your service levels without affecting customer satisfaction
- find the most effective agents as opposed to the most efficient
- find correlations: what drives sales, FCR, customer satisfaction and so on?
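The correlation hunting in the last bullet can be sketched in a few lines. This is only an illustration with invented per-agent figures (the article does not publish any data), using the standard Pearson coefficient; a strong positive value means two metrics move together, though correlation alone does not prove causation:

```python
import statistics as st

# Hypothetical per-agent monthly figures (illustrative only).
engagement = [7.1, 6.4, 8.0, 5.9, 7.5]   # engagement survey score out of 10
csat       = [82, 75, 90, 70, 85]        # % satisfied customers
fcr        = [71, 68, 80, 60, 74]        # % first-call resolution

def pearson(xs, ys):
    """Population Pearson correlation coefficient between two series."""
    mx, my = st.mean(xs), st.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (st.pstdev(xs) * st.pstdev(ys) * len(xs))

r_eng_csat = pearson(engagement, csat)  # engagement vs customer satisfaction
r_eng_fcr = pearson(engagement, fcr)    # engagement vs first-call resolution
```

In this made-up sample both coefficients come out strongly positive, which is the kind of link between engagement, CSAT and FCR the article argues for.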
Pulling it all together
Measuring performance in parallel with employee engagement and customer satisfaction will make you feel like someone has switched on the light in a dark room.
Read Mats’ last article – what to measure and manage in your call centre.
Mats Rennstam is Managing Director at Bright UK Ltd (www.brightindex.co.uk)
Tel: 0208 892 95 30
Measuring CSAT for me is a given, and I also agree on employee engagement. There is one thing that bothers me in the metric: setting the correct goals. Having contact centers across EMEA, I expect cultural differences between respondents that will have a major impact on the high-level CSAT score. I also suspect the time of year has an impact on scores, but not equally across the different sites. I do not have research to validate that line of thinking, though. It is not feasible to set targets for every country, but if CSAT drops without the employee being able to influence it, it will have an adverse effect on employee morale and engagement. What do you think? How can you address that?
Referring to the above question: the key is to have a split in your customer satisfaction measure. A split is required to identify whether CSAT has dropped as a result of product or service delivery, as well as which channel was the root cause, e.g. internet, retail outlet, email or call center.
Customer satisfaction as we measure it is split into three categories: “exceeded customer expectation”, “met customer expectation” and “did not meet expectation”. The key with this measure is that one should continually look for ways to exceed the customer's expectation. For example, if I currently deliver a product in 3 days (the current expectation) and next month I deliver in 2 days, I am exceeding my customer's expectation now, but the customer will expect delivery in 2 days the following month. As a result, in order to meet CSAT targets it becomes a necessity that we continually strive to improve our products, processes and services.
The link between ESAT and CSAT is logical. Satisfied employees are well informed, happier and thus in a better position to deliver excellent service and go the extra mile.
I work at a call center that measures CSR performance primarily on customer surveys. I agree that it does motivate one to strive for FCR and excellent service! I love that aspect! The problem my peers and I have is that so many customers do not respond to the survey, which, of course, can cause some serious morale issues when negative responses are measured against positive responses and overall calls taken aren't considered. If I take 96 calls, 15 people answer the survey, and 2 of those people are not satisfied, my satisfaction rate is going to be 13% (unacceptable at my company).
Our questionnaire gives a yes/no option to “Did I solve your problem?” What do you think of the wording in this survey? There are times when CSRs truly cannot solve a customer's problem (such as weather-related delays or our own company's policies).
Thanks for taking the time to read a CSR’s point of view!
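The sample-size worry in the comment above is statistically well founded, whatever formula the company applies. A Wilson score interval (a standard statistical tool, not something from the article) shows how uncertain a satisfaction rate based on only 15 responses really is:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# The commenter's numbers: 15 responses, 13 of them satisfied.
lo, hi = wilson_interval(13, 15)
```

The interval spans more than 30 percentage points, so a single month's score from so few responses says very little about an individual agent, which supports the morale concern raised in the comment.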