How to Calculate Customer Effort

In this article we look at the best methods to measure and calculate customer effort.

Customer effort is an easy and cost-efficient metric to calculate in the call centre, but there are four different methods in common use for calculating a customer effort score.

So, which is the most effective?

1. The Customer Effort Score (CES)

Calculating a CES is the conventional method of measuring customer effort. However, over the past six years, the metric used to calculate it has evolved considerably. Here are three such evolutions that have been commonly used in the call centre.

A scale of 1-5

When the CES was first formulated in 2010, it was proposed that after an interaction between a customer and a call centre advisor, the advisor should ask: “How much effort did you personally have to put forth to handle your request, on a scale of 1-5?”

The scale would be measured so that:

– 1 = Very low effort
– 2 = Low effort
– 3 = Neutral
– 4 = High effort
– 5 = Very high effort

Once you have collected a significant number of responses, recording each customer’s answer to the query, you should add up all the customer effort scores and divide the total by the number of customers who responded.

This will give you an overall score between 1 and 5. The lower your score, the better.
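
To make the arithmetic concrete, here is a minimal Python sketch of this calculation; the function name and the response data are invented for the example:

    def customer_effort_score(responses):
        """Mean CES: the sum of all effort ratings divided by the number of respondents."""
        return sum(responses) / len(responses)

    # Hypothetical responses on the 1-5 scale (1 = very low effort, 5 = very high effort)
    scores = [1, 2, 2, 3, 1, 4, 2, 1]
    print(round(customer_effort_score(scores), 2))  # 2.0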

A scale of 1-7

Since 2010, the first method has evolved in a number of call centres, with slight changes to the wording of the question and the numbering of the scale.

Method two reflects this change, as the question in this version of the metric reads: “How easy was it for you to get your issue resolved fully, on a scale of 1-7?”

The scale for this method would be measured so that:

– 1 = Extremely easy
– 2 = Very easy
– 3 = Fairly easy
– 4 = Neither
– 5 = Fairly difficult
– 6 = Very difficult
– 7 = Extremely difficult

Then, once you have acquired a significant database of customer responses to the question, you should again add up all the customer effort scores and divide the total by the number of customers who responded.

This will give you a score between 1 and 7. Again, the lower your score, the better.
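
The customer_effort_score sketch above works unchanged for this variant; only the scale of the invented example responses differs:

    # Hypothetical responses on the 1-7 scale (1 = extremely easy, 7 = extremely difficult)
    scores_1_to_7 = [2, 3, 1, 5, 2, 4, 2]
    print(round(customer_effort_score(scores_1_to_7), 2))  # 2.71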

Easy, difficult or neither

The third evolution involves a similar question, which states: “How easy did (insert name of company CEO) make it for you to handle your issue today: easy, difficult or neither?”

This version of the CES, recommended by customerthermometer.com, uses a simple easy, difficult or neither scale, rather than the numbered scales of the previous two.

When the data is collected, you then simply subtract the percentage of people who said difficult from the percentage who said easy.

This will give you a Customer Effort Score between -100 and 100; the higher your score, the better.
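
As an illustration, here is a minimal Python sketch of this net calculation; the function name and responses are again invented:

    def net_effort_score(responses):
        """Percentage who said 'easy' minus percentage who said 'difficult' (-100 to 100)."""
        easy = sum(1 for r in responses if r == "easy")
        difficult = sum(1 for r in responses if r == "difficult")
        return 100 * (easy - difficult) / len(responses)

    # Hypothetical responses
    answers = ["easy", "easy", "neither", "difficult", "easy"]
    print(net_effort_score(answers))  # 40.0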

So, which evolution is the best?

Whilst there is no definitive answer as to which of the evolutions is best, we at Call Centre Helper would recommend the “easy, difficult or neither” method.

How Much Effort was the Task? “easy, difficult or neither”

Firstly, the question is simple, which maximises customer engagement with the query, whereas the question used on the “scale of 1-5”, with its confusing jargon, can leave the customer feeling alienated from the matter at hand.

The question also involves personalisation, which is a feature that we at Call Centre Helper would encourage you to adopt. This should again boost customer engagement with the query and increase response rates.

The scale is also refined, so that it only involves three possible responses: easy, difficult or neither. This is beneficial, as the question is easier to answer and you are not wasting anybody’s time in arriving at a conclusion. After all, it is customer effort that you are trying to reduce!

2. Net Easy

Instead of calculating a CES, BT employ a Net Easy Score, which is a hybrid metric that includes features of the “scale of 1-7” and “easy, difficult or neither” evolutions of the CES, noted above.

Net Easy involves advisors asking customers “how easy did you find it?” It uses the same 1-7 scale as the second evolution and the same net calculation as the third, with responses of 1-2 counted as easy, 3-4 as neutral and 5-7 as difficult.
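
Putting the 1-7 scale and the net calculation together, a minimal Python sketch of a Net Easy-style score might look like this (an illustration of the approach described above, not BT’s actual implementation):

    def net_easy_score(ratings):
        """Bucket 1-7 ratings (1-2 easy, 3-4 neutral, 5-7 difficult), then take
        the percentage easy minus the percentage difficult."""
        easy = sum(1 for r in ratings if r <= 2)
        difficult = sum(1 for r in ratings if r >= 5)
        return 100 * (easy - difficult) / len(ratings)

    # Hypothetical ratings (1 = extremely easy, 7 = extremely difficult)
    ratings = [1, 2, 4, 6, 2, 3, 7, 1]
    print(net_easy_score(ratings))  # 25.0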

However, Net Easy also asks why. This, according to Nicola Millard, Head of Customer Insights and Future at BT, is “the interesting bit.”

By asking why the customer has selected a particular number, BT are able to identify what customers are finding difficult about their processes, so they can target specific areas to improve. Also, BT can interpret how their customers have reacted to any new features or changes in company procedure.

Nicola says that by asking why and observing the change in the Net Easy score, “we [BT] can start to see the effects of things like maybe changing a training programme, changing the IVR, redesigning the website.”

Yet, if BT have not made any obvious changes to their processes and the Net Easy score changes, Nicola says that BT then “have to kind of try to figure out what’s driving it and whether it’s good or bad or whether it will readjust, because often it goes right back up again.”

3. Looking for Other Signals of Customer Effort (e.g. wait time and repeat calls)

When golfbreaks.com conduct their customer effort survey, they, like BT, ask two questions to formulate their CES.

Alex Mead, Chief Customer Officer at golfbreaks.com, told us that they ask two questions: “How satisfied are you with the solution we provided?” and “How happy are you with the service provided by our agent?” Both questions are measured on a scale of one to ten.

However, Alex continued by noting that: “We will already know how long we made a customer wait to get an answer, how many times they contacted us about an issue etc, so we shouldn’t ask them at that level.”

“Rather uniquely, though, we also ask our customers to ‘tick a box’ if they would like us to follow up on their feedback.”

Adopting this ‘tick a box’ procedure in your customer effort survey will allow you to discover why customers chose the score they did.

This will help you to identify any common causes for concern in your business processes and also identify the strengths of your organisation.

Also, the people providing you with feedback will have chosen to do so, meaning that your advisors will not be chasing additional feedback from unwilling customers.

4. Creating Individual Customer Effort Scores

A customer effort score can often be difficult to interpret, because it is hard to compare your score with those of other organisations when there are so many different methods with which to calculate it.

Michael Allen, Managing Director of Interactions247.com, recognises this, stating: “Looking for an overall customer effort score can be over-simplistic – you are never going to get (nor should you wish for) an NPS-type effort metric to compare to other organisations.”

Instead, Michael suggests taking “your top ten contact reasons by volume and look at the effort of each one, devising a score for each.”

This would be beneficial, as you would be able to identify which areas are causing the most effort, instead of making estimations from a general score.
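
A minimal Python sketch of that per-reason approach, assuming each survey response has been tagged with its contact reason (the reasons and scores here are invented):

    from collections import defaultdict

    def effort_by_contact_reason(responses):
        """Group (reason, score) pairs by contact reason and return the
        mean effort score per reason, highest-volume reasons first."""
        by_reason = defaultdict(list)
        for reason, score in responses:
            by_reason[reason].append(score)
        # Sort by volume, highest first, then average each reason's scores
        ranked = sorted(by_reason.items(), key=lambda kv: len(kv[1]), reverse=True)
        return {reason: sum(scores) / len(scores) for reason, scores in ranked}

    # Hypothetical tagged responses on a 1-5 effort scale
    data = [("billing", 4), ("billing", 5), ("password reset", 2),
            ("billing", 3), ("password reset", 1), ("delivery", 3)]
    print(effort_by_contact_reason(data))
    # {'billing': 4.0, 'password reset': 1.5, 'delivery': 3.0}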

What to Watch Out For

The pitfalls of the Customer Effort Score

Whilst there are many pros to using the CES metric over others, including that it is a better predictor of loyalty than the Net Promoter Score, there are a number of pitfalls that you should be wary of before you put it into practice. Here are seven such pitfalls.

1. External factors
One drawback of implementing the CES metric is that it does not factor in the influence that your competitors, pricing and product/service are having on your customers.

2. Calibration of the scale
Another pitfall lies in the nature of the question that is asked and the numbered scale that is used to calculate the CES, as both can be susceptible to inconsistent interpretation.

Firstly, if the customer’s query was simple to answer, the question can come across as asking whether the customer had done enough themselves before calling. Also, a scale in which the lowest number is the best outcome can cause confusion, unless the scale is well explained.

3. Push data vs pull data
All four methods of measuring customer effort, as noted previously, depend on push-based data, which increases the likelihood of obtaining imprecise results.

Ali Mazhar-Ul-Haq, Manager of Resourcing & Development at a Faysal Bank Contact Centre, is aware of this issue. Ali suggests that, “if you have developed a survey on IVR and your customers are required to fill out that survey at the end of the call, then it’s more likely that your customers will not follow the questions accurately and you may get inaccurate results.”

4. The bathtub curve
Another pitfall of the CES revolves around “The Bathtub Curve”, which describes how people tend to leave responses at the extreme ends of the query’s scale.

Gary Smith, Director of Product & Marketing at QPC, believes this is very “revealing” as “even when you do get that very small sample of results back it can be biased towards the extremes of any given situation.”

5. Impractical feedback
An additional drawback of CES results is that they do not deliver any feedback on what happened or why it happened. So, in some cases, there are no clear responses that can help you to improve your score.

6. Relying on customer interpretations of time effort
An additional pitfall of the four methods mentioned is that they do not fairly capture time effort. A journey that lasts one hour and one that lasts five hours are very different relative to each other, yet a customer could realistically score both as a five (the highest effort value on the scale).

7. Customer effort is always changing
The final downside is that customer effort changes at different times in the customer journey.

Emphasising this point, Gary Smith gives the example of a customer having to reverify their personal information. This example starts “when you phone a contact centre and you’re going through the IVR… and you have to put in your card number or mobile telephone number and you think ‘that’s a way of verifying me’… But as soon as you drop out and speak to the agent and they ask you for the same information… you know that those things increase your effort slightly.”

How People Manipulate the Customer Effort Score

When a metric starts to be viewed as important, there will be businesses who will start to manipulate their results, and with the CES there comes one common manipulation.

Formulating a Customer Effort Score revolves around the question that your advisors ask the customer. However, it is when an advisor asks the question that forms the basis of this manipulation.

If advisors only ever ask the question after a smooth interaction, you will obtain skewed results, and your CES will be much better than if you asked on random occasions.

Another example is telling advisors not to ask the question of a customer phoning for the fourth or fifth time about the same issue, as they are likely to be unhappy with the amount of effort they are exerting on the matter.

So, how do you measure customer effort? Do you use a different method from those stated above? Please put your answers in an email to Call Centre Helper.

Author: Robyn Coppell
Reviewed by: Megan Jones

Published On: 16th Nov 2016 - Last modified: 25th Apr 2024
Read more about - Customer Service Strategy

2 Comments
  • Very good overview for me. Our market has very polite customers who will tend towards giving you very good survey feedback out of courtesy. Validating Effort Scores by comparing them to our Repeat Call rates, First Contact Resolution and even our service level should help in getting to the real pulse of things.

    Adinor 17 Nov at 11:21
  • Currently experimenting with outbound surveys post-interaction – using both online self-completion approach as well as telephone interview approach … in this way we can establish if there are any score differences between the two approaches

    Scottio 18 Nov at 15:36