# How to Calculate a Customer Satisfaction Score (CSAT)

In this article, we investigate how a business can calculate a customer satisfaction score (CSat score) and gather the data needed to produce it.

We also discuss how important this metric really is, and give you some expert tips on creating a good customer satisfaction survey.

### How to Calculate Customer Satisfaction

The term CSat stands for Customer Satisfaction. CSat is probably the most widely recognised measure of customer sentiment used by businesses.

Yet, there is no universally recognised approach to measuring a CSat score.

Far from making this a useless metric, the lack of a specific definition means that businesses are able to tailor CSat measurements to the needs of their own business.

Here are four approaches that are commonly used by businesses of every size.

#### 1. Average Score

One method is to calculate the mean of all scores on a scale from 1 to 10.


Surveyors simply ask a question like “How satisfied were you with our service today?” The result is data that can easily be represented as a percentage.

| Respondent | Score (1–10) |
|------------|--------------|
| a          | 5            |
| b          | 7            |
| c          | 10           |
| d          | 3            |
| e          | 8            |
| f          | 6            |
| **Total**  | **39**       |

$$\text{CSat} = \frac{39}{6} \times 10 = 65\%$$
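The calculation above can be sketched in Python, using the example scores from the table:

```python
# Average-score CSat: mean of 1-10 scores, expressed as a percentage.
scores = [5, 7, 10, 3, 8, 6]  # example responses from the table above

csat = sum(scores) / len(scores) * 10  # mean score x 10 = percentage
print(f"CSat: {csat:.0f}%")  # CSat: 65%
```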

Unlike some other scoring systems, this approach does not need to group any scores, and so places a greater emphasis on score distribution. This helps project owners to investigate the common experiences of low-scoring customers, identifying key pain points.

Now that you know how to calculate customer satisfaction (CSat), read this article for 5 Great Methods to Improve Your Customer Satisfaction Score.

#### 2. Happy vs Unhappy

Perhaps the easiest way to generate CSat data is with a simple happy/neutral/unhappy question. The results produced by this method do not need a large amount of analysis, and surveyors have the option of following up by asking customers about what could improve the score.

This method also accounts for cultural differences better – research in Psychological Science has shown that respondents from individualistic cultures score more at the extremes, while respondents from other cultures score towards the middle.


This system doesn’t generate particularly nuanced data, but it can produce a quick health check on feelings towards a brand or service. Because it is so easy to link it to a graphical representation, it also reduces effort for the respondent.
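A minimal sketch of tallying happy/neutral/unhappy responses for a quick health check; the sample responses here are hypothetical:

```python
from collections import Counter

# Tally three-option responses and report the share of each option.
responses = ["happy", "happy", "neutral", "unhappy", "happy", "neutral"]  # hypothetical sample

counts = Counter(responses)
total = len(responses)
for option in ("happy", "neutral", "unhappy"):
    share = counts[option] / total * 100
    print(f"{option}: {share:.0f}%")
```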

#### 3. The Star Rating

Used by major services like Amazon and Netflix, the Star Rating system has the benefit of being familiar to customers. This familiarity means businesses can advertise the percentage of customers who gave them a five-star rating.

This makes simple visual feedback very easy to produce, as seen in Amazon product listings.
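As a sketch, the advertised five-star percentage might be computed like this (the ratings are hypothetical):

```python
# Star ratings: report the share of five-star reviews, as a business might advertise.
ratings = [5, 4, 5, 3, 5, 5, 2, 4]  # hypothetical 1-5 star ratings

five_star_share = ratings.count(5) / len(ratings) * 100
print(f"{five_star_share:.0f}% of customers gave us five stars")
```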

#### 4. The Net Promoter Score

Net Promoter Score (NPS) is another metric of customer experience, often thought of as the loyalty metric. Most enterprises calculate both NPS and CSat, often using different scales for each.

However, the standard methodology for NPS can also be applied to CSat for a more nuanced understanding.

With NPS, users are always surveyed on an 11-point scale from 0 to 10, with responses grouped as detractors, passives, or promoters.

Once the responses are grouped, the number of detractors is subtracted from the number of promoters, and the result is divided by the total number of respondents.

$$\text{NPS} = \frac{\text{Promoters} - \text{Detractors}}{\text{Total Respondents}} \times 100$$

Rather than a percentage, this calculation generates a whole number between -100 and +100.
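The grouping and calculation can be sketched in Python, assuming the standard NPS cutoffs (0–6 detractor, 7–8 passive, 9–10 promoter); the responses are hypothetical:

```python
# NPS: percentage of promoters minus percentage of detractors,
# using the standard cutoffs: 0-6 detractor, 7-8 passive, 9-10 promoter.
responses = [10, 9, 7, 6, 8, 3, 9, 10, 5, 7]  # hypothetical 0-10 scores

promoters = sum(1 for r in responses if r >= 9)
detractors = sum(1 for r in responses if r <= 6)
nps = round((promoters - detractors) / len(responses) * 100)
print(f"NPS: {nps:+d}")  # NPS: +10
```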

This kind of measurement raises the bar for satisfaction by accepting only the highest scores as satisfied. It also encompasses more than the binary idea of satisfied/unsatisfied.

For a more in-depth look at NPS, read this article for the three steps to calculate NPS, discussed in much more detail.

### Is It Important to Understand CSat?

Fundamentally, customer satisfaction is important for customer retention. The average company loses half of its customers every five years, leading to a need for constant replenishment.

According to Customer Experience author Jeff Toister, surveying CSat can limit customer loss by allowing enterprises to “…eliminate pain points and improve customer loyalty.

“You can learn how to serve your customers more efficiently without sacrificing quality, which cuts your costs. You can gain valuable insight to improve the quality of your product or service. The list of benefits goes on and on!”

So, providing good customer support is important – what about CSat as a metric?

For the contact centre and other customer-facing areas of business, CSat provides simple targets, and flags progress to management. As with any metric, it is the underlying data rather than the score that matters – but the score can be a useful way to communicate complex data quickly.

CSat surveys are also a good way to understand what customers value in a business. As customer survey expert Teresa Gandy put it, “Businesses don’t always know what it is that brings their customers back. That information is important, because it could be your marketing material, your hook for attracting new customers.

“What was the reason these guys came to you and stayed with you? We use surveys and Voice of the Customer feedback to find a lot of that information as well.”

Based on a combination of CSat score and NPS, customers can be segmented further. Some choose to do business with you through a sense of identifying with your brand, while others opportunistically respond to incentives like pricing.

Customers who score somewhat poorly are likely to leave, and when low satisfaction is coupled with a low NPS, customers may spread negative word of mouth.
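One way such segmentation might look in code; the labels and thresholds here are purely illustrative, not an established scheme:

```python
# Hypothetical two-metric segmentation: labels and thresholds are illustrative.
def segment(csat: float, nps_score: int) -> str:
    """Classify a customer by CSat percentage and individual 0-10 NPS-style score."""
    if csat >= 80 and nps_score >= 9:
        return "loyal fan"        # satisfied and likely to recommend
    if csat < 50 and nps_score <= 6:
        return "at risk"          # likely to leave, may spread negative word of mouth
    return "opportunistic"        # responsive to price and incentives

print(segment(90, 10))  # loyal fan
print(segment(40, 3))   # at risk
```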

Armed with this information, customer service and retention teams can do much more to tailor their communications and foster better outcomes.

### The Pitfalls of CSat Scores

There are also some common issues for businesses when measuring their CSat scores.

#### 1. Acting on Faulty Data

A basic principle for companies that want to solicit consumer feedback is “act on what you learn”. If you’re not changing your strategy based on new information, why even bother to ask?

But there is another side to this. How do you know that what you’ve learnt is relevant? While it’s a problem to ignore good feedback, it’s much worse to act on unrepresentative feedback.


Jeff Toister comments, “A lot of customer service leaders worry about their response rates, but I emphasize two things that are much more important.

“One is representation. Does your survey sample (the people who respond) fairly represent the customers you want feedback from?

“The second is usefulness. Do you get useful data that you can use to improve customer service?

“Here’s an example: Let’s say you want to survey customers who use a self-service feature on your website. If you only survey people who complete a certain transaction, you’ll completely miss anyone who tried to complete the function but gave up and tried calling customer service instead. So a great response rate doesn’t necessarily mean you’re getting the data you need to improve.”

To avoid falling into this trap, make sure you understand where your data is coming from, and whether there are any customer groups that it does not represent. Part of the core process around implementing change should be answering these two questions:

• Who is this going to benefit?
• How do we know that?

#### 2. Survey Fatigue

Between 1995 and 2015, the response rates for online surveys dropped from 20% to 2%.

That number includes non-targeted surveys, so it’s not a direct parallel with the way businesses survey their customers. Nonetheless, it does tell us something about public willingness to engage.

The average consumer is invited to participate in three surveys a week, and fatigue easily sets in.

Businesses need to have clear rules on the frequency with which they will survey a customer, and the amount of time they are going to ask respondents to give up.

“My rule of thumb is that no one customer will receive a survey more than once every three months, because it just becomes trash in the mailbox. How much will their opinion have changed in a month? They’re extremely unlikely to fill it in, and it actually impacts the customer experience, because they’re getting annoyed with you for sending surveys all the time,” said Teresa Gandy.

#### 3. Satisfaction Is Not Well Defined

What does a customer mean when they confirm they are satisfied? Does rating 5 out of 5 mean they are fully happy, or that they completely agree they’re quite happy?

This is largely an issue of how a business chooses to word its survey questions, and what they take away from the responses.

Teresa Gandy thinks satisfaction might not be an adequate target: “I have big issues with using the word ‘satisfaction’. Satisfied customers will go to the competition when there’s an offer on – loyal ones won’t.

“What you want in a business are your loyal customers, your raving fans, because they’re the ones who will stay with you and tell everyone they know to give you their business as well.”


High levels of satisfaction do correlate with customer loyalty, which is one of the most useful characteristics to measure. As this is only true of the most satisfied customers, an increasing CSat score does not necessarily tell a business much about their ability to increase retention.

#### 4. The Metric Is Hard to Benchmark

Businesses have to choose the scoring method that best fits their survey goals. If they handle scoring internally they might create their own system entirely from scratch; if they use an established survey company, that company will have its own preference.

In practical terms, this makes it impossible for businesses to compare their results against their peers. It can also make it challenging to communicate positive results to customers and stakeholders.

#### 5. Statistical Significance Can Change the Story

Without a very large surveyed group, individual surveys can have a big impact on scores. This might not be a problem for large companies that are able to poll thousands of people; for smaller companies, it could be a major difficulty.

This is really an issue with survey technique rather than the CSat metric. The area where this is most troubling is when customer feedback is used to measure individual members of staff. In theory, basing targets and bonuses on individual CSat seems very intuitive.

The problems arise when only a few customers respond. Scores which are fractionally different can have an impact on agent confidence, even though they may be random fluctuations.
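A rough illustration of why small samples fluctuate, using a normal-approximation margin of error (a simplification for the sake of example, not a full significance test):

```python
import math

# Approximate 95% margin of error for a "satisfied" proportion,
# using the normal approximation (rough for small n).
def margin_of_error(satisfied: int, total: int) -> float:
    p = satisfied / total
    return 1.96 * math.sqrt(p * (1 - p) / total) * 100  # in percentage points

# An agent surveyed 10 times vs a team surveyed 1,000 times, both at 80% satisfied.
print(f"n=10:   +/- {margin_of_error(8, 10):.0f} points")
print(f"n=1000: +/- {margin_of_error(800, 1000):.0f} points")
```

At ten responses, a single extra unhappy customer can swing an agent's score by multiple points of apparent "performance", which is why individual targets built on small samples are risky.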

### Is There a Best Way to Gather Survey Data?

Businesses have more options than ever before when it comes to surveying customers. Website, phone, IVR – there are numerous options with different benefits for cost and response rates.

Teresa Gandy told us, “The best way to get survey responses is mobile phones. Not SMS, but texting the link to the survey.

“Don’t email the link, because people open texts more than they open emails. They just press the link and your survey is there. So when people are sat on the bus, sat on the tube, watching TV in the evening, they hit one button to open the survey.”

### What Does a Good Survey Look Like?

Almost all CSat data is sourced from surveys, so making sure that surveys are fit for purpose is vital. These are a few of the most important considerations for putting together a survey:

• Keep It Short, Keep It Simple
• Use an Odd-Numbered Scale
• Include an N/A Option in the Answers
• Use Text Fields, but Don’t Make Them Mandatory
• Keep Your Surveys Consistent to Track Change

#### Keep It Short, Keep It Simple

The average customer will devote around 90 seconds to completing a survey. That’s enough for a general CSat question, a question about their specific experience, and an NPS question.

#### Use an Odd-Numbered Scale

Jeff Toister says, “A good survey should have an odd-numbered scale because an odd number of scale points gives participants a middle-ground or neutral response option. Believe it or not, there are things that many customers just don’t care about either way.

“Proponents of even-scaled survey questions like to remove the middle ground to force a customer to choose. That’s exactly why you shouldn’t do it! There’s no point in designing a survey that influences the customer to give anything other than their real feedback.”

#### Include an N/A Option in the Answers

A major annoyance for customers completing surveys is finding that they are obliged to answer questions which are not relevant to them.

Obviously, businesses should target their surveys to avoid irrelevant questions in the first place. But, as a contingency, customers need the option to select something like “I don’t know”.

#### Use Text Fields, but Don’t Make Them Mandatory

The information that customers volunteer in text fields is often the most useful. However, adding text should be optional, or else many customers will simply give up.

Bear in mind that ticked boxes are going to be very easy for you to process, whereas a large volume of text needs to be read through and understood.

“I generally recommend including an additional comments box, because if they do feel passionate and you haven’t given them the option, it’s going to annoy them. But you don’t need more than one comments box.

“If it’s a survey where they have included their name – you should always give the option of anonymity – and they’ve made a comment, feedback to them directly. That’s how you can turn an unhappy customer back into a loyal customer, listening to what they’ve said, and telling them how you’re going to deal with it.”

Teresa Gandy


#### Keep Your Surveys Consistent to Track Change

Surveys are almost always intended to track change over time. Teresa Gandy again: “If you’re changing your survey questions you’re not getting consistency. You might be able to add something in or tweak it slightly, but you’re not going to get long-term quality data if you keep changing what you’re asking.”

With thanks to:

Teresa Gandy, Managing Director at Clarity CX
Jeff Toister, Author of The Service Culture Handbook