Forget the old pen-and-paper method of finding out what your customers are thinking. Instead, listen to Martin Roberts as he investigates the role of post-call interactive voice response (IVR) surveys in aligning performance with customer satisfaction.
Do you know whether the agents in your contact centre who achieve the top performance scores are the same as those providing the highest levels of customer satisfaction? It seems an obvious question, but measuring this and resolving any discrepancies is a considerable challenge for any contact centre manager.
Many call centres measure quality, but this doesn’t always extend to incorporating customer feedback. Agents are traditionally evaluated using ‘hard’ statistical metrics such as the number of calls handled and average call duration. The problem is that these give no insight into how customers feel about speaking with your agents. Failure to bridge this gap can result in a contact centre that is distanced from its customers’ needs and expectations: one that rewards efficiency but fails to recognise effectiveness.
Is an efficient agent always an effective agent?
To offer an example, Alison (a regular customer) phones the contact centre on Monday morning and explains that there is a mistake on her latest invoice: she spent 350 but has been invoiced for 660. The agent – one of the highest scoring in the contact centre – apologises, confidently reassures her that it will be rectified immediately, and closes the call as a completed interaction.
What are post-call IVR surveys?
Post-call IVR surveys provide organisations with valuable customer insight that has previously been difficult to capture. This type of survey typically achieves response rates of up to 10% – considerably higher than traditional e-mail, post and telephone methods.
By introducing a mix of questions designed to evoke particular answers, organisations can use the data collected in these surveys to benefit the entire enterprise, from the call centre to marketing, research and development.
It is important, though, that organisations ensure the questions used to form the customer satisfaction survey reflect those set for individual agent assessments. By using the latest technology, organisations can combine the data captured in each survey and use it to visualise the bigger picture, to draw comparisons and to create foundations for organisational improvement.
A few days later, the customer calls again, speaks to an agent and advises that the problem has recurred. This time Alison is slightly more aggrieved. Once again the agent swiftly takes action and logs a successful interaction. However, the following Monday evening Alison spends 15 minutes waiting in the queue and is clearly very angry when speaking to an agent for the third time. The agent does complete the call – seemingly to the customer’s satisfaction. Yet within a fortnight, Alison cancels her contract and moves to a competitor.
Looking at the key performance indicators (KPIs), the story is summarised as agents diligently completing three inbound calls within the average call duration. There is nothing to explain why the customer decided to leave. Were the agents responsible? Did the software the agents used to make the correction fail? Why was the customer waiting in the queue for so long? Was there a problem with staffing on that particular shift? Is a process not working correctly? Or was it something totally unconnected?
My point is that performance needs to be aligned with customer satisfaction if you are to get a full picture of the contact centre’s service delivery.
The emergence of the post-call IVR survey
One tool in the customer satisfaction measurement armoury is the post-call survey. These are usually distributed by mail, e-mail or telephone some time after the call has taken place. Yet, because of the lag between the interaction and the request for feedback, such attempts often generate low to no response rates. Gradually, UK organisations – particularly in the financial services and insurance markets – are learning from the emergence of post-call IVR surveys in the US contact centre market.
Although the use of post-call IVR surveys is still in its infancy in the UK, uptake is growing, fuelled by increased response rates of up to 10% – considerably higher than the other more common methods of post-call survey.
A post-call IVR survey gives the customer an immediate opportunity to provide feedback regarding the interaction. Once a caller has finished speaking with an agent, they are invited to provide feedback by answering a few questions. These questions usually cover how long the caller waited before speaking to an agent, the greeting they received, how well the agent understood the enquiry, how the caller felt about the interaction, and whether the enquiry was resolved.
The beauty of post-call IVR is not only its immediacy – a customer is more likely to provide honest feedback if asked straight after the interaction – but also the flexibility of its implementation. It can be applied to all calls over a specific period, focused on a specific campaign, used with selected agents as part of their ongoing training programme, or linked to specific customer types, such as those of highest value to the company.
Aligning success with satisfaction
Completed post-call IVR surveys are valuable in their own right, giving an indication of trends in customer opinion. However, to derive the greatest benefit, these surveys should form part of the quality monitoring programme: only by bringing customer feedback into the equation is it possible to assess and align customer satisfaction with how the agent managed the interaction – reviewing and replaying the voice interactions, the process steps taken, the screens used, and the customer relationship management (CRM) data held about the customer.
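To illustrate the kind of comparison this alignment makes possible, the sketch below joins per-agent quality-monitoring scores with per-agent post-call IVR satisfaction ratings and flags agents who score well on efficiency metrics but poorly with customers. All agent names, scales and thresholds here are hypothetical, purely for illustration – not a description of any particular vendor’s system.

```python
# Illustrative sketch: aligning agent KPI scores with post-call IVR
# satisfaction ratings. All data, scales and thresholds are hypothetical.

# Average quality-monitoring KPI score per agent (0-100, assumed scale)
kpi_scores = {"agent_a": 92, "agent_b": 88, "agent_c": 71}

# Average post-call IVR survey satisfaction per agent (1-5, assumed scale)
survey_scores = {"agent_a": 2.1, "agent_b": 4.6, "agent_c": 4.2}

def misaligned_agents(kpi, surveys, kpi_min=85, sat_min=3.5):
    """Return agents whose KPI score meets the performance threshold
    but whose customer satisfaction rating falls below target."""
    return sorted(
        agent for agent in kpi
        if kpi[agent] >= kpi_min and surveys.get(agent, 0) < sat_min
    )

# agent_a is efficient on paper but leaves customers dissatisfied
print(misaligned_agents(kpi_scores, survey_scores))  # ['agent_a']
```

In practice the same comparison would be run over data drawn from the quality monitoring system and the IVR survey platform, but the principle is the same: neither data set alone reveals the mismatch.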
The case for post-call IVR surveys: Home Depot
One real-world example where post-call IVR surveys are delivering a compelling return on investment is the US company The Home Depot. Since implementing post-call IVR surveys, the company has reported savings of US$85,000 per year through improved survey processes. It has also seen survey responses increase by 400% to 4,500 completed surveys per month, and is now able to identify at-risk customers far earlier and pinpoint the improvements needed to raise overall customer satisfaction.
Making post-call IVR feedback surveys an integral part of the contact centre’s operations enables performance to be aligned with customer satisfaction and supports more informed decision-making and training (with agents focused on delivering efficient and effective customer service) while rewarding agent success accordingly.
This wealth of insight facilitates more informed decision-making about the training, processes, technology and perhaps broader product or organisational issues that are needed to improve both performance and customer satisfaction in the contact centre and enterprise as a whole.
Putting it into practice
In the example used earlier, it is highly likely that the agents who managed these three calls with Alison will have scored highly on their KPIs. But this time let’s revisit the interactions with a post-call IVR survey in place. We immediately learn that Alison was invited to submit feedback on her third call. She accepted, and her comments were negative. Armed with this information, the team leader retrieves the interactions from the archive and replays all three original agent/customer conversations in their entirety. Upon review it is clear that being kept waiting and the failure to resolve the issue first time were the main reasons for complaint.
Prompted by the feedback, the rota for the Monday evening shift is reviewed and an under-staffing problem is uncovered that is causing longer-than-usual wait times. This is clearly not the fault of the agent.
However, the team leader is also able to use screen capture technology to review the steps the agent took on-screen to revise the invoice. It is clear that the agent is following the wrong process. The team leader reviews a sample of calls from the same agent over the past three weeks and discovers that this has happened on a number of occasions. Taking swift action, the leader schedules a number of proactive calls to apologise to each affected customer.
Upon speaking with other agents it becomes clear that the system being used is causing confusion. The team leader sets aside some time during each agent’s shift to provide a refresher course. Within one week the problem is eliminated and, as a result, a small decrease in call duration is reported.
Martin Roberts, vice president, marketing business development at NICE Systems
Tel: + 44 8707 22 4000