Recorded Webinar: 7 Ideas to Improve Your Quality Management


Click here to view the replay

In this webinar we looked at practical ideas on how to improve your quality management programme to drive bottom-line results.

Panellists

  • Introductions – Jonty Pearce, Call Centre Helper
  • Duncan White – horizon2

Click here to view the slides

  • Frank Sherlock – CallMiner

Click here to view the slides

Topics discussed

  • Quality monitoring
  • Positively changing advisor behaviours
  • Coaching
  • Identifying poor performers, best practices and recurring problems
  • Stopping quality becoming a box-ticking exercise
  • Speech analytics
  • Quality management technology (QMT)
  • Top tips from the audience
  • Winning tip – “It’s incredibly important to encourage self-assessment. It’s easy to tell people what they’re doing wrong and most likely they are not going to improve. Give your agents the chance to evaluate their calls/livechats and they will be able to find their flaws and will be more likely to try to rectify them in the future. They will be more aware of how to improve if they evaluate themselves. That’s the way to make them really focus and experience the customer service they provide from the outside point of view. Regular and constructive feedback sessions are key as well. Don’t just tell your agents what they’re doing wrong, make them understand WHY and HOW to develop and change. If they see that you’re passionate about the customer service and their personal improvement they are more likely to take to heart everything you’ve said. It’s simple but unfortunately a lot of people forget about it nowadays.” Thanks to Tom26

1. We’re looking to rebrand our QA department… what do you call your Quality Team?

I think this was a generic question for the audience; however, I have seen names such as Customer Assurance Team, Compliance and Quality Team, and Service Assurance Team used.

2. Are speech analytics able to bring ROI for all sizes of contact centre? For instance, would a 15-seater reap similar benefits and ROI to a 200-seater?

The ROI does tend to differ in smaller contact centres; for example, savings in the number of people involved in quality monitoring and agent supervision, and the impact of analytics on improved business outcomes, are all going to be smaller. That being said, at CallMiner we have partners who specialise in smaller contact centres and can put in place commercial models, either software- or service-based, that may be attractive to these contact centres. Anybody interested should contact frank.sherlock@callminer.com for further information.

3. Do your Supervisors evaluate calls as well, and if so, do those scores count towards the Supervisor’s personal KPIs?

I don’t think we can answer this generically, but we would advocate supervisors spending more time coaching agents and less time listening to calls. Let technology do the heavy lifting of call listening, use the supervisors to turn the output into holistic and focussed coaching for the agents, and change the supervisors’ KPIs to measure coaching sessions and the effect of coaching.

4. How are you getting Directors to sign off the QA resource/time? As a Contact Centre Manager I find it hard to get buy-in, and therefore cost sign-off, for adequate QA resource.

This can vary. For example, in heavily regulated environments, where a failure of compliance can have a stark business impact, the costs of QA are easier to bear as they are a necessary safeguard. I would say organisations need to understand the impact of absent or ineffective assurance: it leads to negative agent engagement, which has a direct impact upon CX, and opportunities for improving the operational cost of the contact centre or driving improved business outcomes are lost. Analytics will help evidence the link between the three positive outcomes of experience, effectiveness and efficiency, and provide a base from which to build a business case around QA.

5. What is the minimum % of calls/emails to monitor to ensure you are getting a fair, accurate score/measure? (So if they took 100 calls, how many would you monitor? And do you then scale those volumes up?)

This is dependent on the time frames you are looking to cover and the size of the evaluation scale you are using. As per the example, using a ten-point scale would require 7.7%, or 8 of the 100 calls, to give you a ±10% margin of error.
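As a rough illustration only, a standard sample-size calculation with a finite population correction lands in the same ballpark. The assumptions below (95% confidence, standard deviation approximated as scale range/6, margin of error of ±10% of the scale) are ours, not the panellists’:

```python
# A rough sketch, not the panellists' exact method: 95% confidence (z = 1.96),
# standard deviation approximated as scale range / 6 (the "range rule"),
# margin of error of +/-10% of the scale, with a finite population correction.
def sample_size(population, scale_points, margin_frac=0.10, z=1.96):
    sigma = (scale_points - 1) / 6           # assumed spread of scores
    e = margin_frac * scale_points           # +/-1 point on a ten-point scale
    n0 = (z * sigma / e) ** 2                # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)     # finite population correction
    return round(n)

print(sample_size(population=100, scale_points=10))  # -> 8 of 100 calls
```

On scaling up: for 1,000 calls the same formula returns a similar absolute sample but a far smaller percentage, which is why fixed-percentage monitoring targets tend to over-sample large operations.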

6. Any information on costs associated with improving our company’s Quality framework?

This really depends on a number of factors; we would be happy to discuss the particular circumstances of the person who raised the question and assess their needs and costs. A well-structured, analytics-led readback exercise to inform this debate would be circa £20k–£30k.

7. Does anyone combine their voice QA with their non-voice QA to give the agent an overall average/%?

We do this in CallMiner against different aspects of agent quality, such as satisfaction. We measure satisfaction across all the channels, identify the score against the attribute for the individual channels, and produce a weighted score for the average. Analytics technology allows you to achieve this at scale, so the scoring will be both statistically and directionally accurate. Look at this from the customer’s perspective: they interact with you on different channels, and they expect the experience on all channels to be measured.
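As a minimal sketch of the weighting idea, assuming invented channel names, scores and volumes (this is not CallMiner’s output format):

```python
# Hypothetical per-channel satisfaction scores and volume-based weights;
# the channel names and numbers are illustrative, not CallMiner's output.
channel_scores = {"voice": 8.2, "email": 7.5, "chat": 7.9}
channel_volumes = {"voice": 1200, "email": 400, "chat": 600}

total_volume = sum(channel_volumes.values())
overall = sum(channel_scores[c] * channel_volumes[c] for c in channel_scores) / total_volume
print(f"Weighted overall satisfaction: {overall:.2f}")  # -> 7.99
```

Weighting by volume means the overall figure reflects what most customers actually experienced, rather than treating a low-traffic channel as equal to the busiest one.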

8. Does anyone use alternatives to scoring agents? If so, what does a good call look like?

You can score by team and use external scoring (feedback etc.) to provide individual coaching, but at the end of the day conversations are individual experiences, so they tend to need to be evaluated in some way at that level.

9. Does anyone use any specific software for monitoring?

Try using CallMiner analytics; we would be happy to set up a demo for anyone interested!

10. How would you sell in a QA Process to the client and justify the ROI?

The ROI falls into a number of main buckets: efficiency, effectiveness and CX. If a client understands how to drive a better customer experience at lower or the same cost, and to ensure that internal processes and people can be optimised and improved, that justifies the QA investment.

11. Impressed by 100% coverage. Does CallMiner have any metadata that allows tying back interactions to customer post-contact surveys?

Yes, you can add metadata such as survey scores to contact centre metadata and analyse across any dimension. For example, compare call quality scores to post-contact surveys, identify specific patterns of call dynamics that lead to high (or low) survey scores, and use this feedback to improve products, people or processes.
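As a sketch of the idea, tying surveys back to interactions is essentially a join on a shared identifier. The column names and join key below are assumptions for illustration, not CallMiner’s actual metadata schema:

```python
import pandas as pd

# Illustrative only: the column names and join key are assumptions,
# not CallMiner's actual metadata schema.
calls = pd.DataFrame({
    "interaction_id": [101, 102, 103],
    "quality_score": [82, 64, 91],
    "silence_pct": [0.12, 0.31, 0.08],
})
surveys = pd.DataFrame({
    "interaction_id": [101, 103],
    "survey_score": [9, 10],
})

# Tie each post-contact survey back to its interaction via shared metadata,
# then look for call dynamics that separate high and low survey scores.
joined = calls.merge(surveys, on="interaction_id", how="left")
print(joined)
```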

12. Do you think there is any value in QA if you don’t have the ability to record calls? E.g. scoring the content of the ticket without the conversation.

From a technology perspective, you can use in-call (or real-time) analytics when you have no recording of calls. Real-time, or live, analytics generally tends to be focussed on providing guidance, reminders and next-best-action prompts to agents whilst they are talking to consumers. You would not use real-time analytics to score the call, but in the process of integrating live analytics you will put in place an audio acquisition solution on which calls can be recorded, should you wish, and analysed post-call for the production of scorecards. Alternatively, there are many economical cloud-based recording systems on the market today if you wish to acquire a call recording solution independent of analytics. Finally, a QA programme could combine internal and external data sources alone to evaluate, without relying on recordings; however, we would not recommend this, as it could get complex, with data aggregation, acquisition and error factors to be considered.

13. How many items is it ideal to evaluate?

We would advocate evaluating all calls, using technology, but it also depends on how many items have an impact on outcomes. Factor analysis, for example, can identify those items that consistently behave in the same way, allowing the final evaluation form to be much simplified.
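To make the factor-analysis point concrete, here is a minimal sketch on synthetic scorecard data; the dataset and the two-factor structure are assumptions for illustration:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic scorecard data for illustration: 500 evaluated calls scored on
# 8 items, generated from 2 underlying "skills" plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))
loadings = rng.normal(size=(2, 8))
items = latent @ loadings + rng.normal(scale=0.5, size=(500, 8))

fa = FactorAnalysis(n_components=2).fit(items)

# Items whose loadings move together are candidates to merge or drop,
# shortening the evaluation form without losing signal.
print(np.round(fa.components_, 2))
```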

14. We have a small call centre and we have tried the quality team doing the coaching however found that this didn’t really work – agents did not improve due to lack of buy-in. Is there a specific coaching model that could assist in this?

It’s a question of judgement. If agents are being sampled at a low level of calls, manually and subjectively, it does not matter who is doing the coaching: there will be resistance. If, however, all calls are being measured, the measures are consistent across all agents, and the output is truly representative in identifying the skills and coaching needs of individual agents (which is what technology can deliver for you), agent buy-in will be much higher in our experience. If you can evidence the link between coaching and improved outcomes that benefit the agent as well as the customer, then this can help buy-in.

15. Weight by agent – does this mean create custom QA forms per agent?

No, it means you might sample more calls from agents who have historically been the poorer performers and fewer from the better performers, i.e. weight the sample based on performance.
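A minimal sketch of performance-weighted sampling, assuming hypothetical agents and historical scores:

```python
import random

# Hypothetical agents and historical QA scores (lower score = weaker performer).
agent_scores = {"alice": 92, "bob": 71, "carol": 85, "dave": 60}

# Weight the sample toward weaker performers: weight = 100 - historical score.
agents = list(agent_scores)
weights = [100 - s for s in agent_scores.values()]

# Draw 20 calls to evaluate; weaker performers come up more often.
sample = random.choices(agents, weights=weights, k=20)
print({a: sample.count(a) for a in agents})
```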

16. What would be a best-practice number of evaluations?

See above: it depends on call volumes, durations, time periods and evaluation scales, among other things. Generally the more the better, which is why analytics technology should be considered.

17. Which conversation do you consider the most important to evaluate? The initial contact, or the second, third and follow-up contacts?

A cross-section of all of them, if you’re looking to monitor overall performance, and calls should be sampled randomly for each agent. If you have a specific question to answer, e.g. what’s driving repeat contact, then it makes sense to evaluate those calls that might support that analysis.


This webinar was brought to you by Call Centre Helper and is sponsored by CallMiner.






 