Charles Watson takes us through the do’s and don’ts of contact centre forecasting.
Forecasting is where it all starts. Everything that happens in a contact centre related to service levels, occupancy, average speed of answer (ASA), and abandon rates can be traced back to how accurate or actionable a forecast was.
There are a lot of views on how best to measure forecasting accuracy. Often the reasons given for the various views are related to the industry the contact centre is in, the types of contacts forecast, or the availability of good data to use in a forecast.
In reality, most of the core principles apply to all contact centres. You must move past the “but we’re different because” trap that so many organisations fall into. You should anchor to a common set of principles.
This also makes it easier to benchmark against other organisations and to onboard new talent, because the forecasting role becomes more interchangeable.
Let’s start with what NOT to do:
1. Do not set a target because a leader says “Well, we had 98% accuracy at my last contact centre”
Even as I write this, that statement makes me cringe, as I’m sure it does to many of you who get measured based on statements like this.
Unless you define the lead time (e.g. the forecast was published 90 days out), the forecast metric (was it calls, workload, FTE?), and the frequency (daily, weekly, monthly?), the statement means nothing.
What this statement is actually saying is that the leader isn’t comfortable they are getting a quality output and feels the need to set a target to drive improvement.
2. Do not use “average” as your measurement
If I put my head in the freezer and my feet in the oven, “on average, I feel fine”.
Averages can be very misleading: they can make you think you're doing a great job while failing to give the business the information it needs.
Averages are simple to communicate and easy to aggregate, so it’s understandable why people go this route.
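To see how an average can flatter a poor forecast, here is a minimal sketch with made-up daily volumes. Over- and under-forecasts cancel out in the average signed error, while the mean absolute percentage error (MAPE) exposes the real day-to-day misses:

```python
# Hypothetical daily forecast vs actual volumes (made-up numbers).
forecast = [1000, 1000, 1000, 1000, 1000]
actual   = [1200,  800, 1150,  850, 1000]  # big swings in both directions

# Signed percentage error per day, relative to the forecast.
signed = [(a - f) / f for f, a in zip(forecast, actual)]
# Absolute percentage error per day.
absolute = [abs(e) for e in signed]

avg_signed = sum(signed) / len(signed)    # over/under days cancel out
mape = sum(absolute) / len(absolute)      # every miss counts

print(f"Average signed error: {avg_signed:.1%}")  # 0.0% -- looks perfect
print(f"MAPE:                 {mape:.1%}")        # 14.0% -- the real story
```

The "average" suggests a flawless forecast, yet the centre was mis-staffed by 15–20% on four of the five days.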
3. Do not produce a forecast that’s “technically accurate” but unusable
What does this mean? It means your science and measurements are rock solid, but your audience can't interpret them well enough to make business decisions.
This is probably the biggest challenge forecasters face. Your job is a balance between having an accurate forecast and having a useful forecast.
The contact centre is better off with a slightly less accurate forecast it understands than a highly accurate one it can't use.
Okay, now that we have all the negativity out of our system, let’s talk about what you should do!
1. Always start with a baseline forecast
This should be simply taking your actual historical data and using it to project the future. This is clean, simple, and tells you what your path is likely to be with no changes.
This also gives you something to track back to. As you start layering in future changes (e.g. new clients, changes in productivity, other business intelligence), it’s very easy to lose sight of how each of these inputs changes the forecast.
To do a proper variance analysis, you need to be able to trace each variance back to the specific input or assumption that produced it.
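One simple way to keep that traceability is to hold the baseline and each business adjustment as separate, named layers rather than a single blended number. A minimal sketch (all volumes and adjustment names are hypothetical):

```python
# Baseline: project forward from actual history with no changes assumed.
weekly_actuals = [5200, 5350, 5100, 5400]             # last four weeks (made up)
baseline = sum(weekly_actuals) / len(weekly_actuals)  # naive "no changes" view

# Each piece of business intelligence is a named layer, so its
# contribution to the final number stays visible for variance analysis.
adjustments = {
    "new_client_onboarding": +400,  # assumption: extra contacts from a new client
    "ivr_deflection":        -150,  # assumption: fewer calls after an IVR change
}

forecast = baseline + sum(adjustments.values())

print(f"Baseline: {baseline:.0f}")
for name, delta in adjustments.items():
    print(f"  {name}: {delta:+.0f}")
print(f"Forecast: {forecast:.0f}")
```

When actuals land, you can compare them against the baseline first, then test each layer in turn, instead of guessing which input drove the miss.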
2. Always leverage technology for your forecast
Sorry, Excel just isn't good enough any more. Yes, it's readily available, and you know how to manipulate the data. You probably have tons of workbooks all linked together in your treasure trove of spreadsheets.
Even if Excel can technically get you through the day, as contact centres (and customer preferences for how they contact you) evolve, you are going to have a lot of new inputs that can’t be modelled the same way as call volumes.
Additionally, Excel makes it incredibly difficult to communicate your results in a simple, clear way.
3. Always provide an expectation of accuracy
When you see maps tracking a hurricane, the projections get wider as the timeframe progresses. They know where it is today. Forecasters have a small margin of error when projecting where it will be tomorrow, and that “cone of uncertainty” grows as you project multiple days out.
It is completely reasonable that a forecast you produce 90 days out (perhaps for hiring decisions) is less accurate than the one you produce 30 days out (for making staffing adjustments).
Be open and transparent about this reality.
This will buy you credibility, and it starts to move people away from thinking about forecasting as one accuracy number (remember the “98% accuracy” mentioned above?).
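One practical way to set that expectation is to quote a different accuracy band for each lead time, based on your own error history. A sketch with made-up historical absolute percentage errors:

```python
# Hypothetical historical absolute % errors, grouped by how far out
# the forecast was published (all numbers are illustrative).
error_history = {
    90: [0.18, 0.22, 0.15, 0.25],  # 90 days out: hiring decisions
    30: [0.08, 0.11, 0.09, 0.12],  # 30 days out: staffing adjustments
    7:  [0.03, 0.05, 0.04, 0.04],  #  7 days out: schedule tweaks
}

# Publish the expected band alongside each forecast, widest first --
# the contact-centre equivalent of the hurricane's cone of uncertainty.
for days_out, errors in sorted(error_history.items(), reverse=True):
    expected = sum(errors) / len(errors)
    print(f"{days_out:>2} days out: expect roughly +/-{expected:.0%}")
```

Stated this way, "98% accuracy" stops being one magic number and becomes a curve that narrows as the date approaches.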
Once you have a forecast, it’s important to share it with members of your operations or leadership group. Get their thoughts and feedback.
Forecasting is a very collaborative process that is based on both science and art.
Science is the easy part, because it’s process and measures. Art is the challenging part, because it’s adding in subjective aspects, such as someone’s opinion about whether customers may contact you more frequently due to a product change.
Bring these people into the process and actively engage them (and let them take some accountability with you!).
These relationships can also help to poke holes into your methodologies or outcomes in a safe, constructive environment.
If the first time they see the forecast is in a large group with their leadership, they are much more likely to challenge or attack the results. If they’ve already seen them, even if they don’t like the numbers, you know what to expect and you can highlight that the concern has been raised and is being (or has been) addressed.
Never catch people off-guard in a public forum with your forecast.
So, what are your next steps?
Look at your forecasting process today:
- Is it a one-way communication, or collaborative?
- Do you heavily rely on Excel, or do you have a forecasting tool to support you?
- Are you measuring your variance with a simple average, or with a measure of spread such as standard deviation?
- How open is your environment, both technically and operationally, to making changes?
Talk to your leadership about how you can make your forecasting more accurate and, most importantly, start challenging the status quo.
This blog post has been re-published by kind permission of injixo.