In this blog, Brent Haferkamp at NICE explores how forecasting is part art and part science.
Political scientist Philip Tetlock spent nearly 20 years asking experts about their predictions for political outcomes, and what he found “mildly traumatized” pundits, according to The Economist: The predictions made by the group of mostly political scientists and economists he queried were only marginally more accurate than random guesses.
His findings illustrate what many a workforce manager already knows: Expert judgment is an important part of forecasting, but it can also be an area of substantial risk to the forecasting process. Forecasting schedules to eliminate under- and over-staffing is both an art and a science.
The “art” consists of the judgmental or qualitative aspect of forecasting, relying on the expert opinion of the forecaster and others, while the “science” harnesses quantitative methods, leveraging historical data and statistical techniques, through forecasting software.
Forecasting requires both accuracy and deep knowledge of the contact centre environment. Although forecast accuracy is clearly linked to improved customer service, many organisations fail to effectively measure how close their forecasted needs are to actual intraday requirements.
One study found that nearly one in five contact centres fails to measure forecast accuracy, and nearly 40% of contact centres that do measure accuracy have a variance of 6 to 20% in either direction.
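Measuring that variance need not be complicated. The sketch below shows one simple way to express forecast accuracy as a signed percentage variance per interval, in the spirit of the 6 to 20% figure above; the interval volumes and the 5% threshold are purely illustrative, not from any particular study or product.

```python
# Minimal sketch: signed percent variance of forecast vs. actual contact
# volume, per intraday interval. All numbers are illustrative.

def percent_variance(forecast, actual):
    """Signed variance of forecast vs. actual, as a percentage of actual."""
    return (forecast - actual) / actual * 100

# Hypothetical half-hour intervals: (forecast, actual) contact volumes
intervals = [(120, 110), (95, 100), (140, 150), (80, 78)]

for f, a in intervals:
    v = percent_variance(f, a)
    flag = "within 5%" if abs(v) <= 5 else "outside 5%"
    print(f"forecast={f} actual={a} variance={v:+.1f}% ({flag})")
```

Tracking the variance per interval, rather than only per day, is what exposes intraday under- and over-staffing.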
As a rule of thumb, a forecaster should always try to apply some quantitative technique – any quantitative technique – before relying solely on expert judgment.
Methods for forecasting are numerous, and the choice alone of which to use can be overwhelming. Modern, AI-driven platforms test multiple candidate models against your data and identify the best fit, helping your contact centre adapt to changing staffing requirements.
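The idea behind best-fit selection can be sketched in a few lines: fit several simple methods to historical volumes and keep whichever produces the lowest error on a holdout period. This is an illustrative toy, not NICE's actual algorithm; the three methods, the data, and the use of mean absolute error are all assumptions for the example.

```python
# Illustrative "best-fit" selection: score several simple forecasting
# methods by one-step-ahead mean absolute error (MAE) on a holdout.

def naive(history):                      # forecast = last observed value
    return history[-1]

def moving_average(history, window=3):   # forecast = mean of last `window` points
    return sum(history[-window:]) / window

def exp_smoothing(history, alpha=0.5):   # simple exponential smoothing
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def best_fit(history, holdout, models):
    """Return (winning model name, MAE per model) over the holdout period."""
    scores = {}
    for name, model in models.items():
        errors, hist = [], list(history)
        for actual in holdout:
            errors.append(abs(model(hist) - actual))
            hist.append(actual)          # roll the history forward
        scores[name] = sum(errors) / len(errors)
    return min(scores, key=scores.get), scores

models = {"naive": naive, "moving_average": moving_average,
          "exp_smoothing": exp_smoothing}
history = [100, 104, 99, 110, 108, 115, 112]   # hypothetical daily volumes
holdout = [118, 116, 121]
winner, scores = best_fit(history, holdout, models)
print(winner, scores)
```

Commercial platforms evaluate far richer model families, but the principle is the same: let measured error, not habit, pick the model.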
Tetlock’s work to improve forecasting accuracy didn’t stop with that group of experts, and his later research helped prompt the Intelligence Advanced Research Projects Agency to hold a forecasting tournament to see whether competition could lead to better predictions.
Five teams entered the competition, including one led by Tetlock and his wife, the decision scientist Barbara Mellers. Tetlock and Mellers’ team demonstrated the ability to generate increasingly accurate forecasts that exceeded “even some of the most optimistic estimates at the beginning of the tournament,” according to The Washington Post.
They did so, in part, by identifying people who are better at making predictions, grouping these “superforecasters” into concentrated teams, then constantly fine-tuning the algorithms used to combine individual predictions into a collective prediction – once again, illustrating the importance of both the art and the science of forecasting for greater accuracy.
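One simple way to combine individual predictions into a collective one is an accuracy-weighted average, where historically better forecasters count for more. This toy sketch is not the tournament's actual aggregation algorithm, and every number in it is invented for illustration.

```python
# Toy aggregation of individual probability forecasts into a collective
# one, weighting each forecaster by the inverse of their past error.

def aggregate(predictions, past_errors):
    """Weighted mean; weight = 1 / forecaster's past error (lower error, more weight)."""
    weights = [1 / e for e in past_errors]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, predictions)) / total

# Three forecasters' probability estimates for the same event
predictions = [0.70, 0.55, 0.80]
past_errors = [0.10, 0.25, 0.15]   # lower = historically more accurate

print(round(aggregate(predictions, past_errors), 3))   # prints 0.703
```

Note how the collective estimate is pulled towards the more accurate forecasters rather than the simple mean (0.683) – the "science" half of combining expert judgment.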
This blog post has been re-published by kind permission of NICE – View the original post
To find out more about NICE, visit their website.
Call Centre Helper is not responsible for the content of these guest blog posts. The opinions expressed in this article are those of the author, and do not necessarily reflect those of Call Centre Helper.