Forecasting is Part Art, Part Science

This is the first in a three-part blog series focused on forecasting in the contact center.

Political scientist Philip Tetlock spent nearly 20 years asking experts to predict political outcomes, and his findings "mildly traumatized" pundits, according to The Economist: The predictions made by the group of mostly political scientists and economists he queried were only marginally more accurate than random guesses.

His findings illustrate what many a workforce manager already knows: Expert judgment is an important part of forecasting, but it can also be a source of substantial risk to the forecasting process. Forecasting staffing requirements to eliminate under- and overstaffing is both an art and a science.

The “art” is the judgmental, qualitative side of forecasting, which relies on the expert opinion of the forecaster and others; the “science” is the quantitative side, which uses forecasting software to apply statistical techniques to historical data.

Forecasting requires both accuracy and deep knowledge of the contact center environment. Although forecast accuracy is clearly linked to improved customer service, many organizations fail to measure how close their forecasted needs come to actual intraday requirements. One study found that nearly one in five contact centers fails to measure forecast accuracy at all, and nearly 40% of those that do measure it see a variance of 6 to 20% in either direction.
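To make that measurement concrete, here is a minimal Python sketch of one common accuracy metric, mean absolute percentage error (MAPE), computed across intraday intervals. The function name and sample volumes are illustrative assumptions, not part of any particular WFM product:

```python
def interval_mape(forecast, actual):
    """Mean absolute percentage error across intraday intervals.

    forecast, actual: equal-length lists of contact volumes per interval.
    """
    if len(forecast) != len(actual):
        raise ValueError("forecast and actual must cover the same intervals")
    errors = [
        abs(f - a) / a            # percentage error for one interval
        for f, a in zip(forecast, actual)
        if a > 0                  # skip intervals with no actual contacts
    ]
    return 100 * sum(errors) / len(errors)

# Hypothetical half-hour interval volumes for part of a day
forecast = [120, 135, 150, 160, 140]
actual   = [110, 140, 165, 150, 130]
print(f"MAPE: {interval_mape(forecast, actual):.1f}%")  # ~7.2%
```

A center reporting this per interval, rather than only per day, sees exactly where the forecast drifts from intraday reality.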

As a rule of thumb, a forecaster should always try to apply some quantitative technique, any quantitative technique, before relying solely on expert judgment. Forecasting methods are numerous, and the choice of which to use can itself be overwhelming. Modern, AI-driven platforms simulate multiple candidate models and identify the best fit, helping your contact center adapt to changing staffing requirements.
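As a simple illustration of best-fit selection, the sketch below holds out recent data, scores a few deliberately naive candidate models, and keeps the one with the lowest error. The candidate models and sample data are placeholders for illustration only; they are not NICE WFM's actual algorithms:

```python
# Sketch: pick a best-fit model by holdout error.

def naive_last(history, horizon):
    # Repeat the most recent observation.
    return [history[-1]] * horizon

def moving_average(history, horizon, window=4):
    # Repeat the mean of the last `window` observations.
    avg = sum(history[-window:]) / window
    return [avg] * horizon

def seasonal_naive(history, horizon, season=7):
    # Repeat the value from one season (e.g., one week) ago.
    return [history[-season + i % season] for i in range(horizon)]

def mae(forecast, actual):
    # Mean absolute error over the holdout period.
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

def best_fit(history, holdout=7):
    train, test = history[:-holdout], history[-holdout:]
    candidates = {
        "naive": naive_last,
        "moving average": moving_average,
        "seasonal naive": seasonal_naive,
    }
    scores = {name: mae(model(train, holdout), test)
              for name, model in candidates.items()}
    return min(scores, key=scores.get), scores

# Four weeks of hypothetical daily contact volumes with a weekly pattern
history = [500, 520, 510, 480, 450, 300, 280] * 4
winner, scores = best_fit(history)
print(winner, scores)  # seasonal naive wins on this strongly weekly data
```

Even this crude procedure gives the forecaster an evidence-based starting point that judgment can then refine, rather than replace.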

Tetlock’s work to improve forecasting accuracy didn’t stop with that group of experts, and his later research helped prompt the Intelligence Advanced Research Projects Agency to hold a forecasting tournament to see whether competition could lead to better predictions. Five teams entered the competition, including one led by Tetlock and his wife, the decision scientist Barbara Mellers. Tetlock and Mellers’ team demonstrated the ability to generate increasingly accurate forecasts that exceeded “even some of the most optimistic estimates at the beginning of the tournament,” according to The Washington Post. They did so, in part, by identifying people who were better at making predictions, grouping these “superforecasters” into concentrated teams, and constantly fine-tuning the algorithms used to combine individual predictions into a collective one – once again illustrating the importance of both the art and the science of forecasting for greater accuracy.
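For illustration, here is one way individual predictions can be pooled into a collective forecast: weight each forecaster by track record, take the weighted mean, and "extremize" the result. These particular choices are inspired by the published tournament literature, not the exact algorithm Tetlock and Mellers' team used:

```python
def combine(forecasts, weights, extremize=2.0):
    """Weighted mean of probability forecasts, pushed away from 0.5.

    forecasts: individual probabilities in (0, 1).
    weights:   per-forecaster weights, e.g. from past accuracy.
    extremize: >1 sharpens the pooled probability toward 0 or 1.
    """
    total = sum(weights)
    mean = sum(p * w for p, w in zip(forecasts, weights)) / total
    # Extremize via odds: raise the odds ratio to a power.
    odds = (mean / (1 - mean)) ** extremize
    return odds / (1 + odds)

# Three hypothetical forecasters; more accurate ones get larger weights.
print(round(combine([0.70, 0.65, 0.80], [2.0, 1.0, 3.0]), 3))  # 0.892
```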

Learn more about how NICE WFM’s AI-powered simulation capabilities deliver the art and the science you need in forecasting, eliminating hours of research and making forecasting for multi-skill agents and real-world prioritization a breeze.