How machine learning is reshaping the future of demand forecasting
Estimated reading time: 6 min
Supply chain experts have long relied on traditional forecasting models and tools to generate demand forecasts for their goods and services. While these have helped reduce the demand-supply mismatch to some extent, there is still a lot of catching up to do to improve the forecasting models.
Thanks to machine learning, demand planners now have the right set of tools and algorithms to deliver on available-to-promise commitments, which in turn brings remarkable improvements in customer satisfaction, revenue, and inventory cost optimization.
In this article, I explain how and why machine learning will disrupt the usual ways of running demand forecasts.
If you’ve arrived at this article by picking up ‘machine learning’ (ML) as the context, you’ve made a great choice. A healthy supply chain relies heavily on the balance between demand and supply. Firms do their best to minimize this gap, but there are umpteen factors that disrupt the balance and render the demand planning exercise futile. While off-the-shelf products are available to help with demand forecasting, they do a sloppy job of running forecasts for products that sell in high volumes and experience heavy volatility.
Fortunately for demand planners, ML can now help improve forecast accuracy from around 40% of actuals to around 70% of actuals. A rule of thumb suggests that you can reduce planned inventory by 2.5% for every 1% improvement in forecast accuracy. With that guidance, ML clearly has the potential to help demand planners cut planned inventory substantially while protecting service levels. It can bring available-to-promise to life and match demand with production plans.
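To make the rule of thumb concrete, here is a minimal arithmetic sketch. The linear relationship and the sample numbers are illustrative assumptions, not figures from any specific planning system:

```python
def planned_inventory_after(inventory_units, forecast_improvement_pct):
    """Apply the 2.5%-per-1% rule of thumb.

    Assumes the relationship stays linear over the improvement range,
    which is an illustrative simplification.
    """
    reduction_pct = 2.5 * forecast_improvement_pct
    return inventory_units * (1 - reduction_pct / 100)

# Improving forecast accuracy by 4 percentage points on 10,000 units:
# 4 * 2.5% = 10% reduction -> 9,000 units of planned inventory.
print(planned_inventory_after(10_000, 4))  # -> 9000.0
```

Even at this back-of-the-envelope level, the compounding value of small accuracy gains is easy to see.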
The demand planner’s problem statement
Before we get into how ML plays a role, let’s understand what goes on in the life of a demand planner running a forecast for an SKU. For any demand planner looking to build a forecast, there are two variables used to classify SKUs: (a) sales volume and (b) demand volatility. The combination of these two defines what kind of model one should use to predict the forecast accurately.
For a low-volatility SKU, statistical models such as moving averages and time series with added regressors provide a high degree of forecast accuracy. Conversely, for a high-sales-volume, high-demand-volatility SKU, traditional forecasting algorithms fail to provide an accurate forecast.
Failure to predict demand accurately leads to higher inventory costs, lost sales, lower customer satisfaction, reduced margins, and a host of similar repercussions. So why do current demand planning forecasting models fail to address the problem effectively? Let’s take a deep dive.
Why do traditional forecasting models fail for high sales volume/high volatility? Having worked with several demand planners, I’ve seen two reasons why this happens:
The forecast algorithms used by demand planners rely on best-fit model selection, which doesn’t work well for high-volume/high-volatility SKUs
These algorithms don’t factor external variables into their logic – the PMI index, inflation, social media sentiment, competitor pricing, weather forecasts, and demand-sensing signals, to name a few.
The traditional POS-data-based demand planning approach doesn’t work well when the correlation involves external variables like the ones listed above. There is clearly a need to radically improve the traditional ways of generating demand forecasts.
Why machine learning rarely fails
Love for volatility (depth of volatility): ML-led models act like a black hole, pulling in every variable that might otherwise reduce forecast accuracy. A lousy co-brand performance or that great brand ambassador – it all feeds into the forecast.
Intelligent sensing (breadth of volatility): ML-led models recognize both linear and non-linear dependencies. They can identify complex patterns, trends, and relationships between variables that traditional forecasting models cannot.
Mimics human evolution: just as humans get wiser with experience and age, ML models get better at prediction as they train on more and more data.
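To make the linear vs. non-linear point above concrete, here is a small self-contained illustration on synthetic data (not a production model): a straight-line fit cannot track a quadratic demand pattern, while even a crude nearest-neighbour predictor, standing in for the flexible ML models discussed here, can:

```python
import numpy as np

x = np.linspace(-3, 3, 200)
y = x ** 2  # a non-linear "demand response", e.g. to price changes

# A straight-line fit: the best a purely linear model can do here.
slope, intercept = np.polyfit(x, y, 1)
linear_mae = np.mean(np.abs(y - (slope * x + intercept)))

# A crude 1-nearest-neighbour predictor on held-out points, standing in
# for the flexible, non-linear learners discussed above.
train, test = np.arange(0, 200, 2), np.arange(1, 200, 2)
nn_pred = np.array([y[train[np.argmin(np.abs(x[train] - xv))]] for xv in x[test]])
nn_mae = np.mean(np.abs(y[test] - nn_pred))

print(linear_mae > nn_mae)  # -> True: the non-linear learner tracks the curve
```

The gap widens as the underlying relationships become more complex, which is exactly the high-volume/high-volatility regime where traditional models struggle.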
What’s at stake?
Now that you’re aware of what ML can do for the quality of your forecast, try answering these three questions against the rule of thumb introduced at the start of this article: "you can reduce your planned inventory by 2.5% if you improve your forecast by 1%".
What’s the quality of your demand forecast – predicted vs. actual?
How much of the forecast is sentiment/gut driven instead of scientifically driven?
How much scope do you think there is to improve that quality?
It is also worth noting the nature of the tradeoffs here: a unit manufactured because a backorder is predicted in the coming weeks incurs inventory cost if that backorder never materializes, while a backorder that goes unpredicted leads to a lost sale. Hence, the profit per unit of a product needs to be weighed against its expected probability of backorder to make an informed decision.
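That tradeoff can be sketched as a simple expected-cost comparison. The function name, the single-period framing, and the sample figures below are illustrative assumptions, not a standard planning calculation:

```python
def should_prebuild(p_backorder, profit_per_unit, holding_cost_per_unit):
    """Decide whether to manufacture a unit ahead of confirmed demand.

    Illustrative expected-cost comparison (assumed framing, not a
    standard planning API):
    - skipping the unit risks a lost sale worth `profit_per_unit`
      with probability `p_backorder`;
    - building it risks carrying cost `holding_cost_per_unit`
      with probability `1 - p_backorder`.
    """
    expected_lost_sale = p_backorder * profit_per_unit
    expected_holding = (1 - p_backorder) * holding_cost_per_unit
    return expected_lost_sale > expected_holding

# High-margin unit with a 30% backorder risk: worth building ahead.
print(should_prebuild(0.3, 50.0, 5.0))   # -> True
# Low-margin unit with a 5% backorder risk: not worth the inventory cost.
print(should_prebuild(0.05, 10.0, 5.0))  # -> False
```

Plotting profit per unit against backorder probability across a portfolio turns this per-unit rule into the informed, SKU-level decision described above.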
The opportunity is huge, and it comes at a time when most of your competitors will be taking advantage of ML to drive superior target levels.
The three-step process to machine learning led model creation
I’ll walk you through the three critical steps required to create a machine learning-led model. The basic prerequisite for building these models is the right amount of data.
Outlier detection and treatment: outlier treatment is crucial to creating stable ML predictive models. The most common method is to use statistical information about the data, such as the interquartile range and box-plot methods, and clip outliers based on this information. With these techniques, most of the data stays within range and the turbulence introduced by outliers is reduced.
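The interquartile-range clipping described above can be sketched in a few lines. The k=1.5 whisker multiplier is the conventional box-plot setting; treat it and the sample data as tunable assumptions:

```python
import numpy as np

def clip_outliers_iqr(series, k=1.5):
    """Clip values outside [Q1 - k*IQR, Q3 + k*IQR].

    k=1.5 is the conventional box-plot whisker setting; treat it as a
    tunable assumption, not a fixed rule.
    """
    q1, q3 = np.percentile(series, [25, 75])
    iqr = q3 - q1
    return np.clip(series, q1 - k * iqr, q3 + k * iqr)

# A one-off promotional spike of 500 gets pulled back toward the
# normal weekly range, stabilising the series for model training.
weekly_demand = np.array([98, 102, 97, 105, 100, 500, 99, 103])
print(clip_outliers_iqr(weekly_demand))
```

Clipping (rather than dropping) outliers keeps the series length intact, which matters for the lag-based features built in the next step.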
Build the machine learning model: the output of outlier detection/treatment feeds into this step. Exploratory data analysis is run to identify patterns and trends, and it’s very important to analyze those trends and patterns before deciding which machine learning models to use. ARIMA is the most popular method for series that, once differenced to stationarity, follow a trend or pattern at regular intervals. In other cases the data may have no seasonality or trend at all; there, predictive models that work on features derived from the original data, such as lag variables or rolling-window statistics, are a better fit. XGBoost, LSTMs, and various automated ML models work best in these scenarios.
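The feature-derivation step mentioned above is the heart of this approach. As a dependency-free sketch, here lag and rolling-window features are built with NumPy and fed to a plain least-squares fit; the least-squares model is only a stand-in for the XGBoost or LSTM models the text names, and the sample series is made up:

```python
import numpy as np

def make_lag_features(series, n_lags=3, window=3):
    """Turn a demand series into (features, target) rows.

    Each row holds the previous `n_lags` values plus a rolling mean:
    the kind of derived features described above for feeding into
    XGBoost or an LSTM.
    """
    X, y = [], []
    for t in range(n_lags, len(series)):
        lags = series[t - n_lags:t]
        X.append(list(lags) + [np.mean(series[t - window:t])])
        y.append(series[t])
    return np.array(X), np.array(y)

demand = np.array([10, 12, 11, 13, 15, 14, 16, 18, 17, 19], dtype=float)
X, y = make_lag_features(demand)

# Least squares stands in for XGBoost/LSTM here; the feature
# construction is the step this section is actually about.
coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
next_features = np.r_[demand[-3:], demand[-3:].mean(), 1.0]
print(float(next_features @ coef))  # one-step-ahead prediction
```

Swapping the least-squares fit for a gradient-boosted or recurrent model changes only the last few lines; the lag/rolling-window feature table stays the same.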
Model evaluation: this last step compares the accuracy scores of the candidate models and arrives at the best fit by calculating the Mean Absolute Error (MAE) or Root Mean Square Error (RMSE).
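Both metrics are a few lines each; the sample forecasts below are invented to show how the two metrics disagree when errors are uneven:

```python
import numpy as np

def mae(actual, forecast):
    """Mean Absolute Error: average size of the misses."""
    return float(np.mean(np.abs(np.asarray(actual) - np.asarray(forecast))))

def rmse(actual, forecast):
    """Root Mean Square Error: penalises large misses more heavily."""
    return float(np.sqrt(np.mean((np.asarray(actual) - np.asarray(forecast)) ** 2)))

actual  = [100, 110, 95, 120]
model_a = [98, 112, 97, 118]   # small, consistent errors
model_b = [100, 110, 95, 140]  # one large miss

print(mae(actual, model_a), rmse(actual, model_a))  # -> 2.0 2.0
print(mae(actual, model_b), rmse(actual, model_b))  # -> 5.0 10.0
```

Note how RMSE punishes model_b’s single large miss far more than MAE does; which metric to optimise depends on whether occasional big forecast busts or steady small errors hurt your supply chain more.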
Smarter planning for a better future
In a nutshell, the entire process can be summed up in the following steps:
Demand planning and strategy: lay out the objectives and planning horizon, and establish hierarchies
Demand plan creation: identify demand data sources, build the baseline forecast, and establish forecast frequency and processes
Enrichment of demand plan: augment existing data sources with new parameters
Build consensus on the demand plan: drive the forecast approval and consensus process, including governance
With the levels of incremental improvement that ML can drive, companies should augment their as-is demand forecasting techniques with machine learning-powered models. It’s time to look beyond traditional ways of demand forecasting and embrace digital to minimize the demand-supply gap as far as possible. Off-the-shelf products will never match the capabilities of machine learning models, and there’s no way forward but to embrace this change as soon as possible.