Does My Forecast Suck? Calculate Your MAD to Find Out


All budgets and forecasts are inherently inaccurate from the outset. Even the most freshly minted prediction of the future, made just two minutes ahead, carries unknown inaccuracies that make it imprecise. Many people are confused by this and conclude that because a forecast can never be 100% accurate, it is useless. That is not the case. Financial forecasters need to work on an asymptotic basis, pushing the envelope at every iteration to make each forecast better than the last.

Financial forecasting typically falls to the Financial Planning and Analysis (FP&A) team within an organization. Many times FP&A is simply a consolidator of information from the business units and does not actually forecast. Instead, they provide insight into how the business's profitability, financial covenants, and other performance indicators will react based on what the business units provide. But what if FP&A could give senior management not only a consolidated view of the future of the business and its possible outcomes, but also a view of how accurate those forecasts have historically been? If a CEO knew that marketing and sales were correct in their sales forecast only 10% of the time, and that it fluctuated 150% in either direction, then strategic decisions might shift and resources might be reallocated. Even better, what if FP&A could help the business units get more accurate every time a new forecast is generated? This would help executive management make better decisions and lend the business units great credibility.

Measuring Forecast Performance

There are many different measures of forecast performance. The ones mentioned below are some of the most common, but every FP&A team should work to find the measures that work best for their organization. In addition, forecast performance should not be limited to only the top line; it should be tracked at the lowest level, including business drivers.

Mean Absolute Deviation

The mean absolute deviation (MAD for short) is a measure of the absolute error of the forecast over the period of time being measured. The smaller your MAD, the more accurate the forecast. It also allows you to evaluate the forecast performance of several different forecast scenarios. For example, was your annual budget more accurate, or your updated quarterly re-forecasts?

The equation is as follows:

Mean Absolute Deviation (MAD) = ( Σ |A_t − F_t| ) / n, summed over t = 1 to n

t = the time period

A_t = actuals in time period t

F_t = forecast in time period t

n = number of periods
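As a concrete illustration, the MAD calculation can be sketched in a few lines of Python. The monthly figures below are hypothetical placeholders, not the numbers from the article's example:

```python
def mean_absolute_deviation(actuals, forecasts):
    """Sum of |actual - forecast| across all periods, divided by the number of periods."""
    if len(actuals) != len(forecasts):
        raise ValueError("actuals and forecasts must cover the same periods")
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

# Hypothetical monthly actuals and forecasts (e.g., in $ millions)
actuals = [100, 95, 110, 105]
forecasts = [98, 97, 104, 109]
print(mean_absolute_deviation(actuals, forecasts))  # -> 3.5
```

Note that the absolute value keeps over- and under-forecasts from canceling each other out, which is exactly what distinguishes MAD from the cumulative error discussed further below.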

Example

Below is an actual full-year budget, from a prior year, of a company I previously worked with. I have also included a download of the Excel spreadsheet so you can see the formulas within the cells here.

[Table: monthly forecast vs. actuals]

To get the MAD, take the sum of the monthly absolute deviations (24.32) and divide by the number of periods (12 months). This equals 2.03. By subtracting 2.03 from each monthly forecast you can calculate the per-month downside forecast risk. Doing the reverse, adding 2.03 to the forecasts, gives you the upside forecast opportunity. This is represented in the graph below.
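Using the summed absolute deviation of 24.32 over 12 months from the example, the risk and opportunity bands could be derived as follows. The individual monthly forecast values here are placeholders, since the full table is only in the spreadsheet:

```python
total_abs_deviation = 24.32   # sum of monthly absolute deviations from the example
n_periods = 12

mad = total_abs_deviation / n_periods        # about 2.03 per month
forecasts = [88.5, 90.0, 91.2]               # placeholder monthly forecasts
downside_risk = [f - mad for f in forecasts]        # per-month downside forecast risk
upside_opportunity = [f + mad for f in forecasts]   # per-month upside opportunity

print(round(mad, 2))  # -> 2.03
```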

[Graph: monthly forecast with upside opportunity and downside risk bands]

Using the same information in the table above, you can now calculate:

Mean Absolute Percentage Deviation (MAPD) = Σ |A_t − F_t| / Σ A_t = 2.26%.

Cumulative Error = Σ (A_t − F_t) = 5.46, or 0.51% of the actual annual total.

Average Error = Σ (A_t − F_t) / n = 5.46 / 12 = 0.46
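All of these measures come from the same signed and absolute error series, so they can be computed together in one pass. A sketch with hypothetical data, not the article's table:

```python
def forecast_error_measures(actuals, forecasts):
    n = len(actuals)
    errors = [a - f for a, f in zip(actuals, forecasts)]  # signed errors
    abs_errors = [abs(e) for e in errors]
    mad = sum(abs_errors) / n
    mapd = sum(abs_errors) / sum(actuals)   # absolute error relative to total actuals
    cumulative_error = sum(errors)          # over- and under-forecasts net out here
    average_error = cumulative_error / n
    return mad, mapd, cumulative_error, average_error

mad, mapd, cum_err, avg_err = forecast_error_measures(
    [100, 95, 110, 105], [98, 97, 104, 109]
)
print(cum_err, avg_err)  # -> 2 0.5
```

The gap between MAD (3.5 here) and average error (0.5 here) is the point of the article's example: a small net error can hide large offsetting monthly misses.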

In this example, it is important to analyze what the data is telling us. Although on an annual basis the forecast was less than 1% off from actual, it deviated from forecast dramatically at the beginning of the year. This resulted in a net forecast error of 22% of the MAD. Additional investigation into the drivers that caused such a significant fluctuation should be undertaken.

If given a choice, most companies would rather miss their forecast due to higher sales than lower sales. So, although not as dramatic, an investigation into why April and June produced significant net negative forecast-to-actual variances should be undertaken and understood.

Forecast Control Measure

Anyone can analyze the past; forecasting the future is the hard part. How can a company work to continuously improve its forecast and provide management with greater certainty? The key is to set up a forecast control metric via a tracking signal.

This is where the analysis of a financial forecast and that of a statistical forecast may diverge. When analyzing a process to keep it within tolerance, a forecast that is biased high or low may be given the same weight. For most companies, however, the risk of actuals coming in lower than forecast is of much higher concern than actuals coming in above forecast. You also have to take into account the choppiness of sales and accounting cycles. Therefore, I suggest comparing the sum of the previous 3 months' errors divided by the sum of the previous 3 months' absolute deviations, and setting up control parameters around those numbers.

In the above example, the sum of the trailing twelve months of errors equaled 5.46. Dividing that by the summed absolute deviation for the year of 24.32 produces a potential control parameter of 22.47%. For the following year, you should set a target that no 3-month period has a +/- deviation greater than 22.47%. If that measure is tripped, then a deep dive into the cause of the variance should be performed, and a possible re-forecast of the remainder of the year, or the next 12 months, should be done.
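A rolling tracking signal of this kind might be sketched as follows. The monthly error series and the use of 22.47% as the threshold are illustrative assumptions, not prescribed values:

```python
def tracking_signals(errors, window=3):
    """Rolling sum of signed errors divided by rolling sum of absolute errors."""
    signals = []
    for i in range(window - 1, len(errors)):
        recent = errors[i - window + 1 : i + 1]
        mad_sum = sum(abs(e) for e in recent)
        signals.append(sum(recent) / mad_sum if mad_sum else 0.0)
    return signals

monthly_errors = [1.2, -0.8, 2.1, 1.5, -0.3, 0.9]  # hypothetical signed errors
threshold = 0.2247                                  # e.g. the prior year's 22.47%
breaches = [abs(s) > threshold for s in tracking_signals(monthly_errors)]
# Any True in `breaches` would trigger the deep dive and possible re-forecast
```

Keeping the signal signed (rather than taking absolute errors in the numerator) preserves the direction of the bias, which matters given that most companies care more about downside misses.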

Play Your Handicap

Regardless of which metrics you use, remember that financial forecasts are inherently inaccurate, and therefore every effort should be made to understand when uncertainty needs to be accepted. For example, if, looking at historical information over the past 5 years, you are able to determine that your company has yet to come within X% of its forecast in January, then note that in your forecast deck to senior management. You will want to continuously make this error smaller, but maybe there is some degree of uncertainty that you or your team is unable to ascertain about your customers' behavior in that month. Instead of only trying to forecast it better, plan for the variance and set up a series of flexible measures that allow you to capture potential upside and mitigate downside during that period of known uncertainty.


2 COMMENTS

  1. Excellent stuff!

    While there are only 12 data points, turning the data on its side and looking at a histogram is illuminating. Assume 10 of your data points are tightly congested with minimal deviation from the forecast. But then you have two large outliers which are explainable and defendable.

    Would you ever 1) consider throwing the outliers out, or 2) using the median instead of the average?

    Another brainstorm completely unrelated … your MAD technique could be applied in large finance departments where there are thousands of GL accounts to monitor/analyze. The formula could be applied to variable expense ratios, fixed expenses, etc. for trailing 18-month data where there should be minimal deviation. Follow-up analysis would be used where your MAD result exceeds ___ (x).

    Great stuff and thanks for sharing.
    Mark
