5 methods for measuring sales forecast accuracy

Why Measure?

Failing to learn from your forecasting mistakes makes you a lot less likely to become a better sales forecaster.  Measuring accuracy should be a positive thing (and definitely not a stick for beating sales forecasters with).

However, calculating accuracy data over hundreds, possibly thousands, of items can be a real challenge – especially finding a balanced ‘single number’ measure for accuracy.  Here, we explain the different ways of calculating accuracy available in Prophecy, Data Perceptions’ sales forecasting software solution.  They are mostly well-known methods for getting to a single, summary number that describes the overall accuracy of a group of many separate forecasts.

Exceptions Analysis

Before we get to exceptions analysis, let’s remember that summary measurement is useful for tracking accuracy over time.  Exceptions analysis – identifying and explaining the reasons for the biggest / most expensive forecast errors – arguably has a bigger payoff.  It provides the valuable opportunity to learn from mistakes and apply the lessons of experience to current and future forecasts.

Therefore, step 1 in accuracy analysis should be a process for rapidly identifying the exceptions – the big deviations that caused the most problems.  Prophecy’s Accuracy Analysis control dialog lets you specify an exception percent, and only items exceeding this limit are listed in the resulting detail report.
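If you are working in a spreadsheet or script rather than in Prophecy, the same filter is straightforward to reproduce.  Here is a minimal sketch in Python – the item data and the 25% threshold are purely illustrative, not Prophecy’s defaults:

    # Flag the items whose absolute percent error exceeds a chosen threshold.
    # The figures and the 25% threshold are illustrative only.
    items = [
        {"sku": "Sku-1", "forecast": 120, "actual": 100},
        {"sku": "Sku-2", "forecast": 80,  "actual": 100},
        {"sku": "Sku-3", "forecast": 500, "actual": 350},
    ]

    THRESHOLD_PCT = 25.0

    def pct_error(forecast, actual):
        # Absolute percent error relative to actual (undefined when actual is 0).
        return abs(forecast - actual) / actual * 100 if actual else float("inf")

    for item in items:
        err = pct_error(item["forecast"], item["actual"])
        if err > THRESHOLD_PCT:
            print(f"{item['sku']}: {err:.0f}% error")   # only Sku-3 here (43%)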

Step 2 is to review these exceptions and try to explain them.  Could the reasons have been anticipated?  Forecasting is constrained to skillfully playing the hand the forecaster was dealt at the time the forecast was made.  What is there to learn from reviewing the exceptions?  Clearly, if better information or interpretation was available at the time the forecast was made, there is a lesson to apply to future forecasts.

Moving on to obtaining a single number to describe forecast accuracy, there are several common measures and a couple of less conventional ones.  Prophecy’s Accuracy module calculates them ‘for free’ – Excel forecasters will need to do a little more work, but the calculations are mathematically simple.

The important thing to bear in mind when looking at all these measures is that they measure what they measure – i.e. simple maths.  They DO NOT measure the real impact on your company’s supply chain or sales, revenue and profit.

Weighted Average % Error

This statistic calculates a percentage deviation by dividing the sum of the absolute unit deviations by the sum of the actuals.

A ‘simple’ % error calculated on the total line (total net deviation divided by total actual) does not allow for a mix of over- and under-forecasts.  They cancel each other out, and the result is therefore misleading.
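A minimal sketch of both calculations in Python, with hypothetical figures (one over-forecast, one under-forecast):

    # Weighted Average % Error: sum of absolute unit deviations divided by
    # the sum of the actuals.  The two SKUs below are hypothetical.
    data = [("Sku-1", 120, 100),   # (sku, forecast, actual): 20 over
            ("Sku-2", 70, 100)]    # 30 under

    total_net_dev = sum(f - a for _, f, a in data)        # +20 - 30 = -10
    total_abs_dev = sum(abs(f - a) for _, f, a in data)   # 20 + 30 = 50
    total_actual  = sum(a for _, f, a in data)            # 200

    simple_pct   = total_net_dev / total_actual * 100     # -5%: errors cancel
    weighted_pct = total_abs_dev / total_actual * 100     # 25%: they do not
    print(f"simple: {simple_pct:.0f}%  weighted: {weighted_pct:.0f}%")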

The weighted average method effectively weights the absolute variance by the size of the actual.  This gives a more appropriate indication of overall forecast accuracy relative to volume, but it assumes that a product selling twice as much has twice as much impact.  This is not necessarily the case: for example, one item may have a very high price and the other a very low price.  In that case, consider measuring accuracy using revenue instead of quantity.

However, if you’re trying to measure supply chain impact, both quantity and revenue can be misleading.  Sku-1 could have a long lead time, Sku-2 a short lead time.  If they sell the same volume at the same price, how can either measure capture the supply chain impact?

Alternate Weighted Average % Error

This calculation is similar to the Weighted Average % Error.  However, instead of dividing by the sum of the actuals, it divides by the sum of each line’s forecast or actual, whichever is greater.

This means that actual=0/forecast=100 has the same weight as actual=100/forecast=0.  For two items with these forecasts and actuals, the weighted percent is 200% (misleadingly high?) whereas the Alternate Weighted percent is 100% (perhaps closer to ‘correct’ because both items were effectively 100% out?).
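A sketch of that two-item example in Python:

    # Alternate Weighted Average % Error: divide the absolute deviations by
    # the sum of max(forecast, actual) per line, rather than by the actuals.
    data = [(100, 0),     # forecast 100, actual 0
            (0, 100)]     # forecast 0, actual 100

    abs_dev   = sum(abs(f - a) for f, a in data)      # 100 + 100 = 200
    by_actual = sum(a for f, a in data)               # 100
    by_max    = sum(max(f, a) for f, a in data)       # 200

    print(abs_dev / by_actual * 100)   # 200.0 - the standard weighted figure
    print(abs_dev / by_max * 100)      # 100.0 - the alternate figure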

The Alternate Weighted Average % approach was suggested by a Prophecy customer and is thought to be non-standard.  Nevertheless, it has the stated advantages over the more usual calculation.

Mean Absolute Percent Error (MAPE)

This is a commonly used measure in sales forecasting.  It is the mathematical average of the individual percent errors, ignoring their signs.

It is a moot point whether to divide by 3 or by 4 when one item in four – Sku-4 in the sketch below – has no actual.  If you divide by 4 you are rewarded with a lower MAPE than dividing by 3, but if you divide by 3 you are effectively ignoring Sku-4!
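A sketch of both choices in Python, again with hypothetical figures; Sku-4 has no actual, so its own percent error is undefined:

    # MAPE: the average of the individual absolute percent errors.
    # Sku-4's actual is zero, so its percent error cannot be calculated.
    data = [("Sku-1", 110, 100), ("Sku-2", 90, 100),
            ("Sku-3", 130, 100), ("Sku-4", 50, 0)]

    pct_errors = [abs(f - a) / a * 100 for _, f, a in data if a != 0]
    # -> [10.0, 10.0, 30.0]

    print(sum(pct_errors) / len(pct_errors))  # /3 = 16.7%: ignores Sku-4
    print(sum(pct_errors) / len(data))        # /4 = 12.5%: a lower 'reward'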

In addition, this method gives equal weight to a percent deviation, irrespective of the relative size of the forecast or actual.  An item that sells a million has the same weight as one that sells 10.

Mean Absolute Deviation (MAD)

This measure definitely has the most appropriate acronym!  It is calculated as the mathematical average of the individual unit deviations, regardless of their sign.
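Using the same hypothetical figures as the MAPE sketch above:

    # MAD: the average of the absolute unit deviations, signs ignored.
    # Zero-actual lines such as Sku-4 present no difficulty here.
    data = [("Sku-1", 110, 100), ("Sku-2", 90, 100),
            ("Sku-3", 130, 100), ("Sku-4", 50, 0)]

    mad = sum(abs(f - a) for _, f, a in data) / len(data)
    print(mad)   # (10 + 10 + 30 + 50) / 4 = 25.0 units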

The advantage of MAD over MAPE is that instances like Sku-4, where there is no actual, are included in the measure.  Note also that the same percentage error on a high-volume item has a much bigger effect on the result.  This may or may not be desirable where, for example, the high seller is low value and the low seller is high value.

On the other hand, unlike a percent measure, it can be hard to know whether a given MAD is good or bad – it is just a number.

Conclusions

As with so many areas of sales forecasting, there is no right answer or single ‘best’ measure that can be used to describe sales forecasting accuracy.

The first and most beneficial purpose of accuracy analysis is to learn from your mistakes.  In that sense, exceptions analysis has the highest return.  The summary measures are helpful for tracking cumulative improvement over time and serve as control barometers.

Our next post on this subject will reflect on comparability.  You have two forecasters, and one has a lower MAPE or Weighted Error than the other.  Does that mean the first is a better forecaster, or just that they have easier products or customers to forecast?

Author: Peter Boulton, Data Perceptions.

Sales Forecasting with Excel Spreadsheets

This won’t be the first or last blog post on the internet about the dangers of large spreadsheets!

It is inspired by a recent Prophecy implementation where the customer, like many Prophecy customers, is migrating from a spreadsheet-based system that had grown over the years and now suffered many of the (not unusual) issues of complex spreadsheets:

  • The original developer was long gone.
  • It did not support multiple concurrent forecasters.  (All customers were forecasted in a single spreadsheet, avoiding the need for multiple linked spreadsheets but preventing multi-user access.)
  • There appeared to be design errors which led to incorrect financial forecasts.
  • It contained no sales history, other than year to date.
  • It was cumbersome to navigate and locate items quickly.

There are, of course, many additional shortcomings associated with using Excel spreadsheets for sales forecasting and they are documented on our main site at https://www.dataperceptions.co.uk/spreadsheets.html.

A recent discovery, for me at least, is research and data on the incidence of serious errors (i.e. errors that affect key numbers) in corporate spreadsheets.

The bottom line from this research is that more than 90% of corporate spreadsheets contain serious errors.  Yes, 90%.  Ray Panko is a Professor of IT Management and has curated the results of numerous studies on corporate spreadsheet errors on his website at http://panko.com/ssr/Audits.html.  If you doubt the claim about 90% errors, please do hop over to his site and see for yourself!

So, the point of this post is really to say that there is a high probability (90%!) that relying on spreadsheets for sales forecasting imposes hidden costs on your organisation through formula and other errors – quite apart from the limitations of spreadsheets in the 10% or so that are actually error free.

That’s why a purpose-built, proven solution like Prophecy™ has to be the right way to go.  If it does nothing else, it avoids the high probability of spreadsheet errors, as well as opening up a raft of serious, forecaster-specific features that will help your forecasters develop better, more defensible sales forecasts, in less time.

 

The effect of Spectre and Meltdown on Prophecy performance

The Spectre and Meltdown ‘bugs’ have, rightly, been hitting the headlines these last two to three weeks.  There is still a lot of ‘noise’ and uncertainty, as well as a degree of hysteria, about these bugs.   And understandably so – Intel (for example) have been inconsistent about the impact of the microcode ‘fixes’ they have released in response.  Initially, they recommended everyone should apply them, but when the number of spontaneous reboots increased ‘out there’ they reined back on that recommendation.  PC vendors were initially encouraged to push the fixes, until some of the deployment issues emerged – principally increased reboots and performance ‘hits’.

What should we do about Spectre and Meltdown?  Definitely apply the fixes now, or wait until the first adopters have worked out the bugs?  It is surprising that there has been hardly any questioning of how dangerous these issues actually are in the real world.  The enormity of Spectre/Meltdown relates to the scope of the CPUs affected – i.e. virtually all made from about 1995 onwards – rather than to the size of the risk.  The following mitigating factors appear to be correct:

  1. Exploiting these vulnerabilities requires a significantly greater level of technical ability than more traditional malware development.
  2. The data that can theoretically be obtained is much more limited – i.e. no hard disc data, no keystrokes, etc.
  3. Like traditional malware, something has to run in order to exploit the vulnerability – ‘drive-by’ is not possible.
  4. OS and browser vendors have already moved to reduce the likelihood of these vulnerabilities having real world impacts.

So overall, there are plenty of cherries lower down the tree waiting to be picked.  Why would common-or-garden malware use these exploits when the degree of technical difficulty is so high and the data obtainable is so limited?

The lowest-risk option is, of course, to just patch.

To test the performance impact of the current Windows 10 Spectre/Meltdown mitigations, we ran some Prophecy benchmarks, with and without the patches, on the same machine – a 3rd-generation Core i5 with 8GB of memory.

Bottom line – the patched Windows took 13% more time in our tests.

So, everyone has to make their own decision, and what we see with Prophecy is sure to be different for other applications.  But a 13% slowdown is non-trivial.

Incidentally, you can test your software in the same way.  Steve Gibson / GRC have a small utility which evaluates your machine for the vulnerability and allows you to toggle the protections on and off.  It’s here:

https://www.grc.com/inspectre.htm
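If you want to put a number on it for your own software, a minimal timing harness is all you need – run it with the protections on, toggle them off with InSpectre, reboot, and run it again.  A rough sketch in Python; run_workload() is a placeholder, not Prophecy’s actual benchmark:

    # Minimal benchmark harness: time the same repeatable workload with the
    # mitigations enabled, then disabled, and compare the two timings.
    # run_workload() is a stand-in for a representative job in your own app.
    import time

    def run_workload():
        sum(i * i for i in range(10_000_000))

    def best_of(runs=5):
        times = []
        for _ in range(runs):
            start = time.perf_counter()
            run_workload()
            times.append(time.perf_counter() - start)
        return min(times)   # best-of-n damps background noise

    print(f"best of 5 runs: {best_of():.3f}s")
    # Slowdown = (patched time / unpatched time) - 1, e.g. 13% in our tests.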

Struggling with WordPress

Very hard to add a simple blog to an existing non-WordPress website.  I just want https://www.dataperceptions.co.uk/blog to point to my blog page, not to a whole new website home page.

Any time I try to tweak the theme, it reverts to giving me a home page etc.  I DON’T WANT THAT!

This is really just a test page and I hope no one reads it, as I’m so ashamed of not being able to tame WordPress.  FWIW, I am the developer of Prophecy, which is written in C++, and I implemented the main Data Perceptions website.  So I feel entitled to think I ought to be able to bend WordPress to my will!