Swedroe: Measuring Value In Forecasts

The financial media tends to focus much of its attention on market forecasts by so-called gurus. It does so because that is what gets the investment public’s attention. Investors must believe the forecasts have value or they wouldn’t tune in, nor would they subscribe to investment newsletters and other financial publications such as Barron’s.

Unfortunately, there’s a large body of evidence demonstrating that market forecasts have no value in terms of adding alpha (though they provide plenty of fodder for my blog)—the accuracy of forecasts is no better than one would randomly expect.

For investors who haven’t learned that forecasts should be treated as nothing more than entertainment, or what Jane Bryant Quinn called “investment porn,” forecasts actually have negative value, because they can cause investors to stray from well-developed plans. This is especially true when a forecast confirms an investor’s own views, exposing them to confirmation bias.

Despite the evidence, many investors rely on market experts and forecasters when making investment decisions, such as when to buy or sell securities. To help you, I’ll review the findings of two studies on the accuracy of guru forecasts.

Key Forecast Research

One of the first major studies providing evidence on forecasting accuracy was done by CXO Advisory Group. CXO set out to determine if stock market experts, whether self-proclaimed or endorsed by others (e.g., in publications), reliably provide stock market timing guidance.

To find the answer, from 2005 through 2012 they collected and investigated 6,584 forecasts for the U.S. stock market offered publicly by 68 experts, both bulls and bears, employing technical, fundamental and sentiment indicators. The collection, all of it publicly available on the Internet, included forecasts dating back to the end of 1998. They selected experts, based on web searches for public archives, with enough forecasts spanning enough market conditions to gauge accuracy.

Their methodology was to compare forecasts for the U.S. stock market to the return of the S&P 500 Index over the future interval(s) most relevant to the forecast horizon. They excluded forecasts that were too vague, and forecasts that included conditions requiring consideration of data other than stock market returns. They matched the frequency of a guru’s commentaries (such as weekly or monthly) to the forecast horizon, unless the forecast specified some other timing.

And importantly, they considered the long-run empirical behavior of the S&P 500 Index. For example, if a guru said investors should be bullish on U.S. stocks over the year, and the S&P 500 Index was up by just a few percent, they judged the call incorrect (because the long-term average annual return had been much higher). Finally, they graded complex forecasts with elements proving both correct and incorrect as both right and wrong (not half right and half wrong).
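
To make those grading rules concrete, here is a minimal sketch, in Python, of how a directional call might be scored against the realized S&P 500 return over the matching horizon. It is not CXO's actual code; the function name and the 10% figure standing in for the long-run average annual return are illustrative assumptions.

```python
# Illustrative sketch of a CXO-style grading rule (not their actual code).
# The 10% long-run average annual return used as the hurdle is an assumed
# placeholder for the long-term empirical behavior of the S&P 500 Index.

LONG_RUN_ANNUAL_RETURN = 0.10  # assumption, for illustration only


def grade_forecast(direction: str, realized_return: float, horizon_years: float = 1.0) -> bool:
    """Return True if a bullish/bearish call is judged correct.

    A bullish call must beat the long-run average scaled to the horizon
    (so a market that is up "just a few percent" over a year is still a
    miss); a bearish call is correct when the market falls short of it.
    """
    hurdle = LONG_RUN_ANNUAL_RETURN * horizon_years
    if direction == "bullish":
        return realized_return >= hurdle
    if direction == "bearish":
        return realized_return < hurdle
    raise ValueError("direction must be 'bullish' or 'bearish'")


# The example from the text: a one-year bullish call, index up only 3%.
print(grade_forecast("bullish", 0.03, horizon_years=1.0))   # False (judged incorrect)
print(grade_forecast("bearish", -0.15, horizon_years=1.0))  # True
```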

Following is a summary of their findings:

  • Across all forecasts, accuracy was worse than the proverbial flip of a coin—just under 47%.

  • The average guru also had a forecasting accuracy of about 47%.

  • The distribution of forecasting accuracy across the gurus looked very much like a bell curve, which is what you would expect from random outcomes (see the simulation sketch after this list). That makes it very difficult to tell whether any skill is present.

  • The highest accuracy score was 68% and the lowest was 22%.
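
To see why a bell-shaped spread of accuracy scores is what luck alone would produce, here is a small simulation sketch. The 68 gurus match the study; giving each roughly 97 forecasts (about 6,584 / 68) and a 50/50 chance of being right on each one are assumptions made purely for illustration.

```python
# Simulation sketch: the spread of per-guru accuracy when every forecast
# is a coin flip. 68 gurus matches the study; ~97 forecasts per guru
# (6,584 / 68) and the 50% hit rate are illustrative assumptions.
import random

random.seed(0)
N_GURUS = 68
FORECASTS_PER_GURU = 97

accuracies = []
for _ in range(N_GURUS):
    correct = sum(random.random() < 0.5 for _ in range(FORECASTS_PER_GURU))
    accuracies.append(correct / FORECASTS_PER_GURU)

print(f"mean accuracy: {sum(accuracies) / N_GURUS:.1%}")
print(f"spread: {min(accuracies):.1%} to {max(accuracies):.1%}")
# Even with zero skill, individual scores fan out in a bell-shaped spread
# around 50%, so a wide range of scores is not, by itself, evidence of skill.
```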

There were many well-known forecasters among the “contestants.” I’ve highlighted five of the more famous, each of whom makes regular appearances on CNBC, along with their forecasting scores.

  • Jeremy Grantham, chairman of GMO LLC, a global investment management firm: His score was 48%.

  • Marc Faber, publisher of The Gloom, Boom & Doom Report: His score was 47%.

  • Jim Cramer, CNBC superstar: His score was 47%.

  • Gary Shilling, Forbes columnist and founder of A. Gary Shilling & Co.: His score was just 38%.

  • Abby Joseph Cohen, former chief U.S. investment strategist at Goldman Sachs: Her score was just 35%.

Of course, a few gurus had good records. But only five of the 68 had scores above 60% (among them was David Dreman, with a score of 64%), while 12 had scores below 40%. It’s also important to keep in mind that these accuracy scores ignore costs; implementing strategies based on the forecasts would incur real trading costs.

Further Evidence

David Bailey, Jonathan Borwein, Amir Salehipour and Marcos Lopez de Prado followed up the CXO study with their own: “Do Financial Gurus Produce Reliable Forecasts?” To be consistent with the CXO study, their focus was also on forecasts made for the S&P 500 Index. While using the same database as CXO (so the sample period was the same, December 1998 through December 2012), their study differed from CXO’s in that they treated each individual forecast according to two factors: the time frame of the forecast, and its importance/specificity.

They explain: “A forecast referring to the next few weeks should be treated differently than the one referring to the next few months; in particular, long-term forecasts should be treated as more significant than the short-term forecasts. After all, in the short-term anything could happen, as a matter of randomness, but in the long term underlying trends, if any, tend to overcome short-term noise. For these reasons, we give more weight to longer-term forecasts, since they imply investing skill with greater confidence. In this regard our study contrasts to the study of CXO Advisory team, which treated every forecast as equally significant.” The weighting scheme they used was as follows: up to one month, 0.25; up to three months, 0.50; up to nine months, 0.75; beyond nine months (up to two to three years), 1.00. If the forecast did not include a time frame, they assigned a weight of 0.25.
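
As a worked illustration of that weighting scheme, here is a minimal Python sketch. Only the weight table comes from the study; the helper functions and the sample forecasts are hypothetical.

```python
# Sketch of horizon-weighted accuracy. The weight table is from the study;
# the helpers and the sample forecasts are hypothetical illustrations.

def horizon_weight(months):
    """Map a forecast horizon in months (None if unspecified) to its weight."""
    if months is None:
        return 0.25  # no stated time frame
    if months <= 1:
        return 0.25
    if months <= 3:
        return 0.50
    if months <= 9:
        return 0.75
    return 1.00  # beyond nine months, up to two to three years


def weighted_accuracy(forecasts):
    """forecasts: list of (horizon_months, was_correct) tuples."""
    total = sum(horizon_weight(h) for h, _ in forecasts)
    correct = sum(horizon_weight(h) for h, ok in forecasts if ok)
    return correct / total


# Hypothetical guru: right on two short-term calls, wrong on one long-term call.
sample = [(1, True), (3, True), (12, False)]
print(f"unweighted accuracy: {2 / 3:.0%}")                             # 67%
print(f"horizon-weighted accuracy: {weighted_accuracy(sample):.0%}")   # 43%
```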

Their data set included 6,627 forecasts made by 68 forecasters. The following is a summary of their findings:

  • Just 48% of all forecasts were correct.

  • 66% of the forecasters had accuracy scores of less than 50%—worse than randomly expected.

  • 40% of forecasters had accuracy scores of 40-50%; 19% had scores of 30-40%; 4% had scores of 20-30%; and 3% had scores of 10-20%.

  • 18% of forecasters had scores of 50-60%; 10% had scores of 60-70%; and 6% had scores of 70-80%.

Among the notables with poor accuracy scores: Jeremy Grantham, 41%; Marc Faber, 39%; Jim Cramer, 37%; Abby Joseph Cohen and Gary Shilling, 34%; and Robert Prechter (famous for the Elliott Wave theory), 17% (the worst score). Among the notables with the best scores: David Dreman, 70%; Louis Navellier, 66%; Laszlo Birinyi, 64%; and Bob Doll, 60%. The best score was John Buckingham’s 79%.

The authors noted that their results were worse than those found by CXO—holding forecasters to stricter standards resulted in lower accuracy rates. They concluded that, while some forecasters did well, the majority performed at levels not significantly different from chance.

Summary

The bottom line is that the research shows that when it comes to predicting economic growth, interest rates, currencies or the stock market, the only value of investment gurus is to make weather forecasters look good. Keep this in mind the next time you find yourself paying attention to the latest guru’s forecast. You’re best served by ignoring it.

As I point out in Think, Act and Invest Like Warren Buffett, that’s exactly what Buffett himself does, and what he advises you to do: Ignore all forecasts, because they tell you nothing about the direction of the market, but a whole lot about the person doing the predicting.

Unfortunately, the financial media isn’t required to present the track records of the forecasters to whom they provide exposure. The reason they don’t is that accountability would spoil the game, and you would stop tuning in. Jason Zweig put it this way: “Whenever some analyst seems to know what he’s talking about, remember that pigs will fly before he’ll ever release a full list of his past forecasts, including the bloopers.”

Larry Swedroe is the director of research for The BAM Alliance, a community of more than 130 independent registered investment advisors throughout the country.

© Copyright 2019 ETF.com. All rights reserved