Economic forecasting is still broken


Economists still get a lot of flak for failing to predict the 2007-2009 recession. These criticisms are often misguided. Nonetheless, there’s an important sense in which forecasting models were badly mistaken — and probably remain so today.
Critics of forecasting tend to misunderstand its purpose. Forecasters know perfectly well that, in a random world, the one certainty is that their predictions will be wrong. If an economist says that she expects real gross domestic product to grow about 2 percent in 2017, she is really just providing something close to the midpoint of what she knows to be a wide range of possible outcomes. She is almost completely certain that growth will turn out to be higher or lower than 2 percent.
The goal, then, isn’t to predict exactly what will happen. The whole point of building forecasting models is to get a sense of the range of possible outcomes and assign probabilities to them. And it is here that the criticisms have a lot more bite, because the model-based probabilities have been highly inaccurate.
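To make that concrete, here is a minimal sketch in Python, assuming (purely for illustration) that the forecaster’s predictive distribution for growth is normal with a mean of 2 percent and a standard deviation of 1.5 percentage points; neither the distributional form nor the spread comes from any actual forecast.

```python
# Illustrative only: a point forecast as the center of a predictive distribution.
# The 2% mean matches the example above; the 1.5-pp standard deviation is an
# assumption made purely for illustration.
from scipy.stats import norm

mean_growth = 2.0  # point forecast for real GDP growth, in percent
sd_growth = 1.5    # assumed spread of possible outcomes, in percentage points

# Probability that growth lands within 0.1 pp of the point forecast...
p_near_2 = norm.cdf(2.1, mean_growth, sd_growth) - norm.cdf(1.9, mean_growth, sd_growth)
# ...and the probability of an outright contraction.
p_negative = norm.cdf(0.0, mean_growth, sd_growth)

print(f"P(growth within 0.1 pp of 2%) = {p_near_2:.3f}")   # about 0.05
print(f"P(growth < 0)                 = {p_negative:.3f}")  # about 0.09
```

Even under these benign assumptions, the point forecast turns out to be “right” to within a tenth of a percentage point only about one time in twenty.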
Let’s go back to the fourth quarter of 2007, when it was already abundantly clear that a global financial crisis was underway. More than half of the participants in the Survey of Professional Forecasters assigned a probability of zero to the decline in real GDP that actually took place over the following calendar year.
The Federal Reserve fared no better. As of December 2007, its main economic model saw a less-than-5-percent chance that the unemployment rate would be above 6 percent in two years. The rate actually hit 10 percent, an event that the model would have said was close to impossible. I believe it would have assigned an even lower probability to where we’ve ended up today, with real GDP more than 10 percent below 2007 forecasts. In other words, both private sector and Fed models viewed the events that unfolded over the next one, two and ten years as essentially impossible. My own sense is that the typical academic models were just as inaccurate. These kinds of errors suggest fundamental flaws in the way the models are built.
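A back-of-envelope calculation shows how a thin-tailed model generates numbers like these. The sketch below assumes a normal predictive distribution for the unemployment rate two years out, with a hypothetical mean of 4.8 percent, and pins the spread to exactly a 5 percent chance of a rate above 6 percent (reading the model’s “less than 5 percent” as generously as possible).

```python
# Back-of-envelope sketch of the tail logic described above. Assumes
# (hypothetically) a normal predictive distribution for the unemployment rate
# two years out, with an illustrative 4.8% mean, and calibrates its spread to
# exactly a 5% chance of exceeding 6% (the generous reading of "less than 5%").
from scipy.stats import norm

mu = 4.8                             # assumed central forecast, in percent
sigma = (6.0 - mu) / norm.ppf(0.95)  # spread implied by P(rate > 6%) = 0.05

p_above_10 = norm.sf(10.0, mu, sigma)  # probability of the 10% rate that occurred
print(f"implied sigma = {sigma:.2f} pp")      # about 0.73
print(f"P(rate > 10%) = {p_above_10:.1e}")    # about 5e-13
```

Under these assumptions, the 10 percent rate that actually occurred sits more than seven standard deviations above the forecast, roughly a one-in-two-trillion event.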
So have economists used the subsequent decade to address the problem? So far, I’ve seen little response from academics. The public documentation for the Fed’s baseline model does not reflect any material change of this kind. As of May 2017, the average participant in the Survey of Professional Forecasters saw only a 1-in-200 chance that the unemployment rate would increase by 2 percentage points over the next eighteen months — a prediction that appears grounded in the same kind of risk modeling that was used in 2007.
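The same arithmetic, run in reverse, shows how tight a predictive distribution that 1-in-200 answer implies. Assuming, again hypothetically, a normal distribution for the eighteen-month change in the unemployment rate, centered at zero:

```python
# What spread does a 1-in-200 chance of a 2-pp rise imply? Assumes a normal
# distribution for the 18-month change in the unemployment rate, centered at
# zero; both assumptions are illustrative, not taken from the survey itself.
from scipy.stats import norm

p_rise = 1 / 200                    # survey-implied probability of a 2-pp rise
sigma = 2.0 / norm.ppf(1 - p_rise)  # implied standard deviation of the change
print(f"implied sigma of the 18-month change = {sigma:.2f} pp")  # about 0.78
```

An implied spread of under 0.8 percentage point leaves essentially no weight on an increase of the size seen in 2008-2009.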
How does this matter? Well, the Fed’s monetary policy aims in part to insure the economy against impending risks — a task that requires having some sense of how serious those risks might be. The central bank remains set on raising interest rates because it sees downside risks, such as a sharp decline in growth and hiring, as being relatively small. But there seem to be good reasons to worry that, just as in 2007, any model-based assessment of those risks is overly optimistic — and perhaps wildly so.
— Bloomberg
