Why Are Economists So Bad at Forecasting?

In recent weeks, the Obama Administration has taken heat for having underestimated how bad the economy would get. With the unemployment rate surpassing the Administration's best predictions, ABC's George Stephanopoulos asked Vice President Joe Biden if the stimulus package was too small or if the White House simply didn't have a handle on the seriousness of our economic ills. When an interviewer posed a similar question on National Public Radio, the chair of the President's Council of Economic Advisers, Christina Romer, replied, "It's important to realize that none of us has a crystal ball."

That may sound like a cop-out, at least until you consider how many other economic forecasters got it wrong. Dozens of states have found themselves with budget shortfalls, some quite massive, partly because economists weren't ratcheting down expectations of tax revenue nearly fast enough. In late November, the Federal Reserve Bank of Philadelphia's Survey of Professional Forecasters predicted that GDP would decline at an annual rate of 1.1% in the first three months of 2009. The economy proceeded to shrink at an annual rate of 5.5%.

The problem is twofold. First, it's hard to predict the future. Second, it's really hard to predict the future when so many parts of the economy are in flux. "This has been an extraordinarily difficult period for forecasters," says Harvard economist James Stock. "Our models aren't really designed for predicting massive changes." Philip Joyce, a professor of public policy and administration at George Washington University, figures that in normal times, budget projections a couple of years out tend to be pretty reliable, less so at five years, and hardly at all at 10. "But these aren't normal times," he says. "In recessions, even the short-term numbers aren't very good, because a lot of the factors that go into them are based on assumptions that the economy will behave within some narrow band of reality, and the way it behaves is outside of that band."

The fundamental problem is that economic models, the systems of equations meant to describe how different parts of the economy fit together, depend on historical data. If you want to know how high unemployment is going to be six months from now, you start with how high it is today. If the economy isn't stable and the old relationships don't hold as well as they used to, then the models break down. In recent years there have been great advances in economic modeling, says Stock, but forecasts haven't necessarily gotten much better, because the economy itself has grown more complex.
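To make that concrete, here is a minimal sketch in Python, with invented numbers rather than anything from an actual forecasting shop, of the kind of historical relationship such models encode: a one-variable autoregression that predicts next quarter's unemployment rate from this quarter's.

```python
# Illustrative only: a one-variable autoregression, fit by least squares,
# that predicts next quarter's unemployment rate from this quarter's.

def fit_ar1(series):
    """Estimate u[t+1] = a + b * u[t] by ordinary least squares."""
    x, y = series[:-1], series[1:]
    mean_x = sum(x) / len(x)
    mean_y = sum(y) / len(y)
    b = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
    a = mean_y - b * mean_x
    return a, b

# Hypothetical quarterly unemployment rates from a calm stretch of history.
history = [4.6, 4.5, 4.4, 4.5, 4.6, 4.7, 4.6, 4.7]
a, b = fit_ar1(history)

# Iterate the fitted equation forward: tomorrow looks like today.
u = history[-1]
for quarter in (1, 2):
    u = a + b * u
    print(f"quarter +{quarter}: {u:.2f}%")
```

Coefficients estimated on placid data bake in placidity; when the economy lurches outside the band the model was fit on, the projection misses, which is exactly the failure Stock describes.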

One way to compensate is to pay attention to a broader range of forecasts. That's why there's no shortage of publishing and financial firms surveying groups of economists, presenting all of their opinions as "consensus" forecasts. A 2003 study by researchers at the Federal Reserve Bank of Atlanta found that the Blue Chip Consensus Forecast, which polls some 50 economists each month, is consistently better than any of its individual members. The researchers dubbed that result a "reverse Lake Wobegon effect": everyone was below average. During economic turning points — like the one we're currently in — the individual forecasts veered further off the mark.
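A toy calculation, with numbers invented for illustration, shows both halves of that finding: when individual errors scatter on both sides of the truth they partly cancel in the average, but at a turning point, when everyone misses in the same direction, averaging can't rescue the consensus.

```python
# Toy illustration (numbers invented): why a consensus forecast tends to
# beat its individual members, and when it doesn't.

def consensus_vs_individuals(forecasts, actual):
    consensus = sum(forecasts) / len(forecasts)
    individual = [abs(f - actual) for f in forecasts]
    print(f"consensus error: {abs(consensus - actual):.2f}, "
          f"mean individual error: {sum(individual) / len(individual):.2f}")

# Normal times: forecasters scatter on both sides of the truth,
# so offsetting errors cancel in the average.
consensus_vs_individuals([0.8, 2.4, 1.0, 2.0, 0.5, 2.6], actual=1.5)

# Turning point: everyone misses in the same direction,
# so averaging cannot save the forecast.
consensus_vs_individuals([-1.1, -0.5, -1.8, -0.9, -2.0, -1.5], actual=-5.5)
```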

Even with averaging, though, forecasts can still be wildly disappointing — as the Philadelphia Fed's Survey of Professional Forecasters shows. In mid-February, the economists collectively predicted a second-quarter unemployment rate of 8.3%. The difference between that and the actual figure, 9.3%, translates into 1.5 million more people unemployed.
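The arithmetic behind that headcount is worth making explicit. A quick back-of-the-envelope check, assuming a mid-2009 U.S. labor force of roughly 154 million (an approximation, not a figure from the survey):

```python
# Back-of-the-envelope: a one-percentage-point unemployment miss,
# applied to an assumed labor force of roughly 154 million people.
labor_force = 154_000_000       # approximate U.S. civilian labor force, 2009
miss = 0.093 - 0.083            # actual 9.3% minus forecast 8.3%
print(f"{miss * labor_force:,.0f} more people unemployed")  # ~1,500,000
```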

What might help us better use economic forecasts, then, is to more explicitly take into account the limits that come with any forecast. You wouldn't find an economist publishing a paper in a journal without margins of error around the data — and yet we routinely drop such nuance when we talk about economic variables in public conversation. "One of the things that gets lost is the fact that there are ways of trying to assess errors in forecasts," says Robert Eisenbeis, a former researcher at the Atlanta Fed who is now chief monetary economist at the money-management firm Cumberland Advisors. "It's possible to think about these forecasts not as 'GDP is going to be 0.6% this year' but as 'GDP is going to be 0.6% plus or minus something.' What's relevant is how big that plus or minus is."
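In code, Eisenbeis's "plus or minus something" might look like the following sketch, where the band is derived from a hypothetical track record of past forecast errors (all numbers invented):

```python
# Sketch of attaching a "plus or minus" to a point forecast by looking
# at the spread of past forecast errors. Numbers are invented.

import statistics

# Hypothetical record of past errors (forecast minus actual), in points.
past_errors = [0.4, -0.3, 0.2, -0.5, 0.1, -0.2, 0.6, -0.4]

point_forecast = 0.6            # "GDP is going to be 0.6% this year"
spread = statistics.stdev(past_errors)

# Report the forecast as a range rather than a single number.
print(f"GDP growth: {point_forecast:.1f}% plus or minus {2 * spread:.1f} points")
print(f"i.e., roughly {point_forecast - 2 * spread:.1f}% "
      f"to {point_forecast + 2 * spread:.1f}%")
```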

There's an easy way to represent this visually: the fan chart. What starts as a line quickly becomes a blurry fan-shaped region, underscoring that as time marches forward, the variable being forecast falls into an ever broader range of possibilities. "The more you think in terms of distribution of outcomes, the better," says Gregory Mankiw, a Harvard economist who chaired President George W. Bush's Council of Economic Advisers. "You're always keeping in mind the inherent uncertainty." The Bank of England is a big user of the fan chart when its economists talk about inflation forecasts. Plenty of U.S. agencies, like the Social Security Administration, use them, too.
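A bare-bones version of such a chart can be drawn in a few lines. This sketch uses matplotlib and invented figures, not the Bank of England's actual methodology; the uncertainty bands here simply widen linearly with the forecast horizon.

```python
# A minimal fan chart sketch (invented numbers): a central forecast path
# with uncertainty bands that widen the further out the forecast goes.

import matplotlib.pyplot as plt

quarters = list(range(9))                      # horizon: 0..8 quarters ahead
central = [2.0 + 0.1 * q for q in quarters]    # hypothetical central path

# Draw nested bands, widest and faintest first.
for width, shade in [(1.5, 0.15), (1.0, 0.25), (0.5, 0.4)]:
    lower = [c - width * q / 4 for c, q in zip(central, quarters)]
    upper = [c + width * q / 4 for c, q in zip(central, quarters)]
    plt.fill_between(quarters, lower, upper, alpha=shade, color="tab:blue")

plt.plot(quarters, central, color="tab:blue")
plt.xlabel("Quarters ahead")
plt.ylabel("Inflation forecast (%)")
plt.title("Fan chart: the range of outcomes widens over time")
plt.show()
```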

Highlighting the imprecision embedded in any forecast would also help when we go on to evaluate whether a public-policy initiative, like a $787 billion stimulus package, is changing the outcome of events. Is unemployment lower than it would have been otherwise? A good starting point is being realistic about how well we can know what unemployment would have been otherwise.

Yet admission of that sort of nuance may wind up undermining part of the appeal of forecasts: how a single number can quickly jump from an economist's spreadsheet to a politician's stump speech or a businessman's PowerPoint presentation. "Forecasts satisfy a deep psychological need that we live in a somewhat predictable and controllable world," says Philip Tetlock, a professor of organizational behavior at the Haas School of Business at the University of California, Berkeley. "Those are essential stories. People just find the truth" — that the future is unknowable — "too dissonant."