The High Price of Faulty Government Data

Economic policy is written on the basis of figures that simply don't add up. Time for new math

At 8:30 a.m. on Friday, June 4, the Bureau of Labor Statistics released its much-anticipated monthly jobs report. Expectations were running high that the U.S. job market was finally rebounding. And at first glance, the numbers looked decent: 431,000 jobs added and the unemployment rate down modestly, to 9.7%. On closer inspection, the report was less encouraging. More than 400,000 of those new jobs came from the government's hiring of Census workers, not from companies ramping up for growth, and the jobless rate fell only because hundreds of thousands of people became so discouraged that they dropped out of the workforce.

Financial markets took the report badly. Commentators were equally quick to pounce and warn of a double-dip recession, and the report became another arrow in the quiver of those assailing current economic policy.

There is just one problem: these numbers are wrong.

They always are. The jobs numbers are revised each month and then again in subsequent years. Sometime later this year, we may learn that twice as many jobs were lost in May as we thought, or that hundreds of thousands more were created. The numbers are generated from surveys and then smoothed by complicated statistical formulas, but however sophisticated all that may be, the world is simply more complex than our ability to measure it in real time.
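The smoothing itself is one source of revisions. Below is a minimal sketch, in Python, of ratio-to-moving-average seasonal adjustment; the BLS's actual procedure, X-12-ARIMA, is far more elaborate, and every payroll figure here is invented. The point is that the seasonal factors get re-estimated whenever new months arrive, so the adjusted number for a past month quietly changes:

```python
# A deliberately simplified stand-in for the BLS's seasonal adjustment.

def seasonal_factors(raw, period=12):
    """Estimate one multiplicative seasonal factor per calendar month."""
    n = len(raw)
    half = period // 2
    # Crude trend estimate: a centered 12-month moving average (edges undefined).
    trend = {i: sum(raw[i - half:i + half]) / period
             for i in range(half, n - half)}
    # Average each calendar month's ratio of raw value to trend.
    by_month = {}
    for i, t in trend.items():
        by_month.setdefault(i % period, []).append(raw[i] / t)
    return {m: sum(r) / len(r) for m, r in by_month.items()}

def adjust(raw, period=12):
    """Divide out the estimated seasonal factor for each month."""
    factors = seasonal_factors(raw, period)
    return [x / factors.get(i % period, 1.0) for i, x in enumerate(raw)]

def history(months):
    """Hypothetical payrolls (thousands of jobs) with a summer hiring bump."""
    return [130_000 + 50 * m + (1_500 if m % 12 in (5, 6) else -800)
            for m in range(months)]

first_estimate = adjust(history(24))   # the estimate with two years of data
revised = adjust(history(27))          # the same series, three months later

# The "final" number for the same past month has moved, with no new
# information about that month at all:
print(f"{first_estimate[18]:.1f} -> {revised[18]:.1f}")
```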

This problem transcends the jobs report. Examine almost any government statistic or calculation closely and you will find a guesstimate. Take government predictions of health care costs, an issue that has been in full focus in the past year. In projecting the costs of the drug-benefit program Medicare Part D, the nonpartisan Congressional Budget Office saw its estimates change radically. Its 2006 calculation put the cost through 2015 at less than $600 billion; a later recalculation raised that to $800 billion, a mere 33% swing. Had the full costs been recognized initially, it is less likely the program would have become law.

On almost every level, we are making national economic policy on the basis of problematic data and inadequate models. The only way to improve the models would be to spend considerable time and money creating new ones, but that seems unlikely in today's budget-conscious climate.

Meanwhile, we limp along in the same old way. Take GDP figures. In late April, the Bureau of Economic Analysis (BEA) released its report on first-quarter GDP, which showed the economy growing at a 3.2% annual rate. Then at the end of May, it released a revision showing that growth was actually 3%. The BEA will release another revision in late June, which will be taken as the final number, except that it will be revised again four years from now.

Then there's inflation. First there is headline inflation, which reflects the price changes of a basket of goods, each weighted by how heavily consumers in the statistical sample use it. Then there is core inflation, which excludes the more volatile food and fuel costs and is preferred by economists as an indicator of prices in general. And then there is hedonic pricing, which attempts to account for technological change, so that the $22,000 2010 Honda Civic you just bought, computer chips and all, can cost less, statistically speaking, than the $14,000 Civic you bought in 1994.
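To make those differences concrete, here is a minimal sketch with an invented five-item basket; the real CPI is built from thousands of items with survey-derived weights, and the Civic's 1.8x quality ratio below is pure assumption:

```python
# An invented basket: index weight and year-over-year price change per item.
# (The real CPI uses thousands of items with survey-derived weights.)
basket = {
    "rent":     (0.35, 0.010),
    "food":     (0.15, 0.060),
    "fuel":     (0.10, 0.120),
    "cars":     (0.15, 0.020),
    "services": (0.25, 0.015),
}

def weighted_inflation(items):
    """Weighted average of price changes, renormalizing the weights."""
    total = sum(w for w, _ in items.values())
    return sum(w * change for w, change in items.values()) / total

headline = weighted_inflation(basket)
core = weighted_inflation({k: v for k, v in basket.items()
                           if k not in ("food", "fuel")})
print(f"headline: {headline:.1%}  core: {core:.1%}")  # 3.1% vs 1.4% here

# Hedonic adjustment in miniature: if the 2010 Civic is judged to deliver
# 1.8 times the "quality" of the 1994 model (a pure assumption), its
# quality-adjusted price is lower despite the higher sticker price.
sticker_1994, sticker_2010, quality_ratio = 14_000, 22_000, 1.8
print(sticker_2010 / quality_ratio < sticker_1994)  # True
```

Excluding food and fuel cuts the measured rate by more than half in this toy basket, which is why the choice of measure can matter as much as the measurement itself.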

The flimsiness of these statistics, and of the way we assemble them, is not the result of human stupidity. It is the consequence of increasingly complex systems and a globalized economy of supply chains, which have evolved more quickly than our ability to measure them.

The problem is that politics in general and economic policy in particular demand simplicity and predictability. You can't pass major legislation in the U.S. by saying, "Well, there are too many variables to know for sure how this will add to the budget deficit." You can't claim to have fixed problems unless there is clear evidence that you have done so, even if the whole idea of clear evidence is a fiction.

The problem with our data maps, in short, isn't just that they're inexact. It's that we decide how to spend trillions of dollars, invest trillions more and answer the simple question "How are we doing?" using outmoded methods and questionable figures. We need better models, and we need them urgently. Given the nature of the information we currently use for our collective vital statistics, it's a wonder we still have an economy to argue about.

Karabell is the president of River Twice Research and a co-author of the forthcoming book Sustainable Excellence.