The economics data revolution has growing pains

By now, most people who read about economics have heard about the empirical revolution in the field. An economist used to be someone who spun theories out of reasonable-sounding assumptions to tell stories about why the world works the way it does. Nowadays, economists still have to understand theory, but their day-to-day work involves combing through data and crunching statistics.
The empirical revolution is a good thing — it will make people take economists more seriously as scientists, and result in fewer nasty surprises for policy makers who in the past might have relied too much on speculative theory.
But rapid growth usually comes with growing pains, and empirical economics is no different. As evidence became more and more important to the discipline, it was inevitable that the methods empirical researchers use would come under increasing scrutiny. And that scrutiny was bound to find some systematic mistakes and methodological issues.
One example of this scrutiny comes from Alwyn Young of the London School of Economics. In a recent paper, Young evaluates the use of a common empirical technique known as instrumental variables, or IV. IV is used to separate causation from correlation. For example, suppose you want to find the effect of marriage on income. If you find that higher income people are more likely to be married, that could mean marriage makes you richer, or it could mean that richer people feel more comfortable getting married. To find out which causes which, you could try to find a third thing — say, a change in divorce laws — that affects marriage but doesn’t directly affect income. That third thing is called an instrument.
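To see the mechanics, here is a minimal sketch of the idea on simulated data, using the simplest IV estimator (the Wald ratio). Everything in it is invented for illustration; the variable names just echo the marriage example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical model: an unobserved trait raises both income and the
# propensity to marry, so naively regressing income on marriage is
# confounded. A divorce-law change shifts marriage but touches income
# only through marriage, which is what makes it a usable instrument.
trait = rng.normal(size=n)                   # unobserved confounder
divorce_law = rng.binomial(1, 0.5, size=n)   # the instrument
married = (1.5 * divorce_law + 0.8 * trait + rng.normal(size=n)) > 0
married = married.astype(float)
income = 1.0 * married + 2.0 * trait + rng.normal(size=n)  # true effect = 1.0

# Naive regression slope, cov(income, married) / var(married):
# biased upward because it absorbs the hidden trait.
naive = np.cov(income, married)[0, 1] / married.var(ddof=1)

# IV (Wald) estimate: the instrument's effect on income divided
# by its effect on marriage.
iv = np.cov(income, divorce_law)[0, 1] / np.cov(married, divorce_law)[0, 1]

print(f"naive slope: {naive:.2f}   IV estimate: {iv:.2f}   truth: 1.00")
```

On data like this, the naive slope lands far above the true effect of 1.0, while the IV estimate recovers it.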
In the past, criticisms of IV have mostly focused on cases where the instrument is weak. But Young shows that even in cases where the instrument is strong, it often introduces lots of noise to measurements. That noise can easily make economists’ estimates unreliable, leading to false claims of statistical significance.
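A rough way to see Young's point is to rerun a model like the one above on many small samples and compare the spread of the naive and IV estimates. The setup below is again invented, and the instrument is strong by the usual first-stage standards; the extra scatter in the IV estimates is the noise in question.

```python
import numpy as np

rng = np.random.default_rng(1)

def one_study(n=500):
    """One small-sample draw from a hypothetical strong-instrument model."""
    trait = rng.normal(size=n)        # unobserved confounder
    z = rng.binomial(1, 0.5, size=n)  # instrument; first-stage F well above 10
    x = 0.8 * z + 0.8 * trait + rng.normal(size=n)
    y = 1.0 * x + 2.0 * trait + rng.normal(size=n)  # true effect = 1.0
    naive = np.cov(y, x)[0, 1] / x.var(ddof=1)
    iv = np.cov(y, z)[0, 1] / np.cov(x, z)[0, 1]
    return naive, iv

draws = np.array([one_study() for _ in range(2_000)])
print("spread (std) of naive estimates:", draws[:, 0].std().round(3))
print("spread (std) of IV estimates:  ", draws[:, 1].std().round(3))
# The IV estimates are centered on the truth but several times noisier;
# in any one sample, that noise can pass for a large, significant effect.
```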
That economists routinely ignore this problem is just one case of a larger issue. Economists generally pretend that their data sets are huge, when in fact they tend to be rather small. This leads them to ignore the problems and tradeoffs that arise from small samples.
Another researcher who has focused on economists’ undersized data sets is John Ioannidis, a professor at Stanford University’s medical school. The author of a famous paper called “Why most published research findings are false,” Ioannidis has built a reputation as a relentless watchdog of empirical sloppiness in the health and medical professions. He found that when researchers gathered only small amounts of data, they were more prone to false positives.
That makes intuitive sense. If you meet five Dutch people and they’re all a little bit short, it’s easy to come to the incorrect conclusion that Dutch height is below average — when in fact, Dutch people tend to be very tall. If you met 1,000 Dutch people, you wouldn’t make this mistake. As in Young’s paper, the key here is statistical error — small data sets leave more room for uncertainty, which researchers too often ignore.
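The arithmetic behind that intuition is easy to check by simulation. In the sketch below, the height figures (a Dutch mean of about 183 centimeters, standard deviation of about 7) are rough approximations used only for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
DUTCH_MEAN, DUTCH_SD = 183.0, 7.0  # rough figures, for illustration only

for n in (5, 1000):
    means = rng.normal(DUTCH_MEAN, DUTCH_SD, size=(10_000, n)).mean(axis=1)
    lo, hi = np.percentile(means, [5, 95])
    print(f"n={n:4d}: 90% of sample means fall between {lo:.1f} and {hi:.1f} cm")
# With n=5 the sample mean roams over roughly a ten-centimeter range;
# with n=1000 it stays within a few tenths of the true mean.
```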
In a new paper called “The power of bias in economics research,” co-authored with T.D. Stanley and Hristos Doucouliagos, Ioannidis applies his basic insights to the econ field. Examining a staggering 6,700 economics studies — Young, in comparison, looked at only 32 — Ioannidis et al. find that most economists use data sets that are problematically small relative to the size of the effects they report. This means that a sizable fraction of the findings reported by economists are simply the result of publication bias — the tendency of academic journals to report accidental results that look statistically significant.
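That mechanism, low power plus a significance filter, is also easy to demonstrate. In the sketch below, the true effect, sample size and threshold are all invented; the point is that when only the statistically significant results get “published,” the published record overstates the truth:

```python
import numpy as np

rng = np.random.default_rng(3)

TRUE_EFFECT = 0.2  # hypothetical true effect, in standard-deviation units
N_PER_STUDY = 50   # a deliberately small sample per study
N_STUDIES = 10_000

# Each simulated study estimates the effect from its own small sample.
samples = rng.normal(TRUE_EFFECT, 1.0, size=(N_STUDIES, N_PER_STUDY))
estimates = samples.mean(axis=1)
se = 1.0 / np.sqrt(N_PER_STUDY)              # standard error of the mean
significant = np.abs(estimates) / se > 1.96  # two-sided p < 0.05

print(f"share of studies that find the effect (power): {significant.mean():.2f}")
print(f"true effect: {TRUE_EFFECT:.2f}")
print(f"average estimate among significant studies: {estimates[significant].mean():.2f}")
# With power around 0.3, it is mostly the lucky overestimates that clear
# the significance bar, so the published record exaggerates the effect.
```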
So like other fields, empirical economics is probably afflicted with a lot of spurious results. In the long run, that won’t prevent the truth from being discovered — even if one study fails to establish the facts, 100 will usually do the trick. But it represents wasted effort if professors are hard at work pumping out findings that later get disproven. And it does lead to excessive media hype about the latest hot finding.
These two papers aren’t the only ones to criticize modern empirical economics methods. A large team of experimental economists, including Caltech’s Colin Camerer, recently found that a substantial minority of econ experiments fail to replicate. This “replication crisis” mirrors the one in experimental psychology.
And in a 2010 paper, famously skeptical economist Ed Leamer pointed out that the quasi-experimental studies that have become enormously popular in the past few years might not have wide applicability. For example, a study that looks at the effects of raising the minimum wage from $4.25 an hour to $5.05 in New Jersey in 1992 might not tell us much about the effects of raising it from $7.70 to $10 in St. Louis in 2015.
So smart people are lining up to take shots at the empirical economics revolution. In the short term, these lessons may well be invoked by those who want econ to revert to a theory-first discipline. But economists will not heed the scattered calls to give up on evidence and go back to being mathematical philosophers. Instead, young economists will see these criticisms and take them to heart.
They’ll search for larger data samples, and be more careful with how they report statistical significance. They’ll be more careful with their experiments, and more circumspect about how they generalize from single studies. And the quality of evidence in econ will go up and up.
— Bloomberg

Noah Smith is a Bloomberg View columnist. He was an assistant professor of finance at Stony Brook University, and he blogs at Noahpinion.
