For a few decades, economists used to imagine how the world works, write down a theory describing their idea, and call it a day. If some statisticians came along and found some support for the theory, well, great! But usually they didn’t, and that was fine too. As one old joke put it, if an idea worked in practice, economists would ask whether it worked in theory. That began to change in the late 1980s and 1990s. As my Bloomberg View colleague Justin Fox has documented, that was when the discipline started shifting toward empirics and evidence.
The key was the explosion of affordable information technology that made it easier to gather and analyze data. By the ’90s, there was such a huge stock of untested theories and such a wealth of new data that it made more sense for young, smart economists to turn their efforts in empirical directions. Unlike in physics, where theory and experiment call for very different skill sets, most economists found they could switch from theory to data relatively easily. Prizes like the prestigious John Bates Clark Medal, awarded to rising economics stars under age 40, started to flow to people whose work emphasized data and practical applications.
But there’s a second shift in progress — a sort of Stage 2 of the data revolution in economics. The tools of empirical economists are changing. And that may cause a change in the kinds of theories that economists use as well.
The core of economic theory, as it’s practiced today, is based on individual optimization. For example, economists often assume that businesses maximize profits or minimize costs. A model built on this kind of optimization is known as a structural model, because economists usually assume that optimizing behavior represents the deep, fundamental structure of the economy, just as everything in your body is made up of atoms and molecules. Comparing this kind of model to data is called structural estimation, and for a while it formed the core of empirical economics.
But structural estimation has its limitations. Since structural models are usually very complicated, the answers they give to simple questions — for example, “How many people will lose their jobs if we raise the minimum wage?” — can be very sensitive to the assumptions of the model. Tweak one assumption, and the answer might change completely.
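To make that fragility concrete, here is a stylized sketch of structural estimation in Python. The constant-elasticity labor demand model, the synthetic data and every number below are illustrative assumptions of mine, not any particular study’s:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic data: a "true" labor demand elasticity of 0.5, plus noise.
w = rng.uniform(8, 15, size=200)                    # observed wages
L = 1000 * w ** -0.5 * rng.lognormal(0, 0.05, 200)  # observed employment

# The structural assumption: cost-minimizing firms imply constant-elasticity
# labor demand, L = A * w**(-eps). eps is the "deep" parameter we estimate.
def labor_demand(w, A, eps):
    return A * w ** -eps

(A_hat, eps_hat), _ = curve_fit(labor_demand, w, L, p0=[500.0, 1.0])

# Counterfactual question: what if a wage floor pushes wages from $10 to $12?
drop = 1 - (12 / 10) ** -eps_hat
print(f"estimated elasticity: {eps_hat:.2f}")
print(f"predicted employment drop: {drop:.1%}")

# The fragility: keep everything else fixed but assume a technology with
# eps = 1.5 instead, and the predicted drop roughly triples.
print(f"under the tweaked assumption: {1 - (12 / 10) ** -1.5:.1%}")
```

The first answer comes from a fitted “deep” parameter; the second comes from tweaking a single structural assumption. Same question, very different answers.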
So in recent years, many economists have been turning to an alternative approach and chucking theory out the window entirely. Instead of a complicated model about optimization and utility functions and blah blah blah, just look for a case where some kind of random change in the economy — a so-called natural experiment — offers a window into some important question. For example, you could study a random influx of refugees to answer the question of how immigration affects local labor markets. You don’t need a complicated theory of how workers and companies behave — all you need is a simple linear model of how X affects Y.
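Here is an equally stylized sketch of that approach: a difference-in-differences regression, one of the standard quasi-experimental designs, run on made-up data about a hypothetical refugee influx. The variable names and effect sizes are invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400

# Synthetic panel: some cities get a sudden refugee influx ("treated"),
# and we observe wages before and after it ("post").
df = pd.DataFrame({
    "treated": rng.integers(0, 2, size=n),
    "post": rng.integers(0, 2, size=n),
})
# Build log wages with a true influx effect of -0.02 on treated cities
# after the influx, plus a common time trend and noise.
df["log_wage"] = (
    2.5
    + 0.03 * df["post"]
    + 0.01 * df["treated"]
    - 0.02 * df["treated"] * df["post"]
    + rng.normal(0, 0.05, size=n)
)

# Difference-in-differences: the coefficient on treated:post is the
# estimated effect of the influx on local wages.
fit = smf.ols("log_wage ~ treated * post", data=df).fit()
print(fit.params["treated:post"])
```

The whole causal claim rides on one coefficient and one assumption (that treated and untreated cities would have trended alike absent the influx), which is exactly what makes the approach both simple and humble.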
The chief evangelists of this approach are economists Joshua Angrist and Jörn-Steffen Pischke. They have called the advent of natural-experiment techniques — also known as quasi-experimental methods — the “credibility revolution.” And their book about the subject is titled “Mostly Harmless Econometrics.” The implication is that quasi-experimental studies, because they are more humble than structural models, are also less likely to give us the wrong answers to our most important questions. And so far, the revolution is winning. As economists Matthew Panhans and John Singleton document in a recent paper, quasi-experimental techniques are an increasingly large piece of academic publishing. Searching the economics literature for terms related to these techniques, they find that the words are much more common than they were two decades earlier.
This is still a small percentage of econ articles, but the growth rate is impressive. And the results of quasi-experimental studies seem to be getting more attention and exposure from policy wonks and the media. This is probably because complicated structural models are easy to call into question — just challenge one or two of the (inevitably unrealistic) assumptions involved. It’s also because quasi-experimental studies, with their simple math, are just easier for most people to understand.
Of course, this type of empirical economics has major limitations. Because quasi-experimental studies don’t give you an underlying theory of how the economy works, their predictive power diminishes rapidly as conditions change. What these studies gain in reliability, they lose in generality.
But if quasi-experimental methods continue to gain currency, many economists will start asking why the field bothers with ambitious structural theories in the first place. Perhaps econ’s famous obsession with theory will become less all-consuming in the decades to come.
—Bloomberg
Noah Smith is a Bloomberg View columnist. He was an assistant professor of finance at Stony Brook University, and he blogs at Noahpinion.