Sunday, September 10, 2017

a16z podcast on trade


I recently had the pleasure of appearing on the a16z podcast (a16z stands for Andreessen Horowitz, the venture capital firm). The topic was free trade, and the other guest was Russ Roberts of EconTalk.

Russ is known for making the orthodox case for free trade, and I've expressed some skepticism and reservations, so it seemed to me that my role in this podcast was to be the trade skeptic. So I thought of three reasons why pure, simple free trade might not be the optimal approach.


Reason 1: Cheap labor as a substitute for automation

Getting companies and inventors to innovate is really, really hard. Basically, no one ever captures the full monetary benefit of their innovations, so society relies on a series of kludges and awkward second-best solutions to incentivize innovative activity.

One of the ideas that has always fascinated me is the notion that cheap labor reduces the incentive for labor-saving innovation. This is the Robert Allen theory of the Industrial Revolution - high wages and cheap capital forced British businesspeople to start using machines, which then opened up a bonanza of innovation. It also pops up in a few econ models from time to time.

I've written about this idea in the context of minimum wage policy, but you can also apply it to trade. In the 00s, U.S. manufacturing employment suddenly fell off a cliff, and after about 2003 manufacturing productivity growth slowed down (even though you might expect it to accelerate, since less productive workers tend to be laid off first). That might mean that the huge dump of cheap Chinese labor onto the world market caused rich-world businesses to slack off on automation.

That could be an argument for limiting the pace at which rich countries open up trade with poor ones. Of course, even if true, this would be a pretty roundabout way of getting innovation, and totally ignores the well-being of the people in the poor country.

Also, this argument is more about the past than the future. China's unit labor costs have risen to the point where the global cheap labor boom is effectively over (since no other country or region is emerging to take China's place as a high-productivity cheap manufacturing base).


Reason 2: Adjustment friction

This is the trade-skeptic case that everyone is waking up to now, thanks to Autor, Dorn and Hanson. The economy seems to have trouble adjusting to really big rapid trade shocks, and lots of workers can end up permanently hurt.

Again, though, this is an argument about the past, not the future. The China Shock is over and done, and probably won't be replicated within our lifetime. So this consideration shouldn't affect our trade policy much going forward.


Reason 3: Exports and productivity

This is another productivity-based argument. It's essentially the Dani Rodrik argument for industrial policy for developing countries, adapted to rich countries. There is some evidence that when companies start exporting, their productivity goes up, implying that the well-known correlation between exports and productivity isn't just a selection effect.

So basically, there's a case to be made that export promotion - which represents a deviation from classic free trade - nudges companies to enter international markets where they then have to compete harder than before, incentivizing them to raise their productivity levels over time. That could mean innovating more, or it could just mean boosting operational efficiency to meet international standards.

This is the only real argument against free trade that's about the future rather than the past. If export promotion is a good idea, then it's still a good idea even though the China Shock is over. I would like to see more efforts by the U.S. to nudge domestically focused companies to compete in world markets. It might not work, but it's worth a try.


Anyway, that's my side of the story. Russ obviously had a lot to say as well. So if you feel like listening to our mellifluous voices for 38 minutes, head on over to the a16z website and listen to the podcast! And thanks to Sonal Chokshi for interviewing us and doing the editing.

Friday, September 08, 2017

Realism in macroeconomic modeling


Via Tyler Cowen, I see that Ljungqvist and Sargent have a new paper synthesizing much of the work that's been done in labor search-and-matching theory over the past decade or so.

This is pretty cool (and not just because these guys are still doing important research at an advanced age). Basically, Ljungqvist and Sargent are trying to solve the Shimer Puzzle - the fact that in classic labor search models of the business cycle, productivity shocks aren't big enough to generate the kind of employment fluctuations we see in actual business cycles. A number of theorists have proposed resolutions to this puzzle - i.e., ways to get realistic-sized productivity shocks to generate realistic-sized unemployment cycles. Ljungqvist and Sargent look at these and realize that they're basically all doing the same thing - reducing the value of a job match to the employer, so that small productivity shocks are more easily able to stop the matches from happening:
The next time you see unemployment respond sensitively to small changes in productivity in a model that contains a matching function, we hope that you will look for forces that suppress the fundamental surplus, i.e., deductions from productivity before the ‘invisible hand’ can allocate resources to vacancy creation. 
The fundamental surplus fraction is the single intermediate channel through which economic forces generating a high elasticity of market tightness with respect to productivity must operate...The role of the fundamental surplus in generating that response sensitivity transcends diverse matching models... 
For any model with a matching function, to arrive at the fundamental surplus take the output of a job, then deduct the sum of the value of leisure, the annuitized values of layoff costs and training costs and a worker’s ability to exploit a firm’s cost of delay under alternating-offer wage bargaining, and any other items that must be set aside. The fundamental surplus is an upper bound on what the “invisible hand” could allocate to vacancy creation. If that fundamental surplus constitutes a small fraction of a job’s output, it means that a given change in productivity translates into a much larger percentage change in the fundamental surplus. Because such large movements in the amount of resources that could potentially be used for vacancy creation cannot be offset by the invisible hand, significant variations in market tightness ensue, causing large movements in unemployment.
That's a useful thing to know.
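To make the quoted mechanism concrete, here's a minimal numeric sketch (the numbers are mine, not from the paper): when deductions eat up most of a job's output, a small percentage change in productivity becomes a much larger percentage change in the fundamental surplus.

```python
# Illustrative numbers only. Output of a job is y; d is the sum of fixed
# deductions (value of leisure, annuitized layoff/training costs, etc.);
# the fundamental surplus is s = y - d.
def surplus(y, d):
    return y - d

y0, d = 1.00, 0.95           # surplus is only 5% of a job's output
y1 = y0 * 0.99               # a 1% drop in productivity

s0, s1 = surplus(y0, d), surplus(y1, d)
pct_drop_in_productivity = (y0 - y1) / y0   # 1%
pct_drop_in_surplus = (s0 - s1) / s0        # 20%

# Because the deductions don't move with productivity, a 1% productivity
# shock is amplified into a 20% drop in the resources available for
# vacancy creation -- which is what moves market tightness and unemployment.
```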

Of course, I suspect that recessions are mostly not caused by productivity shocks, and that these business cycle models will ultimately be improved by instead considering shocks to the various things that get subtracted from productivity in the "fundamental surplus". That should affect unemployment in much the same way as productivity shocks, but will probably have advantages in explaining other business cycle facts like prices. Insisting that the shock that drives unemployment be a productivity shock seems like a tic - a holdover from a previous age. But that's just my intuition - hopefully some macroeconomist will do that exercise.

But anyway, I think the whole field of labor search-and-matching models is interesting, because it shows how macroeconomists are gradually edging away from the Pool Player Analogy. Milton Friedman's Pool Player Analogy, if you'll recall, is the idea that a model doesn't have to have realistic elements in order to be a good model. Or more precisely, a good macro model doesn't have to fit micro data, only macro data. I personally think this is silly, because it ends up throwing away most of the available data that could be used to choose between models. Also, it seems unlikely that non-realistic models could generate realistic results.

Labor search-and-matching models still have plenty of unrealistic elements, but they're fundamentally a step in the direction of realism. For one thing, they were made by economists imagining the actual process of workers looking for jobs and companies looking for employees. That's a kind of realism. Even more importantly, they were based on real micro data about the job search process - help-wanted ads in newspapers or on websites, for example. In Milton Friedman's analogy, that's like looking at how the pool player actually moves his arm, instead of imagining how he should move his arm in order to sink the ball.

It's good to see macroeconomists moving away from this counterproductive philosophy of science. Figuring out how things actually work is a much more promising route than making up an imaginary way for them to work and hoping the macro data is too fuzzy to reject your overall results. Of course, people and companies might not search and bargain in the ways that macroeconomists have so far assumed they do. But because labor search modelers tend to take micro data seriously, bad assumptions will probably eventually be identified, questioned, and corrected.

This is good. Chalk labor search theory up as a win for realism. Now let's see macroeconomists make some realistic models of business investment!


Update

For some reason, a few people read this post as claiming that labor search theory is something new. It's not! I was learning this stuff in macro class back in 2008, and people have been thinking about the idea since the 70s. In fact, if anything, there seems to be a mild dampening of enthusiasm for labor search models recently, though this is hard to gauge. One exception is that labor search models have been incorporated into New Keynesian theory, which seems like a good development.

Sadly, though, I haven't seen any similar theory trend dealing with business investment. This post was supposed to be a plug for that.

Thursday, September 07, 2017

An American Whitopia would be a dystopia


In a recent essay about the racial politics of the Trump movement, Ta-Nehisi Coates concluded with a warning:
It has long been an axiom among certain black writers and thinkers that while whiteness endangers the bodies of black people in the immediate sense, the larger threat is to white people themselves, the shared country, and even the whole world. There is an impulse to blanch at this sort of grandiosity. When W. E. B. Du Bois claims that slavery was “singularly disastrous for modern civilization” or James Baldwin claims that whites “have brought humanity to the edge of oblivion: because they think they are white,” the instinct is to cry exaggeration. But there really is no other way to read the presidency of Donald Trump.
Yes, at first glance, the notion that Trumpian white racial nationalism is a threat to the whole world, or the downfall of civilization, etc. seems a bit of an exaggeration. Barring global thermonuclear war, Trump and his successors aren't going to bring down human civilization - the U.S. is powerful and important, but it isn't nearly that powerful or important.

But there's an important truth here. An America defined by white racial nationalism - an American Whitopia - would be an economic and cultural disaster. It would be a dysfunctional, crappy civilization, sinking into the fetid morass of its own decay. Some people think that an American Whitopia would be bad for people of color but ultimately good for whites, but this is dead wrong. Although nonwhite Americans would certainly suffer greatly, white American suffering under the dystopia of a Trumpist society would be dire and unending. 

Here is a glimpse of that dark future, and an explanation of why it would fail so badly.


Don't think Japan. Think Ukraine.

First, a simple observation: Racial homogeneity is no guarantee of wealth. Don't believe me? Just look at a night photo of North Korea and South Korea:


The red arrow and white outline point to North Korea. It's pitch dark at night because it's poor as hell. People starve there. But it's every bit as ethnically pure and homogeneous as its neighbor South Korea - in fact, it's the same race of people. North Korea puts a ton of cultural emphasis on racial homogeneity. But that doesn't save its society from being a dysfunctional hellhole.


OK, so North and South Korea are an experiment. They prove that institutions matter - that a homogeneous society can either be rich and happy or poor and hellish, depending on how well it's run.

It's not just East Asia we're talking about, either. It's incredibly easy to find deeply dysfunctional white homogeneous countries. Ukraine, for instance. Ukraine's per capita GDP is around $8,300 at purchasing power parity. That's less than 1/6 of America's. It's also a deeply dysfunctional society, with lots of drug use and suicide and all of that stuff, and has been so since long before the Donbass War started. 

It's worth noting that Ukraine also has an economy largely based on heavy industry and agriculture - just the kind of economy Trump wants to go back to. So being a homogeneous all-white country with plenty of heavy industry and lots of rich farmland hasn't saved Ukraine from being a dysfunctional, decaying civilization. 

Alt-righters explicitly call for America to be a white racial nation-state. Some cite Japan as an example of a successful ethnostate. Japan is great, there's no denying it. But I know Japan, and let me assure you, an American Whitopia would not be able to be Japan. It definitely wouldn't be Sweden or Denmark or Finland. It couldn't even be Hungary or the Czech Republic or Poland. It would probably end up more like Ukraine. 

Here's why.


Where are your smart people?

Modern economies have always depended on smart people, but the modern American economy depends on them even more than others and even more than in the past. The shift of industrial production chains to China has made America more dependent on knowledge-based industries - software, pharmaceuticals, advanced manufacturing, research and design, business services, etc. Even the energy industry is a high-tech, knowledge-based industry these days. Take away those industries, and America will be left trying to compete with China in steel part manufacturing. How's that working out for Ukraine?

If you want to understand how important knowledge-based industries are, just read Enrico Moretti's book, "The New Geography of Jobs". Cities and towns with lots of human capital - read, smart folks - are flourishing, while old-line manufacturing towns are decaying and dying. Trump has sold people a fantasy that his own blustering bullshit can reverse that trend, but if you really believe that, I've got a bridge to sell you.

So here's the thing: Smart Americans have no desire to live in a Whitopia. First, let's just look at smart white people. Among white Americans with a postgraduate degree, Clinton beat Trump in 2016 by a 13-point margin, even though Trump won whites overall by a 22 point margin. Overall, education was the strongest predictor of which white people voted for Trump and which went for Clinton. Also note that close to 2/3 of the U.S.' GDP is produced in counties that voted for Clinton. 

Richard Florida has been following smart Americans around for a long time, and he has repeatedly noted how they like to live in diverse places. Turn America into an ethnostate, and the smart white people will bolt for Canada, Australia, Japan, or wherever else isn't a racist hellhole.

Now look beyond white people. A huge amount of the talent that sustains America's key industries comes from Asia. An increasing amount also comes from Africa and the Middle East, though Asia is still key. Our best science students are mostly immigrants. Our grad students are mostly immigrants. Our best tech entrepreneurs are about half immigrants (https://blogs.wsj.com/digits/2016/03/17/study-immigrants-founded-51-of-u-s-billion-dollar-startups/). You make America into Whitopia, and those people are gone gone gone.

I'm not saying every single smart American would leave an American white ethnostate. But most would, and many of those who remain wouldn't be happy. 

There's a clear precedent for this: Nazi Germany. Hitler's persecution of Jews made Jewish scientists leave. But it also prompted an exodus of scientists who weren't Jewish but who didn't like seeing their Jewish colleagues, friends, and spouses get persecuted - Erwin Schroedinger, for example, and Enrico Fermi. This resulted in a bonanza of talent for America, and it starved Nazi Germany of critical expertise in World War 2. Guess who built the atom bomb? 


How you get there matters

There are just about 197 million non-Hispanic white people in the United States. But the total population of the country is 323 million. That means that around 126 million Americans are nonwhite. Among young Americans, nonwhites make up an even larger percentage. 

To turn America into a white racial nation-state - into Whitopia - would require some combination of four things:

1. Genocide

2. Ethnic cleansing (expulsion of nonwhites)

3. Denial of legal rights to nonwhites

4. Partition of the country

To see how these would go, look to historical examples. 

Genocide is usually done against a group that's a small minority, like Armenians or Jews. Larger-scale genocides are occasionally attempted - for example, Hitler's plan to wipe out the bulk of the Slavs, or the general mass murder of 25% of the population in Pol Pot's Cambodia. These latter attempts at mega-genocide killed a lot of people (Hitler slaughtered 25 million Slavs or so), but eventually they failed, with disastrous consequences for both the people who engineered them and the countries that acquiesced to the policies.

Denial of legal rights to minorities also has a poor record of effectiveness. The Southern slavery regime in the U.S., the apartheid regime in South Africa, and the Jim Crow system in the U.S. all ended up collapsing under the weight of moral condemnation, economic inefficiency, and war. 

Ethnic cleansing and partition have somewhat less disastrous records - see India/Pakistan, or Israel/Palestine, or maybe the Iraqi Civil War that largely separated Sunni and Shia. But "less disastrous" doesn't mean "fine". Yes, India and Pakistan and Israel survived intact. But those bloody campaigns of separation and expulsion left scars that still haven't healed. The cost of Israeli partition was an endless conflict and a garrison state. The cost of Indian partition was a series of wars and an ongoing nuclear standoff, not to mention terrorism in both India and Pakistan. 

In America, a partition would lead to a long bloody war. Remember, 39% of whites voted for Hillary Clinton. And the 29% of Asians and Hispanics who voted for Trump are unlikely to express similar support for a policy that boots them out of their country or town. Furthermore, nonwhite Americans are not confined to a single region that could be spun off into a new country, but concentrated in cities all over the nation. Thus, any partition would involve a rearrangement of population on a scale unprecedented in modern history. That rearrangement would inevitably be violent - a civil war on a titanic scale. 

That war would leave lots of bitterness and social division in its wake. It would leave bad institutions in place for many decades. It would elevate the worst people in the country - the people willing to do the dirty deeds of ethnic cleansing. In an earlier post about homogeneity vs. diversity, I wrote about how a white ethnostate created by an exodus of whites from America or Europe would probably be populated by the most fractious, violent, division-prone subset of white people. A white ethnostate created by a titanic civil war and mass ethnic cleansing would be run by an even worse subset.

This is why a partition or ethnic cleansing of America would lead to lower social trust, bad institutions, a violent society, and a kakistocracy. In other words, a recipe for a country that looks more like Ukraine (or even North Korea) than it does like Japan. 


It's already happening

This isn't just theoretical, and it isn't just based on historical analogies either. There are already the first signs of dysfunction and dystopia in the new America that Trump, Bannon, Sessions, Miller, and others are working to create. 

First of all, the places that voted for Trump are not doing so well economically or socially. Not only do Trump counties represent only about a third of the nation's GDP, but they also tend to be suffering disproportionately from the opiate epidemic. States that shifted most strongly toward Trump from 2012 to 2016, like Ohio, tend to be Rust Belt states with low levels of education, low immigration, and low percentages of Asians and Hispanics. Imagine all the things that make Ohio slightly worse off than Texas or California or New York or Illinois, then multiply those things by 1000 - and take away all the good economic stuff in Ohio, like the diverse urban revival in Columbus - to see what a Trumpian Whitopia would look like. 

Second, Trump is already creating a kakistocracy. His administration, of course, is scandal-ridden and corrupt. His allies are the likes of Joe Arpaio, who is reported to have tortured undocumented immigrants. His regime has emboldened murderous Nazi types to march in the street, and his condemnation of those Nazis has been rather equivocal.

That episode caused business leaders - some of the smartest, most capable Americans - to abandon the Trump administration. If even business leaders - who are mostly rich white men - abandon an administration with even a whiff of white nationalism, imagine who would be in charge in a Whitopia. It would not be the Tim Cooks and Larry Pages and Elon Musks of the world. It would be far less competent people. 

So already we're seeing the first few glimmerings of a dystopian Whitopia. We're still a long way off, of course - things could get a million times worse. But the Trump movement gives us a glimpse of what that path would look like, and it ain't pretty. 


Whitopia: a self-inflicted disaster of epic proportions

Refashioning America as a white ethnostate would be a self-inflicted catastrophe of epic, unprecedented proportions. It would drive America from the top rank of nations to the middle ranks. It would involve lots of pain and death and violence for everyone, but the white Americans stuck in Whitopia would suffer the longest. Nonwhite Americans would move away and become refugees, or die in the civil wars. But the ones who survived would escape the madness and begin new lives elsewhere, in saner, more functional countries. 

Meanwhile, white Americans and their descendants would be trapped in the decaying corpse of a once-great civilization. A manufacturing-based economy making stuff no one else wanted to buy, bereft of the knowledge industries and vibrant diverse cities that had made it rich. A violent society suffering long-lasting PTSD from a terrible time of war and atrocity. A divided society, with simmering resentment underneath the surface, like Spain under Franco. A corrupt, thuggish leadership, with institutions that keep corrupt, thuggish leaders in power. 

This is what it would take to turn America from a diverse, polyracial nation into a white ethnostate. That is the price that white Americans, and their children, and their children's children would pay. 

It's not worth it.

Thursday, August 24, 2017

The Market Power Story


So, there's this story going around the econosphere, which says that the economy is being throttled by market power. I've sort of bought into this story. It certainly seems to be getting a lot of attention from top economists. Autor, Dorn, Katz, Patterson and van Reenen have blamed industrial concentration for the fall in labor's share of income. Now there's a new paper out by De Loecker and Eeckhout blaming monopoly power for much more than that - lower wages, lower labor force participation, slower migration, and slow GDP growth. The paper is getting plenty of attention.

That's a big set of allegations. Everyone knows that the U.S. economy has been looking anemic since the turn of the century, and now a growing chorus of papers by well-respected people is claiming that we've found the culprit. Monopoly power could potentially become Public Enemy #1 for economists, the way taxes and unions were in the 70s, and antitrust could become the new silver bullet policy.

With those kinds of stakes, it was inevitable that pushback and skepticism would rev up - after all, you don't just let a big theory like that go unchallenged. My Bloomberg View colleague Tyler Cowen is one of the first to step up to the plate, with a blog post criticizing the De Loecker and Eeckhout paper (BTW I just spelled those both correctly from memory. I want some kind of prize.)

Tyler's post really made me think. It raises some important issues and caveats. But ultimately I don't think it does that much to derail the Market Power Story. Here are some of my thoughts on Tyler's points.


1. Monopolistic Competition

Tyler:
There are two ways these mark-ups could go up: first there may be more outright monopoly, second there may be more monopolistic competition, with high mark-ups but also high fixed costs, and firms earning close to zero profits....Consider my local Chinese restaurant.  Maybe the fixed cost of a restaurant has gone up, due to rising rents and the need to invest in information technology.  That can mean higher fixed costs, but still a positive mark-up at the margin.
First of all, and most importantly, monopolistic competition is perfectly consistent with the Market Power Story. Monopolistic competition in general does not produce an efficient outcome. Though monopolistic competition doesn't generate long-term profits like monopoly does, it does generate deadweight losses. This is true even when market power comes from product differentiation, as in the typical Dixit-Stiglitz formulation. Monopolistic competition does involve market power, so could also explain the drop in labor share, wages, etc.

So this objection of Tyler's doesn't really go against the Market Power Story, which was always about monopolistic competition rather than outright monopoly.

What about markups vs. profits? In general, Tyler is right - higher markups could indicate higher fixed costs rather than higher profit margins.
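Here's a stylized version of that point, with made-up numbers: a firm can charge a healthy markup over marginal cost and still earn nothing once fixed costs are paid, which is the textbook zero-profit outcome of monopolistic competition.

```python
# Hypothetical restaurant: meals sold at a 30% markup over marginal cost,
# with a fixed cost that absorbs the entire variable profit.
marginal_cost = 10.0              # cost of producing one more meal
markup = 1.30                     # price is 30% above marginal cost
price = marginal_cost * markup    # 13.0
meals = 1000

variable_profit = (price - marginal_cost) * meals   # 3000
fixed_cost = 3000.0               # rent on equipment, IT systems, etc.

economic_profit = variable_profit - fixed_cost
# economic_profit == 0: a measured markup of 30%, but zero profit,
# so high markups alone don't distinguish monopoly rents from fixed costs.
```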

But what would these fixed costs be? Tyler suggests rent, but that is a variable cost, not a fixed cost. He also suggests information technology costs -- buying computers for your office, software for the computers, point-of-sale tech, etc. But advances in IT seem just as likely to reduce fixed costs as to raise them. Typewriters cost as much in the 60s as computers do now, but computers can do infinitely more. So much business can be done on the internet, using freely available tools like Google Sheets and Google Docs and free chat apps for workplace communications. Internet outsourcing also dramatically lowers fixed costs by turning them into variable costs.

I'm open to the idea that fixed costs have increased, but I can't easily think of what those fixed costs would be. Maybe modern business organizations are more complex, and therefore require more up-front investment in firm-specific human capital? I'm just hand-waving here.


2. Profits

Tyler:
The authors consider whether fixed costs have risen in section 3.5.  They note that measured corporate profits have increased significantly, but do not consider these revisions to the data.  Profits haven’t risen by nearly as much as the unmodified TED series might suggest.
Tyler is referring to the fact that foreign sales aren't counted when calculating official profit margins, leading these margins to be overstated. Here is Jesse Livermore's corrected series, which uses gross value added in the denominator:


Profit margins are at an all-time high, but not that much higher than in the 50s and 60s.

A more accurate measure of true economic profits (i.e., what you'd expect market power to produce) would include opportunity costs (cost of capital) in the numerator. Simcha Barkai does this in a recent paper, also using gross value added in the denominator. Here's his graph for the last 30 years:


His series tells basically the same story as Livermore's - profits have gone up up up. But he doesn't extend back to the 50s, so it's not clear whether higher capital costs back then would reduce the high profit margins seen on Livermore's graph. Interest rates were similar in the 50s and 60s to what they are now, so it seems likely that Barkai's method would also produce a large-ish profit share back then as well.

So it does seem clear that profit has gone way up in recent decades. But a full account should say why profit was also high in the 50s and 60s, and whether this too was caused by market power.
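Barkai's accounting works roughly like this (my paraphrase, with invented numbers): economic profit is what's left of gross value added after paying labor and the required return on the capital stock, so the measured profit share is sensitive to the cost of capital.

```python
# Illustrative accounting in the spirit of Barkai's measure; numbers are mine.
gross_value_added = 100.0
labor_compensation = 60.0     # payments to labor
capital_stock = 200.0
required_return = 0.10        # cost of capital (the opportunity cost)

capital_costs = required_return * capital_stock          # 20.0
economic_profit = gross_value_added - labor_compensation - capital_costs
profit_share = economic_profit / gross_value_added       # 0.20

# The lever to notice: holding everything else fixed, a higher required
# return back in the 50s-60s would have meant a smaller measured economic
# profit share -- which is why the historical cost of capital matters here.
```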

Also, as an interesting side note, Barkai mentions how corporate investment has fallen. That's interesting, because it definitely doesn't square with the "increasing fixed costs" story. Here's Barkai's graph:


If this is a rise in fixed costs we're looking at, where's the investment spending?


3. Market Concentration

Tyler:
In most areas we have more choice, maybe much more choice, than before...ask yourself a simple question — in how many sectors of the American economy do I, as a consumer, feel that concentration has gone up and real choice has gone down?  Hospitals, yes.  Cable TV?  Sort of, but keep in mind that program quality and choice wasn’t available at all not too long ago.  What else?  There are Dollar Stores, Wal-Mart, Amazon, eBay, and used goods on the internet.  Government schools.  Hospitals.  Government.  Did I mention government?
Hmm. Autor et al. show that market concentration has increased in basically all broad industrial categories. On one hand, that doesn't take geography and local market power into account - if there's only one store in town, does it matter if it's an indie store or a Wal-Mart? But I think it gives us reliable information that Tyler's anecdotes don't. 
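For concreteness, concentration in this literature is typically summarized by top-firm sales shares or by the Herfindahl-Hirschman index (HHI), the sum of squared market shares. A quick sketch (hypothetical industry, numbers are mine) of how a single merger moves the HHI:

```python
def hhi(shares_pct):
    """HHI: sum of squared market shares, shares in percent (max 10000)."""
    return sum(s ** 2 for s in shares_pct)

# Hypothetical five-firm industry, shares in percent.
before = [30, 25, 20, 15, 10]
# The 25% and 20% firms merge into one 45% firm.
after = [30, 45, 15, 10]

hhi_before = hhi(before)   # 2250
hhi_after = hhi(after)     # 3250 -- past the ~2500 "highly concentrated"
                           # threshold in the DOJ/FTC merger guidelines
```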

Also, Tyler is thinking only of consumer sectors. Much of the economy consists of intermediate goods and services - B2B. These could easily be getting more concentrated, even though we don't come into contact with them very often. 

(And one random note: Tyler at one point seems to equate product choice with market concentration, in the case of TV channels. But that's not right. If Netflix is the world's only distribution service, even if it has infinite movies and TV shows, it can jack up the price for watching TV and movies.)

That said, the example of retail is an interesting one. Autor shows that retail concentration has gone up, but I'm sure people now have more choice of retailers than they used to. I think the distinction between national concentration and local concentration probably matters a lot here. And that means maybe it matters for other industries too.

But as for which industries seem more concentrated than before, just off the top of my head...let me think. Banks. Airlines (which is why they aren't now all going bankrupt). Pharma. Energy. Consumer nondurables. Food. Semiconductors. Entertainment. Heavy equipment manufacturing. So anecdotally, it does seem like there's a lot of this going on, and it's not just health care and government. 


4. Output restriction

Tyler:
Similarly, the time series for manufacturing output is a pretty straight upward series, especially once you take out the cyclical component.  If there is some massive increase in monopoly power, where does the resulting output restriction show up in that data?  Once you ask that simple question, the whole story just doesn’t add up.
This is an important point. The basic model of monopoly power is that it restricts output. That's where the deadweight loss comes from (and the same for monopolistic competition too). But overall output is going up in most industries. What gives?

I think the answer is that it's very hard to know a counterfactual. How many more airline tickets would people be buying if the industry had more competition? How much more broadband would we consume? How many more bottles of shampoo would we buy? How many more miles would we drive? It's hard to know these things.
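The output-restriction logic the post leans on can be made concrete with the textbook model. Here's a minimal sketch with linear demand and constant marginal cost; all the numbers are hypothetical, chosen only to illustrate the mechanism, not taken from any industry:

```python
# Textbook monopoly-vs-competition comparison with linear inverse demand
# P = a - b*Q and constant marginal cost c. Hypothetical numbers only.

def outcomes(a, b, c):
    q_comp = (a - c) / b            # competitive output: price = marginal cost
    q_mono = (a - c) / (2 * b)      # monopoly output: marginal revenue = MC
    p_mono = a - b * q_mono         # monopoly price read off the demand curve
    # Deadweight loss: triangle between demand and MC over the lost output
    dwl = 0.5 * (q_comp - q_mono) * (p_mono - c)
    return q_comp, q_mono, p_mono, dwl

q_comp, q_mono, p_mono, dwl = outcomes(a=100, b=1, c=20)
print(q_comp, q_mono, p_mono, dwl)  # 80.0 40.0 60.0 800.0
```

The point of the counterfactual problem is visible here: in the data we only ever observe the monopoly quantity (40), never the competitive quantity (80) it should be compared against.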

Still, I think this question could and should be addressed with some event studies. Did big mega-mergers change output trends in their industries? That's a research project waiting to be done. 


So overall, I think that while Tyler raises some interesting and important points, and provides lots of food for thought, he doesn't really derail the Market Power Story. Even more importantly, that story relies on more than just the De Loecker and Eeckhout paper (and dammit, I had to look up the spelling this time!). The Autor et al. paper is important too. So is the Barkai paper. So are many other very interesting papers by credible economists. So is the body of work showing how antitrust enforcement has weakened in the U.S. To really take down the story, either some common problem will have to be found with all of these papers, or each one (and others to come) will have to be debunked independently, or some compelling alternate explanation will have to be found.

The Market Power Story is still alive, and still worrying. 


Update

Forgot to mention this in the original post, but basically I see the case of the Market Power Story - or any big economic story like this - as detective work. We're collecting circumstantial evidence, and while no piece of evidence is a smoking gun, each adds to the overall picture. IF the economy were being throttled by increased market power, we'd expect to see:

1. Increased market concentration (Check! See Autor et al.)

2. Increased markups (Check! See De Loecker and Eeckhout)

3. Increased profits (Check! See Barkai)

4. Decreased investment (Check! See Gutierrez and Philippon)

5. Increased prices following mergers (Probably check! See Blonigen and Pierce)

6. Weakened antitrust enforcement (Check! See Kwoka)

7. Decreased output (Not sure yet)

So, as I see it, the evidence is piling up from a number of sides here. Economists need to investigate the question of whether output has been restricted. But those who want to come up with an alternate story for the recent changes in industrial organization need one that's consistent with the various facts found by these various sleuthing detectives.


Update 2

Robin Hanson and Karl Smith both have posts responding to De Loecker and Eeckhout's paper and attacking the Market Power Story. Both give reasons why they think rising markups indicate monopolistic competition, rather than entry barriers. But both seem to forget that monopolistic competition causes deadweight loss. Just because it has the word "competition" in it does NOT mean that monopolistic competition is efficient. It is not.  


Update 3

Tyler has another post challenging the De Loecker and Eeckhout paper and the Market Power Story in general. His new post makes a variety of largely unconnected points. Briefly...

Tyler on general equilibrium:
If every sector of an economy becomes monopolistic, output will contract in each sector, and it might appear that productivity will decline.  But for the most part this output reduction will not be achieved by burning crops in the fields.  Rather, less will be produced and factors of production will be freed up for elsewhere.  New sectors will arise, and offer goods and services too, perhaps with monopolies as well... 
You can cite the deadweight loss of monopoly all you want, but we’re getting more outputs of other stuff.  Value-added could be either higher or lower, productivity too.
This seems like a hand-waving argument that economic distortions in one sector are never bad, because they free up resources to be used elsewhere. That's obviously wrong, though. To see this, suppose the government levied a 10000% tax on food. Yes, the labor and capital freed up from the contraction of the food industry would get used elsewhere. NO, overall this outcome would not be good for the economy. Monopoly acts like a tax, so a similar principle applies. 

No, resource reallocation does not make market distortions efficient. 

Tyler on innovation: 
The Schumpeterian tradition, of course, suggested that market power would boost innovation.  There are at least two first-order effects pushing in this direction.  First, the monopoly has more “free cash” for R&D, and second there is a lower chance of the innovation benefiting competing firms too.  I don’t view the “monopoly boosts innovation” hypothesis as confirmed, but it probably has commanded slightly more sympathy from researchers than the opposite point of view.  Bell Labs did pretty well.
This is actually a good and important point, and I don't think we can dismiss it at all. There are economists who argue monopoly reduces innovation, and others who argue it increases it. 

Tyler on product diversity:
[Y]ou must compare [the efficiency loss from monopolistic competition] to the rise in product diversity that follows from monopolistic competition.
Does market power increase product diversity? That was certainly Edward H. Chamberlin's theory back in the 1930s. When you start getting technical, the question becomes less clear.

Tyler on De Loecker and Eeckhout, again:
But under those same conditions, profits are zero and so the mark-up arguments from the DeLoeker and Eeckhout paper do not apply and indeed cannot hold.
That seems incorrect to me. The fact that long-term profits are zero does NOT make monopolistic competition efficient. So the De Loecker and Eeckhout argument can indeed hold, quite easily. This basic fact - the inefficiency of monopolistic competition in standard theory - keeps coming up again and again. It appears to be a key fact the bloggers now rushing to attack the De Loecker and Eeckhout paper have not yet taken into account.

Thursday, August 17, 2017

"Theory vs. Data" in statistics too


Via Brad DeLong -- still my favorite blogger after all these years -- I stumbled on this very interesting essay from 2001, by statistician Leo Breiman. Breiman basically says that statisticians should do less modeling and more machine learning. The essay has several responses from statisticians of a more orthodox persuasion, including the great David Cox (whom every economist should know). Obviously, the world has changed a lot since 2001 -- where random forests were the hot machine learning technique back then, it's now deep learning -- but it seems unlikely that this overall debate has been resolved. And the parallels to the methodology debates in economics are interesting.

In empirical economics, the big debate is between two different types of model-makers. Structural modelers want to use models that come from economic theory (constrained optimization of economic agents, production functions, and all that), while reduced-form modelers just want to use simple stuff like linear regression (and rely on careful research design to make those simple models appropriate).

I'm pretty sure I know who's right in this debate: both. If you have a really solid, reliable theory that has proven itself in lots of cases so you can be confident it's really structural instead of some made-up B.S., then you're golden. Use that. But if economists are still trying to figure out which theory applies in a certain situation (and let's face it, this is usually the case), reduced-form stuff can both A) help identify the right theory and B) help make decently good policy in the meantime.

Statisticians, on the other hand, debate whether you should actually have a model at all! The simplistic reduced-form models that structural econometricians turn up their noses at -- linear regression, logit models, etc. -- are the exact things Breiman criticizes for being too theoretical! 

Here's Breiman:
[I]n the Journal of the American Statistical Association JASA, virtually every article contains a statement of the form: "Assume that the data are generated by the following model: ..." 
I am deeply troubled by the current and past use of data models in applications, where quantitative conclusions are drawn and perhaps policy decisions made... 
[Data generating process modeling] has at its heart the belief that a statistician, by imagination and by looking at the data, can invent a reasonably good parametric class of models for a complex mechanism devised by nature. Then parameters are estimated and conclusions are drawn. But when a model is fit to data to draw quantitative conclusions... 
[t]he conclusions are about the model’s mechanism, and not about nature’s mechanism. It follows that...[i]f the model is a poor emulation of nature, the conclusions may be wrong... 
These truisms have often been ignored in the enthusiasm for fitting data models. A few decades ago, the commitment to data models was such that even simple precautions such as residual analysis or goodness-of-fit tests were not used. The belief in the infallibility of data models was almost religious. It is a strange phenomenon—once a model is made, then it becomes truth and the conclusions from it are [considered] infallible.
This sounds very similar to the things reduced-form econometric modelers say when they criticize their structural counterparts. For example, here's Francis Diebold (a fan of structural modeling, but paraphrasing others' criticisms):
A cynical but not-entirely-false view is that structural causal inference effectively assumes a causal mechanism, known up to a vector of parameters that can be estimated. Big assumption. And of course different structural modelers can make different assumptions and get different results.
In both cases, the criticism is that if you have a misspecified theory, results that look careful and solid will actually be wildly wrong. But the kind of simple stuff that (some) structural econometricians think doesn't make enough a priori assumptions is exactly the stuff Breiman says (often) makes way too many.

So if even OLS and logit are too theoretical and restrictive for Breiman's tastes, what does he want to do instead? Breiman wants to toss out the idea of a model entirely. Instead of making any assumption about the DGP, he wants to use an algorithm - a set of procedural steps to make predictions from data. As discussant Brad Efron puts it in his comment, Breiman wants "a black box with lots of knobs to twiddle." 

Breiman has one simple, powerful justification for preferring black boxes to formal DGP modeling: it works. He shows lots of examples where machine learning beat the pants off traditional model-based statistical techniques, in terms of predictive accuracy. Efron is skeptical, accusing Breiman of cherry-picking his examples to make machine learning methods look good. But LOL, that was back in 2001. As of 2017, machine learning - in particular, deep learning - has accomplished such magical feats that no one now questions the notion that these algorithmic techniques really do have some secret sauce. 
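Breiman's "it works" argument is easy to reproduce in miniature. Below is a toy sketch (not one of Breiman's actual examples): a hand-rolled OLS line versus a hand-rolled 1-nearest-neighbor predictor, on synthetic data where the linear model is misspecified. The data and both "models" are invented purely for illustration:

```python
# Parametric model (OLS line) vs. atheoretical predictor (1-nearest-neighbor)
# on data generated by a quadratic, so the linear model is misspecified.

def ols_fit(xs, ys):
    # Closed-form simple linear regression: returns a prediction function.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return lambda x: intercept + slope * x

def nn_fit(xs, ys):
    # "Black box": predict with the y of the nearest training x.
    return lambda x: min(zip(xs, ys), key=lambda p: abs(p[0] - x))[1]

def mse(model, xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

train_x = [i / 50 for i in range(51)]          # grid on [0, 1]
train_y = [x * x for x in train_x]             # true relationship: quadratic
test_x = [i / 50 + 0.01 for i in range(50)]    # held-out points between grid
test_y = [x * x for x in test_x]

ols = ols_fit(train_x, train_y)
nn = nn_fit(train_x, train_y)
print(mse(ols, test_x, test_y) > mse(nn, test_x, test_y))  # True
```

The nearest-neighbor "algorithm" wins on predictive accuracy without any model of the data-generating process, which is Breiman's point in one screen of code. (And Cox's rejoinder is visible too: ask the nearest-neighbor predictor about an x far outside [0, 1] and it just parrots the nearest edge point.)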

Of course, even Breiman admits that algorithms don't beat theory in all situations. In his comment, Cox points out that when the question being asked lies far out of past experience, theory becomes more crucial:
Often the prediction is under quite different conditions from the data; what is the likely progress of the incidence of the epidemic of v-CJD in the United Kingdom, what would be the effect on annual incidence of cancer in the United States of reducing by 10% the medical use of X-rays, etc.? That is, it may be desired to predict the consequences of something only indirectly addressed by the data available for analysis. As we move toward such more ambitious tasks, prediction, always hazardous, without some understanding of underlying process and linking with other sources of information, becomes more and more tentative.
And Breiman agrees:
I readily acknowledge that there are situations where a simple data model may be useful and appropriate; for instance, if the science of the mechanism producing the data is well enough known to determine the model apart from estimating parameters. There are also situations of great complexity posing important issues and questions in which there is not enough data to resolve the questions to the accuracy desired. Simple models can then be useful in giving qualitative understanding, suggesting future research areas and the kind of additional data that needs to be gathered. At times, there is not enough data on which to base predictions; but policy decisions need to be made. In this case, constructing a model using whatever data exists, combined with scientific common sense and subject-matter knowledge, is a reasonable path...I agree [with the examples Cox cites].
In a way, this compromise is similar to my post about structural vs. reduced-form models - when you have solid, reliable structural theory or you need to make predictions about situations far away from the available data, use more theory. When you don't have reliable theory and you're considering only a small change from known situations, use less theory. This seems like a general principle that can be applied in any scientific field, at any level of analysis (though it requires plenty of judgment to put into practice, obviously).

So it's cool to see other fields having the same debate, and (hopefully) coming to similar conclusions.

In fact, it's possible that another form of the "theory vs. data" debate could be happening within machine learning itself. Some types of machine learning are more interpretable, which means it's possible - though very hard - to open them up and figure out why they gave the correct answers, and maybe generalize from that. That allows you to figure out other situations where a technique can be expected to work well, or even to use insights gained from machine learning to allow the creation of good statistical models.

But deep learning, the technique that's blowing everything else away in a huge array of applications, tends to be the least interpretable of all - the blackest of all black boxes. Deep learning is just so damned deep - to use Efron's term, it just has so many knobs on it. Even compared to other machine learning techniques, it looks like a magic spell. I enjoyed this cartoon by Valentin Dalibard and Petar Veličković (tweeted by Dendi Suhubdy):




Deep learning seems like the outer frontier of atheoretical, purely data-based analysis. It might even classify as a new type of scientific revolution - a whole new way for humans to understand and control their world. Deep learning might finally be the realization of the old dream of holistic science or complexity science - a way to step beyond reductionism by abandoning the need to understand what you're predicting and controlling.

But this, as they say, would lead us too far afield...


(P.S. - Obviously I'm doing a ton of hand-waving here, I barely know any machine learning yet, and the paper I'm writing about is 16 years out of date! I'll try to start keeping track of cool stuff that's happening at the intersection of econ and machine learning, and on the general philosophy of the thing. For example, here's a cool workshop on deep learning, recommended by the good folks at r/badeconomics. It's quite possible deep learning is no longer anywhere near as impenetrable and magical as outside observers often claim...)

Monday, July 03, 2017

Why did Europe lose the Crusades?


A little while ago, I started to wonder about a historical question: Why did Europe lose the Crusades? The conventional wisdom, at least as I've always understood it, is that Europe was simply weaker and less advanced than the Islamic Middle Eastern powers defending the Holy Land. Movies about the Crusades tend to feature the Islamic armies deploying fearsome weapons - titanic trebuchets, or even gunpowder. This is consistent with the broad historical narrative of a civilizational "reversal of fortunes" - the notion that Islamic civilization was much more highly advanced than Europe in the Middle Ages. Also, there's the obvious fact that the Middle East is pretty far from France, Germany, and England, leading to the obvious suspicion that the Middle East was just too far away for medieval power projection.

Anyway, I decided to answer this question by...reading stuff about the Crusades. I read all the Wikipedia pages for the various crusades, and then read a book - Thomas Asbridge's "The Crusades: The Authoritative History of the War for the Holy Land". Given that even these basic histories contain tons of uncertainty, we'll never really know why the Crusades turned out the way they did. But after reading up a bit, here are my takes on the main candidate explanations for why Europe ultimately lost.


Explanation 1: Technological Inferiority

To my surprise, this probably wasn't that big of a deal. From movies, and from reading Mongol history - the Mongols hired lots of Middle Easterners to improve their siege technology in the 1200s - I had thought that the armies of the Seljuk Turks and other Middle Eastern powers would be far in advance of those of Christian Europe. But apparently they were about equal. The Crusaders built a cool modular siege tower during the siege of Jerusalem in the First Crusade, allowing them to quickly move their tower to the other side of the city where defenses weren't ready for them. Also, during the siege of Acre in the Third Crusade, it was the Crusaders under Richard the Lionheart who built catapults of unprecedented size, not Saladin. And catapults were mainly used to fling stuff into cities, not to batter down city walls - only with the invention of cannon did big medieval walls become obsolete.

As for the gunpowder thing, it was probably deployed only very late in the Crusades, after the Mongols had already used it against European armies in their aborted invasion of Eastern Europe.

Muslim civilization probably was technologically superior to Christian Europe at the time of the Crusades, but the differences were nowhere near the enormous sorts of disparities that opened up in the world after the Industrial Revolution. The Middle East had better medicine, but medicine just wasn't that great anywhere. The Middle East also had some stuff like lateen sails, which allowed them to sail the Indian Ocean, but their ships weren't big enough to create really huge sea trade with places like China.

Militarily, the Middle Easterners had one important technology that European armies lacked: Horse archers. I have no idea why Europeans didn't use horse archers, but this lack seemed to put them at a consistent disadvantage relative to Central Asian armies in the Middle Ages. The Mongols, especially, used expert large-scale horse archery to run right over every army that fought them in the field, including European armies. In the Crusades, constant skirmishing by Turkish horse archers often kept European armies on the defensive in open battles.

But for some reason, the Seljuk Turks and other Muslim armies just don't seem to have used horse archery as decisively as the Mongols regularly did. Despite being usually outnumbered and often faced with horse archers, Crusader armies won their fair share of battles. In the Third Crusade, Richard the Lionheart beat Saladin every time they fought. In the First Crusade and after, the Crusader armies won several pitched battles. Maybe Mongols had perfected the art of horse archer warfare in a way that others hadn't - after all, they also managed to consistently defeat all of their Central Asian enemies, including Turkish armies, in horse archery warfare.

Anyway, it does not seem like the Muslims of the Middle East stomped the Crusaders using superior technology.


Explanation 2: Political Division

The European Crusaders, and the rulers of the Crusader States, were certainly politically divided. There were tensions between the Crusaders and the Byzantines, through whose territory they often traveled to reach the Middle East - in fact, this eventually led to the Crusaders actually sacking the Byzantine capital and effectively ending that empire's power. There was a distinct lack of coordination between Crusader leaders on most of the major crusades. The Crusader States were plagued by secession disputes and backstabbing. Rivalries between the Crusader kings in the Third Crusade were one big reason they eventually abandoned that Crusade to go back to Europe and fight each other.

Obviously, this had a very deleterious effect on Crusader effectiveness. But actually, the Muslim world was just as divided as the Christian one, which dramatically weakened Muslim resistance to the Crusades. The Abbasid-Fatimid division probably allowed the First Crusade to seize Jerusalem in the first place, because Jerusalem was on the boundary between those two rival Muslim powers' territories. The main anti-Crusade leaders, Nur ad-Din and Saladin, spent a lot of their time and effort and resources subduing Muslim Syria and/or Muslim Iraq instead of fighting the Crusaders. Saladin came to power by overthrowing the Fatimids in Egypt and rebelling against his Zangid overlords in Syria. In general, the Muslims of the Middle East seemed to spend only sporadic and occasional effort kicking the Crusaders out of the Levant, and a lot more time fighting one another.

So political division was probably a wash here.


Explanation 3: Geographic Distance

This is certainly a big factor. The Mongols could easily gallop across the plains of Central Asia with their herds of animals, but most medieval armies were limited by expensive transport, crappy ships, and the political fragmentation of intervening territories. It's a long way from northern France to Israel. Crusaders had to either beg for help from the Byzantines (with whom they often fought) or buy ships from the Italian city-states. The history of the Crusades is filled with episodes where Crusade expeditions ended up fighting locals on the way over, or got ambushed, or suffered desertions, or had their leaders accidentally die. What's more, even after the First Crusade succeeded and established the Crusader States, they could only receive an intermittent trickle of European reinforcements. As a result, they were chronically outnumbered by their Muslim neighbors by huge margins.


Europeans were much more effective at driving the Muslims out of Spain, where they had the advantage of proximity. In fact, both the Crusader States and the fate of Muslim Spain show how geography led to an enduring, though porous, border between Europe and the Middle East.

So geographic distance has to be a factor. In the Middle Ages, unless you were a Central Asian warlord with a mounted army, you just couldn't conquer a very large swathe of territory, because it was so hard to get your army from Point A to Point B.

But after reading the history of the Crusades, I'm actually reasonably convinced that geography was only the second-biggest reason Europe ultimately lost...


My Explanation: Lack of Motivation

When we modern folks think of war, we tend to think of huge, dramatic, to-the-bitter-end conflicts like the World Wars. We think of FDR saying "The American people in their righteous might will win through to absolute victory", or French and German armies dying by the millions in the trenches. But I think that for most of history's wars, the question of "why we fight" was just a lot harder to answer, and subject to constant change.

In the Crusades, this is most clearly illustrated by the Third Crusade. Richard the Lionheart handily defeated the main Muslim leader, Saladin, in a series of battles and sieges. He advanced his army to within a short distance of Jerusalem - and then quit without taking the city. He tried to convince the army to attack Egypt instead, but the troops weren't interested in that. Much of his army deserted and everyone ridiculed him, so he gathered another army and again advanced near to Jerusalem. Saladin's army basically ran away, and Saladin was preparing to surrender the city. But again, Richard quit. He worked out a deal with Saladin and headed back to Europe to fight other Europeans.

This lack of will to fight was also in evidence in the later Crusades. The Fourth Crusaders decided they'd rather attack the Byzantines than the Muslims. Enthusiasm for the Crusades steadily fell after the first two, leading to smaller and smaller European armies. The Crusader States struggled to defend themselves, but European armies seemed far more noncommittal.

Why did Europeans prosecute most of the Crusades in such a lackluster fashion? Asbridge suggests that after the first two Crusades, Europe began transitioning from a deeply religious society to one more concerned with worldly politics. There were still spontaneous outpourings of religiously driven crusading fervor from the general populace - for example, the Children's Crusade - but their enthusiasm wasn't generally matched by experienced military types. Only the First Crusade seems to have resulted from a mass outpouring of religious devotion among people who actually knew how to fight wars and lead armies.

While the First Crusade was led by experienced warlords who seemed to genuinely believe that crusading would expunge their sins, later Crusades were mostly led by kings and other nobles whose main aim seems to have been building their prestige in Europe. Richard the Lionheart was a super-effective military leader, but the places he was really interested in conquering and ruling were England and France.

I also suspect that the territories the religious zealots wanted to take - especially Jerusalem - were just not that economically valuable. Acre, Tyre and other Levantine ports were valuable because of trade, but Jerusalem was basically a symbolic prize surrounded by crappy farmland. It's important to remember that pretty much everyone in the Middle Ages, and certainly every country, was desperately poor and frequently on the edge of starvation (except for Sung China, which was enjoying a golden age). Every war therefore had to have an economic dimension as well as a political one - there were just no surplus resources for ideological conflict.

My hunch that Jerusalem was economically worthless comes from the details of the Crusades themselves. Muslim leaders consistently avoided conquering the Christian Kingdom of Jerusalem, generally focusing their efforts on Syria, Egypt, or Mesopotamia. Richard the Lionheart tried to get his troops to bypass Jerusalem and attack Egypt - which makes economic sense, because Egypt had great riverside farmland and valuable ports. In the Fifth Crusade, the Egyptian Muslim leaders offered to just give Jerusalem to the Crusaders to get them to leave the Muslims alone; the Crusaders said no (and ended up losing on the battlefield). In the Sixth Crusade the Muslim leader actually did just give Jerusalem to the Crusaders (they lost it again later). The troops on both sides of the conflict seem to have been strongly religiously motivated and wanted Jerusalem, but the leaders thought in economic terms and tended not to care about the supposed main objective.

So I think that although geography was a difficult obstacle, if there had really been a long-term point to the Crusades, the Europeans would have put forth a greater effort after the First Crusade. They might not have held Jerusalem forever, but they would have made a much better showing than they did.


The Real Lesson of the Crusades

In fact, despite the incredible wealth of the modern world, I think the question of "Why are we even fighting this war?" still matters crucially. In Vietnam, the U.S. defeated the Viet Cong decisively and could have easily stomped any force North Vietnam threw at us, but we (wisely) decided that there was nothing worth fighting for there. Using massive force of arms to force a country not to go communist when it wants to go communist is just a dead-end objective. We lost the war not because winning was militarily too difficult, but because there was no such thing as winning.

Iraq was clearly not just a military but also a political victory for the United States - our preferred government still sits in power there, and every opposing army has been crushed. Most people throughout history would label that a "victorious" war, as would Wikipedia. But lots of Americans still think we "lost" in Iraq. My hunch is that what they're really sensing is that there was nothing at all worth fighting for in Iraq (at least up until the appearance of ISIS), and therefore there was no such thing as winning.

The Crusades also bear lessons for modern would-be Crusaders who think the West is locked in an eternal struggle with Islam. They should stop more often to think, in the immortal words of Basil Fawlty: "I mean, what is the bloody point??"

Wednesday, June 21, 2017

Noah Smackdown, illegal immigration edition


In February, I wrote a Bloomberg View post called "The Myth of the Immigration Crisis" that got a fair bit of attention. In it, I wrote:

Illegal immigration to the U.S. ended a decade ago and, according to the Pew Research Center, has been zero or negative since its peak in 2007: 


About a million undocumented immigrants left the country in the Great Recession. But even after the end of the recession, illegal immigration didn’t resume.
Now, my Twitter buddy Lyman Stone of the USDA has written a post alleging that my post is "bad" and "false". Well, my mom always told me "Son, don't **** with the USDA," and that advice has served me well for many years. However, given the importance of this issue, I may have to ignore my mother's wise words, and rebut Lyman's post. Which won't be that hard to do, because Lyman, being the perspicacious fellow he is, in fact agrees with me on almost every substantive point.


In which Lyman agrees with me on essentially everything important

I'm just going to shamelessly cherry-pick the parts where Lyman agrees with me and then goes on to cite more evidence in support of my thesis:
[Noah's evidence shows] that the illegal immigrant population has fallen since its peak. I 100% agree there. He’s totally correct. The stock of unauthorized residents in the US is almost certainly well below historic highs... 
Pew gets their estimate [of the number of unauthorized immigrants] by starting from American Community Survey 1-year estimates of the foreign-born population, then subtracting naturalized citizens. Then they use non-ACS data to estimate how many non-citizens are lawful permanent residents (LPRs) or legal temporary residents (LTRs). The residual must be unauthorized residents. 
This is the best method we have available and Pew does very good work. I have no criticism of Pew’s estimates insofar as they go.. 
Now, again, we can say with substantial confidence that the illegal immigrant population was declined since 2007... 
Let me be clear. I think Noah is [quite a handsome dude, and is also] correct that net migration of illegal immigrants has been negative in some periods since 2007. And I am very confident that he is correct that the illegal immigrant population is falling... 
What frustrates me is that Noah’s basic point, that illegal immigration is a vastly smaller problem now than 10 or 15 years ago, is totally correct. There’s tons of data to support it...He could have just shown the trend in border apprehensions, or shown the illegal immigrant share of the population, or other kinds of data. If he really wanted to be clever, he could have just lined up border apprehensions with deportations by fiscal year to see what direct migration trends might look like...
OK, I might have taken a few liberties there with the brackets, but the point is, Lyman agrees with me that according to the best estimates we have available, the population of unauthorized immigrants in the U.S. has fallen from its peak. Given that he agrees with both my thesis and the substance of my point, it strikes me as a bit odd that he characterizes my post as "false" and "bad", but as a man who once pasted Paul Krugman's head on a giant cartoon robot, I probably shouldn't criticize bloggers' use of hyperbole.

Lyman is also right that if I expressed the unauthorized population as a percent of the total, the decline would be even more stark. I'm not sure what increased border apprehensions tell us.

So, to reiterate, Lyman agrees with my basic point. The rest of his post consists of A) quibbles about vocabulary and messaging, B) a dubious point about error bars, C) an interesting but ultimately non-game-changing point about mortality, and D) bikini pics of Jim Heckman from 1971.

Well, no, not (D). Lyman's many things, but he's no monster.


Like, dude, what does "illegal immigration" even mean? 

First, note that following Bloomberg convention, I say "unauthorized immigrants" as the noun and "illegal immigration" as the verb. Because an act can be illegal, but a person can't (though I'm sure Jeff Sessions is working on it). So git off my back, y'all SJWs.

Anyway, when we talk about "the amount of illegal immigration", what does that mean? It could mean a couple things:

1. Gross illegal inflows: The number of people who enter the U.S. illegally or overstay their visas over a given period of time

2. Net migration of unauthorized immigrants: The number of people who enter the U.S. illegally or overstay their visas, minus the number of unauthorized residents who exit the country, over a given period of time

What the Pew numbers report, and what I reported, was neither of these. I reported the net change in the unauthorized resident population. That is similar to #2 above, but also includes the effect of mortality (as I'll talk about in a bit).

Anyway, which number do people think of when they hear "illegal immigration"? I'm sure some people do think of the first one. If you're a law-and-order type who is really upset about our porous border, then I'm sure you care about gross flows across that border. Lyman thinks that gross illegal inflows = the One True Definition of the term "illegal immigration":
The point is, everyone who works in this field, all the actual experts, including the folks at Pew whom Noah cites, use “illegal immigration” to refer to inflows which do not have legal authorization. That’s what the term means. It’s not just me. Here’s dictionary.com:


It means inflows. Exclusively.
Well, call me a lawyer, but it seems to me that if you're going to cite dictionary.com to tell you what "illegal immigration" means, you should at least use the dictionary.com page for "illegal immigration" (which BTW doesn't exist).

But that's not the point. The point is come on, brah, my Bloomberg post wasn't fooling anybody. First of all, I define exactly what I mean by "illegal immigration", because the graph is labeled "Annual change in unauthorized immigrant population". It's right there in the graph! I defined my terms! Neener!!

Second of all, that graph has negative numbers on it. How big of a critical theorist dum-dum do you have to be to think a negative number represents gross inflows? Gross inflows can't go negative! They are bounded below by zero! They are defined on the set Z+! Is there someone out there looking at my chart and mistakenly believing that half a million antimatter people snuck across the border in 2008??

God, I hope not. Please let there not be such a reader. But if there is, I'm not sure what it would take on my part to avoid misleading him.


OK, down to brass tacks. What number should we care about here?

Like I said, if you're the type of person who lies awake at night fuming that someone managed to sneak past the almighty Border Patrol unnoticed, then you care a lot about gross illegal inflows. I don't, really. Oh, I think there are a few reasons to care - linguistic assimilation, for example. If the unauthorized population keeps getting switched out, it'll slow the rate at which that population becomes proficient in English, the language of American business and culture (and dubbed anime). In fact, that's probably one reason unauthorized immigrants tend to assimilate more slowly.

But overall, what I mostly care about - and what I think everyone else should mostly care about - is the stock of unauthorized immigrants living in the country at any given time. First of all, this is what should matter for labor markets. The data has convinced me that the labor market impact of low-skilled immigration is small, but I'm not 100% certain of that, and even a small negative impact on America's most vulnerable workers is bad. But it's the stock, not the gross flow, of unauthorized immigrants that should determine the severity of labor competition faced by low-wage American workers.

Also, the stock is what matters for the welfare state. Low-skilled immigrants probably take as much or more in govt benefits as they pay into the system in taxes, so unauthorized immigrants put pressure on the sustainability of the welfare state. But again, it's the stock, not the gross flow, that matters for welfare payments.

So if what I care about is the stock, why do I talk about changes in the stock? Why do I act like there's no problem just because the stock is hovering at a constant number?

It's all about urgency. If the total number of unauthorized immigrants isn't increasing, there's no reason to panic. There's no reason to start calling for a big shift in our immigration policy. The Obama approach of increased border security and increased criminal deportations is doing a great job of keeping the U.S. from being swamped by illegal immigration, even if it didn't do a great job of winning anti-illegal-immigration voters over to the Democrats.

So I feel like by using the term "illegal immigration" to mean "the change in the total number of unauthorized residents", I was getting at the quantity that really matters.


Did I ignore margins of error?

Yeah. I reported point estimates without talking about margins of error. Let he who is without sin cast the first Stone.

SEE? It was a pun! Lyman's last name is Stone! Get it?? BUAHAHAHA

...OK, anyway. Let's talk about margins of error. Lyman produces a graph of year-on-year changes in unauthorized immigrant population with some error bars he cooked up:


Wow, what looks like zero could actually be an increase of half a million unauthorized immigrants per year, right??

Wrong. The errors don't add up over time. If Pew were measuring border crossings and using that to infer the total unauthorized population, then yeah, the errors in their estimates would cumulate. But what they're doing is re-measuring the unauthorized population over and over each year. Which means that if we want to measure the change in total unauthorized population between Time A and Time B, we don't care about any of the measurement errors in between A and B.
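To see the difference between the two approaches, here's a minimal simulation sketch (the error size and time span are purely hypothetical, chosen only for illustration): cumulating noisy yearly flow measurements bakes every year's error into the final stock estimate, while re-measuring the stock directly each year leaves the error of the multi-year change dependent on only the two endpoint measurements.

```python
import random

random.seed(0)

YEARS = 8           # say, 2007-2014
ERROR_SD = 100_000  # hypothetical measurement error (standard deviation)

def cumulated_flow_error():
    """Infer the stock by adding up noisy yearly flow measurements:
    every year's error gets baked into the final estimate."""
    return sum(random.gauss(0, ERROR_SD) for _ in range(YEARS))

def endpoint_level_error():
    """Re-measure the stock directly each year: the error in the
    eight-year *change* involves only the two endpoint errors."""
    return random.gauss(0, ERROR_SD) - random.gauss(0, ERROR_SD)

n = 20_000
sd = lambda xs: (sum(x * x for x in xs) / len(xs)) ** 0.5
cum = sd([cumulated_flow_error() for _ in range(n)])
end = sd([endpoint_level_error() for _ in range(n)])
print(f"SD of cumulated-flow error:  {cum:,.0f}")   # grows like sqrt(YEARS)
print(f"SD of endpoint-level error:  {end:,.0f}")   # independent of YEARS
```

The cumulated-flow error grows like the square root of the number of years, while the endpoint-level error stays flat no matter how long the window is.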

(Random note: Blogger's spell-checker doesn't recognize "cumulate". What sort of fallen world do we live in?)

OK, anyway. I don't know how Lyman produced the graph you see above, since he doesn't include his methodology. It sort of looks like he just added up Pew's standard errors on the yearly population estimates for each pair of years, and then added maximum potential rounding error to each year. But I am an honorable man, and Lyman is an honorable man, and I would never accuse him of making such an undergrad-level math mistake. 

In any case, let's talk about how you calculate the error bars of a difference. 

So, let A be the total number of unauthorized immigrants in 2007, and B be the number in 2014. What we're interested in is the quantity B - A. We have unbiased estimates of B and A, and some random measurement errors e_B and e_A:

Bhat = B + e_B

Ahat = A + e_A

Suppose we want the variance of the difference between our two estimates: Var(Bhat - Ahat) = Var(e_B - e_A) = Var(e_B) + Var(e_A) - 2Cov(e_B, e_A)

So the more correlated our measurement errors are between 2014 and 2007, the smaller the error bars will be on the difference of the two estimates. This is a fancy way of saying that if we miscount by the same number of people each year, we get the change in the total number of people exactly right, even if the amount we miscount by is huge. 

I was going to try to write down an expression for serially correlated errors here, with an autocorrelation coefficient of f, so I could use Cov(fe,fe), but I was too lazy.

So the more serially correlated the errors in the ACS and CPS estimates (which are used to derive Pew's estimates) are, the smaller the error bars should be on the difference between the estimates for two years. And I do suspect there is some serial correlation there. Suppose there's some group of unauthorized immigrants that these surveys reliably miss every year. Even if this group is large - say, 1 or 2 million people - the fact that it isn't measured adds only a little bit to our uncertainty about the change in the total unauthorized population. (That little bit comes from the change in that unobserved subpopulation itself.)
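To put rough numbers on that variance formula, here's a small Monte Carlo sketch (the correlation values and the per-year standard error are purely illustrative, not Pew's actual figures):

```python
import random

random.seed(1)

def sim_diff_sd(rho, sd=300_000, n=50_000):
    """Simulated SD of (Bhat - Ahat) when the measurement errors
    e_A and e_B have correlation rho.

    e_B is built from a shared component plus an independent one,
    a standard way of generating correlated Gaussians."""
    diffs = []
    for _ in range(n):
        e_a = random.gauss(0, sd)
        e_b = rho * e_a + ((1 - rho**2) ** 0.5) * random.gauss(0, sd)
        diffs.append(e_b - e_a)
    return (sum(x * x for x in diffs) / n) ** 0.5

for rho in (0.0, 0.5, 0.9):
    print(f"error correlation {rho:.1f}: SD of estimated change "
          f"≈ {sim_diff_sd(rho):,.0f}")
```

The theoretical value is sd·sqrt(2(1 − rho)), so pushing the correlation from 0 to 0.9 cuts the error bar on the change by more than two-thirds - which is exactly the "miscount by the same number of people each year" intuition.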

So that's one potential problem with what Lyman is doing here. A second is that he discusses rounding errors. Pew's numbers are rounded to the nearest 100,000, meaning that they can be off by up to 50,000 in a given year. But those rounding errors obviously don't add up over time! When calculating the change in the unauthorized population over N years, you only have two rounding errors, not N rounding errors.
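A quick sanity check on that claim, using an invented population path (the numbers are made up; only the rounding logic matters):

```python
import random

random.seed(2)

def round_100k(x):
    """Round to the nearest 100,000, as Pew's published figures are."""
    return round(x / 100_000) * 100_000

# A hypothetical 8-year path of the true population (values invented).
path = [12_000_000]
for _ in range(7):
    path.append(path[-1] + random.randint(-400_000, 400_000))

true_change = path[-1] - path[0]
rounded_change = round_100k(path[-1]) - round_100k(path[0])

# Only the two endpoint roundings matter: the error is at most
# 50,000 + 50,000 = 100,000, no matter how many years sit in between.
print(f"true change:    {true_change:,}")
print(f"rounded change: {rounded_change:,}")
print(f"rounding error: {abs(rounded_change - true_change):,}")
```

However long the path, the rounding error on the total change never exceeds 100,000, because the intermediate years' roundings never enter the calculation.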

The third thing Lyman overlooks is that the intervening years between 2007 and 2014 actually do contain some information. They show remarkable stability:


If the measurement error of the yearly first differences were really on the order of 400,000 per year, as Lyman's graph shows, we'd expect to see the numbers jump around a lot more than they do. In fact, after 2008, we never see changes that big. This means Lyman may have made a mistake in how he calculates his error bars, but it also means that Pew may have overestimated its own error bars for the yearly population numbers. (Unless ACS and CPS are smoothing these numbers year to year in some way I am unaware of, which would be a bit naughty!)

Anyway, it's possible that measurement error concealed a moderate amount of (net) illegal immigration between 2009 and 2014. But given the likelihood that the ACS and CPS miss a lot of the same people each year, the number is unlikely to be big. And there's still basically no doubt that (net) illegal immigration was negative between 2007 and the present.


Outmigration to Heaven

As Lyman points out, there are multiple reasons the unauthorized population can decline. One is that people leave the country. Another is that people die. In my Bloomberg View post, I ignored mortality.

The reason I ignored it was that I didn't think of it (an excellent reason, if I do say so myself). But thinking about it later, I confirmed that it isn't that big of a deal, quantitatively. 

The crude death rate for unauthorized immigrants is about 3.9 per 1000, according to this random paper that I got by googling, i.e. The Most Reliable Source Ever. That's close to Lyman's own guess of about half the crude death rate of the U.S. as a whole. Using Pew's point estimates for the total unauthorized population each year, and again ignoring error bars, that's about 357,000 unauthorized immigrant deaths between 2007 and 2014, and about 264,000 between 2009 and 2014.

Let's compare this to the difference in Pew's totals for those years (i.e. what I called "illegal immigration"). The difference between 2007 and 2014 goes from -1.1 million to around -743,000 - still a very substantial decrease. The difference between 2009 and 2014 goes from -200,000 to around +64,000, turning a small decrease into a very small increase.
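Here's the back-of-the-envelope arithmetic, using my approximate reading of Pew's yearly point estimates (the yearly values, in millions, are rounded and should be treated as assumptions, not as Pew's exact figures):

```python
# Approximate Pew point estimates of the unauthorized population,
# in millions (my rough reading of their published figures).
pew = {2007: 12.2, 2008: 11.7, 2009: 11.3, 2010: 11.4,
       2011: 11.5, 2012: 11.2, 2013: 11.2, 2014: 11.1}

CRUDE_DEATH_RATE = 3.9 / 1000  # deaths per person per year, from the paper

def deaths(start, end):
    """Cumulative deaths: person-years times the crude death rate."""
    person_years = sum(pew[y] for y in range(start, end + 1)) * 1e6
    return person_years * CRUDE_DEATH_RATE

d_07_14 = deaths(2007, 2014)   # roughly 357,000
d_09_14 = deaths(2009, 2014)   # roughly 264,000

# Net migration = change in the stock + deaths, since the deceased
# left the stock without emigrating.
net_07_14 = (pew[2014] - pew[2007]) * 1e6 + d_07_14   # roughly -743,000
net_09_14 = (pew[2014] - pew[2009]) * 1e6 + d_09_14   # roughly +64,000

print(f"deaths 2007-2014:        {d_07_14:,.0f}")
print(f"deaths 2009-2014:        {d_09_14:,.0f}")
print(f"net migration 2007-2014: {net_07_14:,.0f}")
print(f"net migration 2009-2014: {net_09_14:,.0f}")
```

With these assumed inputs the arithmetic reproduces the figures in the text: a still-substantial net decrease over 2007-2014, and a tiny net increase over 2009-2014.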

I still feel justified in saying that (net) illegal immigration halted between 2009 and 2014. As Lyman writes:
Mortality, like adjusting for ACS population estimation errors, has only a small impact.
The impact on Lyman's and my productivity is more substantial.


Summing up

So, ladies and gents and zombie thralls of the USDA Advanced Weapons Program, besides a general agreement with my thesis and main point, what we have here are:

1. A vocabulary complaint

2. An insistence that I'm focusing on the wrong number, which may or may not also be a vocabulary complaint

3. The very real fact that I didn't mention error bars (Bad social science columnist! Bad!)

4. Some dubious and mysterious calculations of error bars

5. That time I almost made a Cov(fe,fe) joke

6. A real, useful point about mortality, which I forgot because I'm a critical theorist dum-dum, but which isn't hugely important in the quantitative sense


I don't feel that I come out of this one looking too bad. 

*turns around and sees horde of zombie USDA attack cows converging*

Gulp.