Tuesday, October 10, 2017

Defending Thaler from the guerrilla resistance

So, Richard Thaler won the Nobel Prize, which is pretty awesome. If you've read Thaler's memoir, you'll know that it was a long, hard, contentious fight for him to get his ideas accepted by the mainstream. And even though Thaler is now a Nobelist and has been the AEA president - i.e., he has completely convinced the commanding heights of the econ establishment that behavioral econ is a crucial addition to the canon - resistance still pops up with surprising frequency in certain corners of the econ world. It's a sort of ongoing guerrilla resistance.

An example is this blog post by Kevin Bryan of A Fine Theorem. Kevin is one of the best research-explainers in the econ blogosphere, and his Nobel explainer posts are uniformly excellent. This time, however, instead of explaining Thaler's research, Kevin decided to challenge it, in a rather dismissive manner. In fact, his criticisms are pretty classic anti-behavioral stuff - mostly the same arguments Thaler talks about in his memoir.

Anyway, let's go through some of these criticisms, and see why they don't really hit the mark.

1. The invisible hand-wave

First, a random weird thing. Kevin writes:
Much of my skepticism is similar to how Fama thinks about behavioral finance: “I’ve always said they are very good at describing how individual behavior departs from rationality. That branch of it has been incredibly useful. It’s the leap from there to what it implies about market pricing where the claims are not so well-documented in terms of empirical evidence.”
This is Fama, not Kevin, but it's a very odd quote. Behavioral finance has been very good at documenting asset price anomalies - in fact, this is almost all of what it's good at. This is what Shiller got the Nobel for in 2013, and it's what Thaler himself is most famous for within the finance field. Behavioral finance has struggled (though not entirely failed) to explain most of these anomalies in terms of psychology, especially in terms of insights drawn from experimental psychology. But in terms of empirical evidence, behavioral finance is pretty solid.

Anyway, that might be a sidetrack. Back to Kevin:
[S]urely most people are not that informed and not that rational much of the time, but repeated experience, market selection, and other aggregative factors mean that this irrationality may not matter much for the economy at large. 
This is a dismissal that Thaler refers to as "the invisible hand-wave". It's basically a claim that markets have emergent properties that make a bunch of not-quite-rational agents behave like a group of completely rational agents. The justifications typically given for this assumption - for example, the idea that irrational people will be competed out of the market - are vague and unsupported. In fact, it's not hard at all to write down a model where this doesn't happen - for example, the noise trader model of DeLong et al. But for some reason, some economists have very strong priors that nothing of this sort goes on in the real world, and that the emergent properties of markets approximate individual rationality.
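You can see why "the irrational get competed away" doesn't follow even in a toy setting. The sketch below is not the DeLong et al. model itself - just a minimal simulation under assumed parameters (a 2% safe rate, a risky asset with a 6% mean return and 20% volatility, made-up allocations). Overconfident traders who lever into the risky asset bear more risk, but they also collect the risk premium on a bigger position, so nothing guarantees they get driven out:

```python
import numpy as np

rng = np.random.default_rng(0)

T, N = 40, 2000                    # periods and simulated paths (assumed)
rf, mu, sigma = 0.02, 0.06, 0.20   # safe rate; risky mean and volatility (assumed)

risky = rng.normal(mu, sigma, size=(N, T))  # risky asset returns each period

def terminal_wealth(risky_share):
    """Grow $1 for T periods holding a fixed share in the risky asset."""
    port = rf + risky_share * (risky - rf)
    # cap losses so wealth stays positive (a stand-in for limited liability)
    return np.prod(1.0 + np.clip(port, -0.99, None), axis=1)

rational = terminal_wealth(0.6)    # a sensible diversified allocation
noise = terminal_wealth(1.2)       # an overconfident, levered allocation

frac_noise_richer = (noise > rational).mean()
print(f"overconfident traders end up richer on {frac_noise_richer:.0%} of paths")
```

None of this proves noise traders thrive in real markets; it just illustrates that "irrationality gets competed away" is a modeling assumption, not a theorem.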

2. Ethical concerns

Kevin, like many critics of Thalerian behavioral economics, raises ethical concerns about the practice of "nudging":
Let’s discuss ethics first. Simply arguing that organizations “must” make a choice (as Thaler and Sunstein do) is insufficient; we would not say a firm that defaults consumers into an autorenewal for a product they rarely renew when making an active choice is acting “neutrally”. Nudges can be used for “good” or “evil”. Worse, whether a nudge is good or evil depends on the planner’s evaluation of the agent’s “inner rational self”, as Infante and Sugden, among others, have noted many times. That is, claiming paternalism is “only a nudge” does not excuse the paternalist from the usual moral philosophic critiques!...Carroll et al have a very nice theoretical paper trying to untangle exactly what “better” means for behavioral agents, and exactly when the imprecision of nudges or defaults given our imperfect knowledge of individual’s heterogeneous preferences makes attempts at libertarian paternalism worse than laissez faire.
There are, indeed, very real problems with behavioral welfare economics. But the same is true of standard welfare economics. Should we treat utilities as cardinal, and sum them to get our welfare function, when analyzing a typical non-behavioral model? Should we sum the utilities nonlinearly? Should we consider only the worst-off individual in society, as John Rawls might have us do?

Those are nontrivial questions. And they apply to pretty much every economic policy question in existence. But for some reason, Kevin chooses to raise ethical concerns only for behavioral econ. Do we see Kevin worrying about whether efficient contracts will lead to inequality that's unacceptable from a welfare perspective? No. Kevin seems to be very very very worried about paternalism, and generally pretty cavalier about inequality.

Perhaps this reflects Kevin's libertarian values? I actually have no idea what Kevin believes in. But hopefully the Nobel committee tries to make its awards based on positive rather than normative considerations. After all, the physics Nobel often goes to scientists whose discoveries could be used to make weapons, right? I just don't see the need to automatically mix in ethics and values when assessing the importance of behavioral economics.

3. The invisible hand-wave, again

Kevin writes:
Thaler has very convincingly shown that behavioral biases can affect real world behavior, and that understanding those biases means two policies which are identical from the perspective of a homo economicus model can have very different effects. But many economic situations involve players doing things repeatedly with feedback – where heuristics approximated by rationality evolve – or involve players who “perform poorly” being selected out of the game. For example, I can think of many simple nudges to get you or I to play better basketball. But when it comes to Michael Jordan, the first order effects are surely how well he takes cares of his health, the teammates he has around him, and so on. I can think of many heuristics useful for understanding how simply physics will operate, but I don’t think I can find many that would improve Einstein’s understanding of how the world works.
This argument makes little sense to me. Most people aren't Michael Jordan or Einstein. And those people surely didn't compete all the other basketball players and physicists out of the market. Why does the existence of a few perfectly rational people mean that nudges don't matter in aggregate? Also, why should we assume that non-Michael-Jordans can quickly or completely learn heuristics that make nudges unnecessary? If that were true, why would players even have coaches?

It seems like another case of the invisible hand-wave.

(Also, when it's used as an object, it's "you and me", not "you and I". This grammar overcorrection is my one weakness. If you ever need to defeat me in battle, just use "X and I" as an object, and I'll fly into an insane rage and walk right into your perfectly executed jujitsu move.)

Kevin continues:
The 401k situation [that Thaler's most famous nudge policy deals with] is unusual because it is a decision with limited short-run feedback, taken by unsophisticated agents who will learn little even with experience. The natural alternative, of course, is to have agents outsource the difficult parts of the decision, to investment managers or the like. And these managers will make money by improving people’s earnings. No surprise that robo-advisors, index funds, and personal banking have all become more important as defined contribution plans have become more common! If we worry about behavioral biases, we ought worry especially about market imperfections that prevent the existence of designated agents who handle the difficult decisions for us.
Assuming that a market for third-party advice will take care of behavioral problems seems like both a big leap and a mistake. First, there's the assumption that someone with nontrivial behavioral biases will be completely rational in her choice of an adviser. Big assumption. Remember that people are typically paying financial advisers a fifth of their life's savings or more. Big price tag. How confident are we that someone who treats opt-in and opt-out pensions differently is going to get good value for that huge and opaque expenditure?

Also, suppose that financial advisers really do earn their keep, i.e. a fifth of your life's savings. If the market for financial advice is efficient, and financial advice is all about countering your own behavioral biases, that means that behavioral biases are so severe that their impact is worth a fifth of your lifetime wealth! If a cheap little nudge could make all of that vast expenditure unnecessary - i.e., if it could get you to do the thing that you'd otherwise pay a financial adviser 20% of your lifetime wealth to do for you - then the nudge seems like a huge efficiency-booster.
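The magnitudes here are easy to check with back-of-the-envelope arithmetic (the numbers below are assumptions for illustration: 6% gross annual returns and a 40-year horizon). Even a fee well under one percent per year compounds into roughly a fifth of terminal wealth:

```python
def fee_drag(gross=0.06, fee=0.01, years=40):
    """Fraction of terminal wealth lost to an annual fee, vs. a fee-free account."""
    with_fee = (1 + gross - fee) ** years
    without_fee = (1 + gross) ** years
    return 1 - with_fee / without_fee

print(f"1.0% annual fee over 40 years: {fee_drag(fee=0.010):.0%} of terminal wealth")
print(f"0.6% annual fee over 40 years: {fee_drag(fee=0.006):.0%} of terminal wealth")
```

Under these assumptions, a 1% annual fee costs roughly a third of terminal wealth, and a 0.6% fee costs roughly a fifth - which is where the "fifth of your life's savings" figure comes from.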

So this point of Kevin's also seems to miss the mark.

4. Endowment effects and money pumps

Kevin writes:
Consider Thaler’s famous endowment effect: how much you are willing to pay for, say, a coffee mug or a pen is much less than how much you would accept to have the coffee mug taken away from you. Indeed, it is not unusual in a study to find a ratio of three times or greater between the willingness to pay and willingness to accept amount. But, of course, if these were “preferences”, you could be money pumped (see Yaari, applying a theorem of de Finetti, on the mathematics of the pump). Say you value the mug at ten bucks when you own it and five bucks when you don’t. Do we really think I can regularly get you to pay twice as much by loaning you the mug for free for a month? Do we see car companies letting you take a month-long test drive of a $20,000 car then letting you keep the car only if you pay $40,000, with some consumers accepting? Surely not.
First of all, the endowment effect isn't a money pump if it only works once with each object. It's only a money pump if you can keep loaning and reselling something to someone. Otherwise, people's maximum potential losses from this bias are finite - some fraction of their lifetime consumption. Maybe not 300%, but something.

But anyway, Kevin says that we don't see car companies letting you take a month-long test drive. Hmm. I guess that is true...for cars.

5. External validity of lab effects

Everyone knows external validity of laboratory findings is a big problem for experimental economics (and psychology, and biology...). Also problematic is ecological validity - even if a lab effect consistently exists in the real world, it might not matter quantitatively compared to other stuff. External and ecological validity do present big challenges for behaviorists who want to take insights from the lab and use them to predict real-world outcomes.

But Kevin chooses some highly questionable examples to illustrate the problem. For example:
Even worse are the dictator games introduced in Thaler’s 1986 fairness paper. Students were asked, upon being given $20, whether they wanted to give an anonymous student half of their endowment or 10%. Many of the students gave half! This experiment has been repeated many, many times, with similar effects. Does this mean economists are naive to neglect the social preferences of humans? Of course not! People are endowed with money and gifts all the time. They essentially never give any of it to random strangers – I feel confident assuming you, the reader, have never been handed some bills on the sidewalk by an officeworker who just got a big bonus! Worse, the context of the experiment matters a ton (see John List on this point). Indeed, despite hundreds of lab experiments on dictator games, I feel far more confident predicting real world behavior following windfalls if we use a parsimonious homo economicus model than if we use the results of dictator games.
Does Kevin seriously think that any behaviorist believes that dictator games imply that people walk around giving away half of any gifts they receive? That makes no sense at all. In the dictator game, there's one other person - in the real world, there are effectively infinite other people. What would it even mean for a person on the street to behave analogously to a person in a dictator game? The situations aren't equivalent at all.

As John List says, context matters. Wage negotiations at a company are different from family gift exchanges, which are different from financial windfalls, which are different from randomly being handed money on the street. Norms in these situations are different. If someone gives you a gift, there's probably a norm of not re-gifting it. If someone hands you money in a dictator game, you probably don't treat it as a personal gift. Etc.

To me, this is clearly not a reason to assume that norms and values only matter in the lab, and that real-world people always behave perfectly selfishly. Quite the contrary. It's a reason to pay more attention to norms and values, not less. Why does Bill Gates give away so much of his money? Why do people give money to some beggars and buskers but not to others? Do these behaviors bear any similarity to how people behave when asking for (or handing out) raises in the workplace? Do they bear any similarity to the way people haggle over the price of a car or a house?

These are not trivial questions to be waved away, simply because if you hand someone cash on the street they don't instantly hand half of it to the first person they see.

Kevin follows this up with what seems like another bad example:
To take one final example, consider Thaler’s famous model of “mental accounting”. In many experiments, he shows people have “budgets” set aside for various tasks. I have my “gas budget” and adjust my driving when gas prices change. I only sell stocks when I am up overall on that stock since I want my “mental account” of that particular transaction to be positive. But how important is this in the aggregate? Take the Engel curve. Budget shares devoted to food fall with income. This is widely established historically and in the cross section. Where is the mental account? Farber (2008 AER) even challenges the canonical account of taxi drivers working just enough hours to make their targeted income. As in the dictator game and the endowment effect, there is a gap between what is real, psychologically, and what is consequential enough to be first-order in our economic understanding of the world.
Kevin's argument appears to be that if mental accounting only matters in some domains, it doesn't matter overall. That makes no sense to me. If mental accounting is important for investing and driving, but not for food purchases or taxi jobs, does that mean it's not important "in the aggregate"? Of course not! Gas is a substantial monthly expense. The compounded rate of return on your stock portfolio can make a huge difference to your lifetime consumption. Even if mental accounting mattered only for these two things, it would matter in the aggregate.

So, Kevin's attacks on Thaler's research paradigm pretty much uniformly miss the mark. Because of this, I half suspect that Kevin - usually the most careful and incisive of bloggers - is playing devil's advocate here, taking cheap shots at behaviorism simply because it's fun. This guerrilla resistance is more like paintball.

Wednesday, September 27, 2017

Handwaving on health care

There's a particular style of argument that some conservative economists use to dismiss calls for government intervention in markets:

Step 1: Either assert or assume that free markets work best in general.

Step 2: List the reasons why this particular market might be unusual.

Step 3: Dismiss each reason with a combination of skeptical harrumphing, handwaving, anecdotes, and/or informal evidence.

Step 4: Conclude that this market should be free from government intervention.

In a recent rebuttal to a Greg Mankiw column on health care policy, John Cochrane displays this argumentation style in near-perfect form. It is a master class in harrumphing conservative prior-stating, delivered in the ancient traditional style. Young grasshoppers, take note.

Mankiw's article was basically a rundown of reasons that health care shouldn't be considered like a normal market. He covers externalities, adverse selection, incomplete information, unusually high idiosyncratic risk, and behavioral factors (overconsumption).

Cochrane makes a guess at the motivation of Mankiw's column:
I suspect I know what happened. It sounded like a good column idea, "I'll just run down the econ 101 list of potential problems with health care and insurance and do my job as an economic educator."
That sounds about right. In fact, that actually was the reason for my similar column in Bloomberg a few months ago. Frankly, I think bringing readers up to speed on Arrow's classic piece on health care is a pretty good idea for a column. Mankiw generally did a better job than I did, although he didn't mention norms, which I think are ultimately the most important piece of the puzzle (more on that later).

Anyway, Cochrane wrote a pretty unfair and over-the-top response to that Bloomberg post of mine, which also made a rather unintelligent pun using my first name (there's an extra syllable in there, dude!). His response to Mankiw has more meat to it and less dudgeon, but is still rather acerbic. Cochrane writes:
I am surprised that Greg, usually a good free marketer, would stoop to the noblesse oblige, the cute little peasants are too dumb to know what's good for them argument... 
[I]s this a case of two year old with hammer?... 
I suspect I know what happened. It sounded like a good column idea, "I'll just run down the econ 101 list of potential problems with health care and insurance and do my job as an economic educator." If so, Greg failed his job of public intellectual... 
The last section of After the ACA goes through all these arguments and more, and is better written. I hope blog regulars will forgive the self-promotion, but if Greg hasn't read it, perhaps some of you haven't read it either.
Grumpy indeed!

So, Cochrane's post consists of him hand-waving away the notion that externalities, high idiosyncratic risk, and adverse selection might matter enough in health care markets to justify large-scale government intervention. 

To summarize Cochrane's points about externalities:

  • Health externalities affect only a small subset of the things that Obamacare deals with.
  • Lots of other markets have externalities. 

To summarize Cochrane's point about high idiosyncratic risk:

  • That's what insurance markets are for, duh!

To summarize Cochrane's points about adverse selection:

  • Doctors know more about your health than you do.
  • Adverse selection assumes rational patients, while behavioral effects assume irrational patients.
  • The government forces insurers not to charge people different prices based on their health status.
  • Other insurance markets, like car insurance, function without breaking down due to adverse selection.
  • Services to mitigate adverse selection exist in other insurance markets.
  • Most health expenses are predictable, and thus not subject to adverse selection. 

So, to rebut these, I could go through each point one by one and do counter-hand-waving. For example:

  • The idea that doctors know more about your health than you do assumes that you've already bought health care and are already receiving examinations. Prior to buying, you know your health better. 
  • People can be irrational in some ways (or in some situations) and rational in others, obviously.
  • The fact that the government forces insurers to pay the same price is part of the policy that's intended to mitigate adverse selection, and therefore can't be used as proof that adverse selection doesn't exist in the absence of government intervention.
  • Markets might have different amounts of adverse selection. For example, insurers might be able to tell that I'm a bad driver, but not that I just found a potentially cancerous lump in my testicle.
  • Adverse selection mitigation services are socially costly, and Carmax for health care might work much worse than Carmax for cars.
...and so on.

But who would be right? It really comes down to your priors. Priors about how irrational people are. Priors about how much asymmetric information exists and how much it matters in various markets. Priors about how costly and feasible Carmax for health care would be. Priors about how reputational effects work in health care markets. Priors about how efficient government is at fixing market failures. And so on. Priors, priors, priors.

Reiteration of priors can get tiresome.

Instead, here is a novel idea: We could look at the evidence. Instead of thinking a priori about how important we think adverse selection is in health care markets, we could think "Hey, some smart and careful economist or ten has probably done serious, careful empirical work on this topic!" And then we could fire up Google Scholar and look for papers, or perhaps go ask a friend who works in applied microeconomics or the economics of health care. 

In his health care article, "After the ACA", Cochrane cites a wide variety of sources, including New Yorker and Wall Street Journal and New York Times and Washington Post articles, a JAMA article and an NEJM article, some law articles, a number of blog posts, a JEP article and a JEL article, some conservative think tank reports, Akerlof's "Market for Lemons" article, the comments section of his own blog, and a YouTube video entitled "If Air Travel Worked Like Health Care". (This last one is particularly funny, given that Cochrane excoriated me for claiming that he compared the health insurance industry to the food industry. As if he would ever imply such a thing!)

As silly as a couple of these sources may be, overall this is a fine list - it's good to cite and to have read a breadth of sources, especially on an issue as complex and multifaceted as health care. I certainly cannot claim to have read anywhere near as deeply on the subject.

But as far as I can see, Cochrane does not engage with the empirical literature on adverse selection in health insurance markets. He may have read it, but he does not cite it or engage with it in this blog post, or in his "After the ACA" piece, or anywhere I can find.

This is a shame, because when he bothers to read the literature, Cochrane is quite formidable. When he engaged with Robert Shiller's evidence on excess volatility in financial markets, and when he engaged with New Keynesian theory, Cochrane taught us new and interesting things about both of these issues. In both of these cases, Cochrane approached the issue from a perspective of free-market orthodoxy, and advanced the free-market (or efficient-market) case like a lawyer. But in both cases, he did so in a brilliant way that respected his opponents' arguments and evidence, and ultimately yielded new insight. 

But in the case of adverse selection in health insurance, Cochrane does not engage with the literature. And although I haven't read much of that literature, I know it exists, because I've read this 2000 literature review by David Cutler and Richard Zeckhauser. Starting on page 606, Cutler and Zeckhauser first present the basic theory of adverse selection, and then proceed to discuss a large number of studies that use a large and diverse array of techniques to measure the presence of adverse selection in health insurance. They write:
A substantial literature has examined adverse selection in insurance markets. Table 9 summarizes this literature, breaking selection into three categories: traditional insurance versus managed care; overall levels of insurance coverage; and high versus low option coverage.  
Most empirical work on adverse selection involves data from employers who allow choices of different health insurance plans of varying generosity; a minority of studies look at the Medicare market, where choices are also given. Within these contexts, adverse selection can be quantified in a variety of fashions. Some authors report the difference in premiums or claims generated by adverse selection after controlling for other relevant factors [for example, Price and Mays (1985). Brown at al. (1993)]. Other papers examine the likelihood of enrollment in a generous plan conditional on expected health status [for example, Cutler and Reber (1998)]. A third group measure the predominance of known risk factors among enrollees of more generous health plans compared to those in less generous plans [for example, Ellis (1989)].  
Regardless of the exact measurement strategy, however, the data nearly uniformly suggest that adverse selection is quantitatively large. Adverse selection is present in the choice between fee-for-service and managed care plans (8 out of 12 studies, with 2 findings of favorable selection and 3 studies ambiguous), in the choice between being insured and being uninsured (3 out of 4 studies, with 1 ambiguous finding), and in the choice between high-option and low-option plans within a given type (14 out of 14 studies). 
They proceed to list the studies in a table, along with brief summaries of the methods and the results.

Have I read any of these studies? In fact, I have read only one of them - a 1998 study of some government and university employees, also by Cutler and Zeckhauser. They document a market breakdown - the disappearance of high-coverage health plans. And they present evidence that this breakdown was due to the so-called "adverse selection death spiral", in which healthy people leave high-coverage plans until the plans can no longer be offered. And they show that a similar thing was starting to happen to the Group Insurance Commission of Massachusetts, before major reforms were made to the system that prevented the death spiral.
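The mechanics of that death spiral are easy to see in a stylized model. The sketch below is hypothetical, not calibrated to the Cutler-Zeckhauser data: expected costs are uniform, each person's willingness to pay for coverage is their expected cost plus a flat risk premium, and the insurer must charge everyone the average cost of whoever remains enrolled. Premiums chase average cost upward as the healthy drop out, until only the sickest few are left:

```python
import numpy as np

# Stylized adverse-selection unraveling (hypothetical parameters, not calibrated).
# Each person knows their own expected annual cost; willingness to pay for full
# coverage is that cost plus a flat $500 risk premium. The insurer must charge
# everyone the same premium: the average cost of whoever remains enrolled.
costs = np.linspace(0, 10_000, 10_000)   # expected costs, uniform on [$0, $10k]
wtp = costs + 500                        # willingness to pay for coverage

enrolled = np.ones_like(costs, dtype=bool)
for _ in range(100):
    premium = costs[enrolled].mean()     # break-even premium for current pool
    stay = wtp >= premium                # the healthiest enrollees drop out
    if (stay == enrolled).all():         # stable pool: spiral has played out
        break
    enrolled = stay

print(f"final premium: ${premium:,.0f}, enrollment: {enrolled.mean():.0%}")
```

Under these assumptions the pool unravels from full enrollment down to roughly the sickest tenth of the population, with the premium nearly doubling along the way - the classic Akerlofian breakdown, in miniature.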

So there is some evidence that adverse selection not only exists and creates costs in (at least some!) health insurance markets, but is so severe that it can cause market breakdown of the classic Akerlofian type. 

If I were setting out to dismiss the possibility of this sort of major adverse selection, I would read a number of these papers, or at least skim their results. I would also look for more recent work on the subject. 

I would also read the literature on adverse selection in other insurance markets, to see whether there's a noticeable difference between types of insurance. I'd read this Chiappori and Salanie paper on auto insurance, for example (which I had to study in grad school), which finds no evidence of adverse selection in car insurance. That would make me think "Hmm, maybe car insurance and health insurance are two very different markets."

I am not setting out to dismiss adverse selection, however. Nor am I setting out to claim that it's a big enough problem that it requires major government regulation of the health insurance market. Nor am I claiming that Obamacare passes a cost-benefit test as a remedy for adverse selection. In fact, I don't even think that adverse selection is the main reason we regulate health care! I think it's kind of a sideshow - an annoyance that we have to deal with, but not the central issue. I think the central issue of health care regulation is just a social norm - the widespread belief that everyone ought to have health care, and that the cost of health care ought to depend only on your ability to pay. Those norms, I believe, are why people embrace universal health care, and why they are now coming to embrace the radical solution of single-payer health care.

But that's just me. Cochrane thinks adverse selection is the big issue, so he goes after it, but without standing on the shoulders of the giants who have investigated the matter before. Instead, he waves the problem away. Unlike me, who am but a lowly journalist, Cochrane is a celebrated professional economist. He has done much better in the past, and he could do better now if he chose.

Saturday, September 23, 2017

Speech on campus: A reply to Brad DeLong

On Twitter, I wrote that I disagreed with Brad's ideas about speech on college campuses. Brad then requested that I write my ideas up in the form of a DeLong Smackdown. So here we go.

Brad's post was written in a particular context - the recent battles over right-wing speakers at Berkeley. More generally, the alt-right has been trying to provoke conflict at Berkeley, seeing an opportunity to gain nationwide sympathy. The murder of Heather Heyer by Nazis, and general white supremacist street violence, have turned the national mood against the alt-right. The alt-righters see (correctly) that the only way to recover rough parity is the "both sides" defense - in other words, to get people so worried about left-wing street violence that they equivocate between left and right. To this end, they are trying to stir up the most obvious source of potential leftist street violence: Berkeley. Brad, who works at Berkeley, is far closer to the action, and knows far more about the details on the ground than I do. For example, he noticed this flyer:

I am naturally coming at this from an outside perspective. Thus, my discussion will be more general than Brad's. That will naturally involve some degree of us talking past each other - I'll be thinking of things like campus speech codes, aggressive protests against lefty professors by even more lefty students, etc. So my arguments will not directly contradict Brad's.

But I believe there is something to be gained from this more general perspective. Brad, in writing his response to the New York Times' questions about free speech, seems to be starting with the particular example of alt-right "free speech" trolling, and generalizing from there. But generalizing from concrete examples can be dangerous, since the set of available examples is quite diverse. There is simply much more going on on college campuses in this country than the antics of the alt-right provocateurs at Berkeley. 

Anyway, on to Brad's post. Brad writes that universities should restrict speech whose intent and/or effect is to harm the discussion of useful ideas and/or drive people away from the university:
A university has three goals:
  • A university is a safe space where ideas can be set forth and developed.
  • A university is a safe space where ideas can be evaluated and assessed.
  • A university is a safe space where young scholars can develop, and gain intelligence and confidence.
Speech whose primary goal is to undermine and defeat one or more of those three goals does not belong on a university campus. 
If you come to Berkeley, and if your speech is primarily intended to—or even, through your failure to think through what you are doing, has the primary effect of (1) keeping us from developing ideas that may be great ones, (2) keeping us from properly evaluating and assessing ideas, or (3) driving members of the university away, your speech does not belong here.
At first glance, this seems reasonable. We all know that some people use speech as a weapon to shut down discussion or to hurt people - the aforementioned alt-right provocateurs are the paradigmatic example of this. More generally, we have all seen in the past three or four years how one very specific small group of people - the alt-right - has poisoned Twitter to the degree where it is less and less useful and fun for the vast majority of users, and has attempted to do the same to Facebook, Reddit, and YouTube.

There are very good reasons not to let a tiny group of bad people piss in the pool of free speech.

But the danger is that safeguards put in place to exclude this small minority of pool-pissers will wind up - to extend the metaphor - over-chlorinating the pool. The perfect example of this is the War on Terror. One guy tries (unsuccessfully) to hide a bomb in his shoe, and the next day we're stuck going through the bullshit security theater of shoe removal for all eternity. The danger of administrative overreaction should never be ignored.

One thing I notice about Brad's criteria for which kinds of campus speech should be administratively banned is that they are incredibly vague. For example, take the question of which ideas "may be great ones". What does "great" mean? The notion of what constitutes "proper" evaluation of ideas is also extremely vague. What does "proper" mean? 

In practice, these criteria are impossible to implement effectively without the personal judgment of a small and relatively self-consistent group of judges. For government speech restrictions, we rely on the judgment of federal and state judges and the Supreme Court to tell us what constitutes Constitutionally protected speech. Law is necessarily a subjective exercise, but it is a systematic subjectivity. 

But at the university level, the judges can be literally anyone on campus - administrators, faculty, and students. At different universities there will be different sets of judges, with different opinions. Unlike the world of U.S. law, where precedents are systematically logged and there is a huge well-trained legal profession dedicated to harmonizing standards and ideas and judgment across locations and situations, universities are a slapdash, haphazard patchwork of ad-hoc decision-making bodies. 

Thus, the standards Brad sets out are, in terms of actual content regarding the speech that is to be prohibited, effectively vacuous. 

But they are not vacuous statements overall - they connote a general endorsement of tighter speech restrictions than currently exist at most universities (or at least, at Berkeley). And that amounts to a directive to America's (or Berkeley's) entire vast, uncoordinated, untrained patchwork of campus stakeholders to go out and make a greater attempt to limit speech that they think is counterproductive. 

The effect of this advice, I predict, if widely heeded, will mainly be chaos. Given the lack of communication, coordination, shared values, training, and clearly recorded precedent among the various arms of the ad-hoc campus speech police, students and faculty at campuses across America will have little idea of what constitutes acceptable speech. In some situations, pro-Palestinian speech might be grounds for firing; in others, pro-Israel speech. At some universities we will see queer, mixed-race leftist professors berated to tears for wearing T-shirts saying "Poetry is lit". At other universities we will see faculty instructed not to ask students where they are from. 

In addition to the chaos of opinion regarding what constitutes counterproductive speech, there is the chaos of enforcement. The U.S. legal system has clear rules for how the law gets enforced - even though many police break those rules, it is much better to have the rules, and to identify who constitutes the police, than to rely on an ad-hoc patchwork of posses, lynch mobs, and other self-organized local militias to enforce the law.

On campus, the enforcement of local opinions about what constitutes counterproductive speech has become hopelessly patchwork. In some cases, professors and/or students are fired or disciplined by administrators. In other cases, faculty discipline students who say inappropriate things in class. In yet other cases, student protesters act as ad hoc militias, sometimes with the blessing of administrators and/or faculty, to enforce speech norms against professors. It's a jungle out there already. And calling for more speech restriction will only increase the demand for enforcement, making the jungle yet more chaotic. 

The chaos on U.S. campuses has been likened jokingly to China's Cultural Revolution. The comparison is obviously a joke - the Cultural Revolution mobilized millions of people, killed millions, and persecuted tens of millions, while the U.S. campus chaos has so far amounted to the firing of a few unlucky but (probably) financially secure academics and some (mostly) peaceful protests by a few thousand (mostly) spoiled upper-middle-class kids. 

But there is one clear parallel: the chaos. The Cultural Revolution was begun by Mao, who called for a general uprising to purge capitalist and reactionary elements from Chinese society. But because Mao by then had been stripped of most official power, and because Chinese institutions were weak, there was no real state apparatus to systematically prosecute Mao's goal. Instead, the task fell to self-organized militias across the country. Each militia had its own idea of what constituted true communism, and of what was required to achieve it. As a result, the militias did a bunch of crazy stuff, and even fought each other, and ultimately did nothing except to prolong China's century of suffering for another dozen years or so. (Eventually, the army cracked down and restored order.)

The lesson here is that forceful calls for vague revolutions have predictably chaotic consequences. Broadcasting the idea that there is lots of problematic speech on campuses that needs to be forcibly expunged, but offering neither a useful criterion for identifying such speech nor a useful method of punishing it, is a recipe for silliness, wasted effort, and general stress. That would be undesirable at any time, but at a time when genuinely bad and threatening things are happening at the level of national politics, it seems like even more of an unneeded distraction. 

So what about the particular case of the alt-right and their Berkeley-poking? It seems clear to me that university administrators should stop provocateurs like Milo Yiannopoulos from giving speeches intended mainly to provoke violent reactions. But this sort of provocation seems genuinely rare. Usually, when right-wing people give campus speeches, it's because they really believe in right-wing ideas. As much as Brad or I might disagree with those ideas, it seems counterproductive to ban them. 

In fact, it seems counterproductive to ban right-wing ideas from campus even if right-wing ideas are totally and completely wrong! The reason is that kids need something to argue and fight against. Like grouchy econ bloggers, college kids shape and refine their ideas through argumentation. Without John Cochrane tossing out terrible ideas about fiscal stimulus and health care, Brad and I would waste our mental effort in byzantine disputes with other left-leaning econobloggers over stuff like Verdoorn's Law. Similarly, without right-wing stuff to argue against, lefty college kids will turn their contentious intellectual passions against their left-wing professors. Given that right-wing ideas are still powerful outside of the university, I would rather not see America's premier source of left-wing energy and intelligence and ideas spend its fury devouring itself from within. 

So I believe that the proper approach to campus speech is a relatively hands-off one - to treat on-campus speech approximately like we treat off-campus speech. There will be some differences, of course - college kids live on campus, so there will need to be stronger protections against physical intimidation and threat. But in general, I believe that there should be no substantial increase in limitations of speech on American campuses. 

Attempted DeLong Smackdown complete.

Thursday, September 21, 2017

What we didn't get

I recently wrote a fairly well-received Twitter thread about how the cyberpunk sci-fi of the 1980s and early 1990s accurately predicted a lot about our current world. Our modern society is totally wired and connected, but also totally unequal - "the future is here, it's just not evenly distributed", as Gibson was fond of saying. Hackers, cyberwarfare, and online psyops are a regular part of our political and economic life. Billionaires build spaceships and collaborate with the government to spy on the populace, while working-class people live out of shipping crates and drink poison water. Hobbyists are into body modifications and genetic engineering, while labs are researching artificial body parts and brain-computer interfaces. The jetpack is real, but there's only one of it, and it's owned by a rich guy. Artificial intelligences trade stocks and can beat humans at Go, deaf people can hear, libertarians and criminals funnel billions of dollars around the world with untraceable private crypto-money. A meme virus almost as crazy as the one in Snow Crash swept an insane man to the presidency of the United States, and in Texas you can carry a sword on the street like a street samurai in Neuromancer. There are even artificial pop stars and murderous cyborg super-athletes.

We are, roughly, living in the world the cyberpunks envisioned.

This isn't the first time a generation of science fiction writers has managed to envision the future with disturbing accuracy. The early industrial age saw sci-fi writers predict many inventions that would eventually become reality, from air and space travel to submarines, tanks, television, helicopters, videoconferencing, X-rays, radar, robots, and even the atom bomb. There were quite a few misses, as well - no one is going back in time or journeying to the center of the Earth. But overall, early industrial sci-fi writers got the later Industrial Revolution pretty right. And their social predictions were pretty accurate, too - they anticipated consumer societies and high-tech large-scale warfare.

But there have also been eras of sci-fi that mostly got it wrong. Most famously, the mid-20th century was full of visions of starships, interplanetary exploration and colonization, android servitors and flying cars, planet-busting laser cannons, energy too cheap to meter. So far we don't have any of that. As Peter Thiel - one of our modern cyberpunk arch-villains - so memorably put it, "We wanted flying cars, instead we got 140 characters."

What happened? Why did mid-20th-century sci-fi whiff so badly? Why didn't we get the Star Trek future, or the Jetsons future, or the Asimov future?

Two things happened. First, we ran out of theoretical physics. Second, we ran out of energy.

If you watch Star Trek or Star Wars, or read any of the innumerable space operas of the mid-20th century, they all depend on a bunch of fancy physics. Faster-than-light travel, artificial gravity, force fields of various kinds. In 1960, that sort of prediction might have made sense. Humanity had just experienced one of the most amazing sequences of physics advancements ever. In the space of a few short decades, humankind discovered relativity and quantum mechanics, invented the nuclear bomb and nuclear power, and created the x-ray, the laser, superconductors, radar and the space program. The early 20th century was really a physics bonanza, driven in large part by advances in fundamental theory. And in the 1950s and 1960s, those advances still seemed to be going strong, with the development of quantum field theories.

Then it all came to a halt. After the Standard Model was completed in the 1970s, there were no big breakthroughs in fundamental physics. There was a brief period of excitement in the 80s and 90s, when it seemed like string theory was going to unify quantum mechanics and gravity, and propel us into a new era to match the time of Einstein and Bohr and Dirac. But by the 2000s, people were writing pop books about how string theory had failed. Meanwhile, the largest, most expensive particle collider ever built has merely confirmed the theories of the 1970s, leaving little direction for where to go next. Physicists have certainly invented some more cool stuff (quantum teleportation! quantum computers!), but there have been no theoretical breakthroughs that would allow us to cruise from star to star or harness the force of gravity.

The second thing that happened was that we stopped getting better sources of energy. Here is a brief, roughly chronological list of energy sources harnessed by humankind, with their specific energies (usable potential energy per unit mass) listed in units of MJ/kg. Remember that more specific energy (or, alternatively, more energy density) means more energy that you can carry around in your pocket, your car, or your spaceship.

Protein: 16.8

Sugars: 17.0

Fat: 37

Wood: 16.2

Gunpowder: 3.0

Coal: 24.0 - 35.0

TNT: 4.6

Diesel: 48

Kerosene: 42.8

Gasoline: 46.4

Methane: 55.5

Uranium: 80,620,000

Deuterium: 87,900,000

Lithium-ion battery: 0.36 - 0.875

This doesn't tell the whole story, of course, since availability and recoverability are key - to get the energy of protein, you have to kill a deer and eat it, or grow some soybeans, while deposits of coal, gas, and uranium can be dug up out of the ground. Transportability is also important (natural gas is hard to carry around in a car).

But this sequence does show one basic fact: In the industrial age, we got better at carrying energy around with us. And then, at the dawn of the nuclear age, it looked like we were about to get MUCH better at carrying energy around with us. One kilogram of uranium has almost two million times as much energy in it as a kilogram of gasoline. If you could carry that around in a pocket battery, you really might be able to blow up buildings with a handheld laser gun. If you could put that in a spaceship, you might be able to zip to other planets in a couple of days. If you could put that in a car, you can bet that car would fly. You could probably even use it to make a deflector shield.
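A quick back-of-the-envelope check of that "almost two million times" figure, using the specific energies from the list above (a sketch in Python; the numbers are the post's, not independent measurements):

```python
# Specific energies in MJ/kg, taken from the list above.
specific_energy = {
    "gasoline": 46.4,
    "uranium": 80_620_000,
    "li_ion_battery": 0.875,  # upper end of the quoted range
}

# How many kilograms of gasoline match one kilogram of uranium?
uranium_vs_gasoline = specific_energy["uranium"] / specific_energy["gasoline"]
print(f"{uranium_vs_gasoline:,.0f}")  # 1,737,500 - i.e. roughly 1.7 million

# And the gap between uranium and our best portable batteries is even starker.
uranium_vs_battery = specific_energy["uranium"] / specific_energy["li_ion_battery"]
print(f"{uranium_vs_battery:,.0f}")
```

So "almost two million times" checks out: the ratio is about 1.7 million.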

But you can't carry uranium around in your pocket or your car, because it's too dangerous. First of all, if there were enough uranium to go critical, you'd have a nuclear weapon in your garage. Second, uranium is a horrible deadly poison that can wreak havoc on the environment. No one is going to let you have that. (Incidentally, this is also probably why you don't have a flying car yet - it has too much energy. The people who decide whether to allow flying cars realize that some people would choose to crash those high-energy objects into buildings. Regular cars are dangerous enough!)

Now, you can put uranium on your submarine. And you can put it in your spaceship, though actually channeling the power into propulsion is still a problem that needs some work. But overall, the toxicity of uranium, and the ease with which fission turns into a meltdown, has prevented widespread application of nuclear power. That also holds to some degree for nuclear electricity.

As for fusion power, we never managed to invent that, except for bombs.

So the reason we didn't get the 1960s sci-fi future was twofold. A large part of it was apparently impossible (FTL travel, artificial gravity). And a lot of the stuff that was possible, but relied on very high energy density fuels, was too unsafe for general use. We might still get our androids, and someday in the very far future we might have nuclear-powered spaceships whisking us to Mars or Europa or zero-G habitats somewhere. But you can't have your flying car or your pocket laser cannon, because frankly, you're probably just too much of a jerk to use them responsibly.

So that brings us to another question: What about the most recent era of science fiction? Starting in the mid to late 1990s, until maybe around 2010, sci-fi once again embraced some very far-out future stuff. Typical elements (some of which, to be fair, had been occasionally included in the earlier cyberpunk canon) included:

1. Strong (self-improving) AI, artificial general intelligence, and artificial consciousness

2. Personality upload

3. Self-replicating nanotech and general assemblers

4. A technological Singularity

These haven't happened yet, but it's only been a couple of decades since this sort of futurism became popular. Will we eventually get these things?

Unlike faster-than-light travel and artificial gravity, we have no theory telling us that we can't have strong AI or a Singularity or personality upload. (Well, some people have conjectures as to reasons we couldn't, but these aren't solidly proven theories like General Relativity.) But we also don't really have any idea how to start making these things. What we call AI isn't yet a general intelligence, and we have no idea if any general intelligence can be self-improving (or would want to be!). Personality upload requires an understanding of the brain we just don't have. We're inching closer to true nanotech, but it still seems far off.

So there's a possibility that the starry-eyed Singularitan sci-fi of the 00s will simply never come to pass. Like the future of starships and phasers, it might become a sort of pop retrofuture - fodder for fun Hollywood movies, but no longer the kind of thing anyone thinks will really happen. Meanwhile, technological progress might move on in another direction - biotech? - and another savvy generation of Jules Vernes and William Gibsons might emerge to predict where that goes.

Which raises a final question: Is sci-fi least accurate when technological progress is fastest?

Think about it: The biggest sci-fi miss of all time came at the peak of progress, right around World War 2. If the Singularitan sci-fi boom turns out to have also been a whiff, it'll line up pretty nicely with the productivity acceleration of the 1990s and 00s. Maybe when a certain kind of technology - energy-intensive transportation and weapons technology, or processing-intensive computing technology - is increasing spectacularly quickly, sci-fi authors get caught up in the rush of that trend, and project it out to infinity and beyond. But maybe it's the authors at the very beginning of a tech boom, before progress in a particular area really kicks into high gear, who are able to see more clearly where the boom will take us. (Of course, demonstrating that empirically would involve controlling for the obvious survivorship bias).

We'll never know. Nor is this important in any way that I can tell, except for sci-fi fans. But it's certainly fun to think about.

The margin of stupid

Every so often, I see a news story or tweet hyping the fact that a modest but non-negligible percent of Americans said some crazy or horrible thing in a survey. Here are two examples:

The most chilling findings, however, involved how students think repugnant speech should be dealt with...It gets even worse. Respondents were also asked if it would be acceptable for a student group to use violence to prevent that same controversial speaker from talking. Here, 19 percent said yes. 

Update: It turns out this particular survey was a badly designed piece of crap. I'm not particularly surprised...

Racial slurs that have cropped up in chants, e-mails and white boards on America's college campuses have some people worried about whether the nation's diverse and fawned-over millennial generation is not as racially tolerant as might be expected. 


So, from these two examples -- both of them in the Washington Post -- I'm supposed to believe that Millennials are a bunch of unreconstructed racists, except for the ones who go to college, who are a pack of intolerant leftists. 

It seems to me like there's something inherently suspicious about judging a group of people based on sentiments expressed by only 15 or 20 percent of those people. But beyond that, there's another problem here - the problem of whether we can really trust these surveys. 

Surveys give a false sense of precision, by reporting a "margin of error" (confidence interval). But that confidence interval comes purely from the fact that the sample is finite. It does not capture systematic error, like selection bias (Are the people who answer this survey representative of the population being sampled?). And it definitely doesn't capture the errors people themselves make when responding to surveys.
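To see how narrow that reported precision really is, here is the standard sampling-noise calculation for a proportion (a sketch; the 1,500-respondent sample size is my assumption for illustration, not a figure from either survey):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an estimated proportion p from n respondents.

    This is pure sampling noise - it says nothing about selection bias
    or careless responding.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A 19% result from a hypothetical 1,500 respondents carries only about
# a +/- 2-point margin, which is why these headlines sound so precise.
print(round(margin_of_error(0.19, 1500), 3))
```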

When I did happiness survey research with Miles Kimball, there was always the nagging question of whether people are really able to know how happy they are. Of course, the whole question of what "happiness" should mean is a difficult one, but presumably there are some neurochemical tests you could do to determine how good someone feels, at least relative to how they felt in the past. How well do survey responses reflect this "true" emotion? Do people in different countries have cultural pressures that make them respond differently? Do Americans feel the need to say they're happy all the time, while British people would be ashamed to admit happiness? And are people measuring their happiness relative to yesterday, or to their youth, or to how happy they think they ought to be?

These errors were things that we lumped into something we called "response style" (psychologists call it response bias). It's very very hard to observe response style. But I'd say we can make a pretty good guess that Americans - and possibly everyone - do a lot of random responding when it comes to these sorts of surveys.

For example, a 2014 survey reported that 26 percent of Americans said that the sun goes around the Earth. 

Now, maybe there are a bunch of pre-Copernican geocentrists out there in America (there certainly are the flat-earthers!). Or maybe people just don't think very hard about how they answer these questions. Maybe some people are confused by the questions. Maybe some are trolling. 

Whatever the cause, it seems like you can get 20 to 25 percent of Americans to say any ridiculous thing imaginable. "Do you think eating raccoon poop reduces the risk of brain cancer?" "23 percent of Americans say yes!" "Would you be willing to cut your toes off with a rotary saw if it meant your neighbor had to do the same?" "17 percent of Americans say they would!" Etc.

You can also see this just from looking at some of the crosstabs in the first survey above. 20 percent of Democrats and 22 percent of Republicans say it's OK to use violence to shut down speakers you don't like. This sounds kind of nuts, given the panic on the right over lefty violence against campus speakers. Why would Republicans be even more likely than Democrats to condone this sort of violence? It makes no sense at all...unless you can get ~20 percent of Americans to say pretty much any ridiculous thing on a survey. 

I call this the margin of stupid. Unlike the margin of error, it's not even a roughly symmetric error -- because you can't have less than 0% of people give a certain answer on a survey, the margin of stupid always biases surveys toward showing some non-negligible amount of support for any crazy or stupid or horrible position. 
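A minimal simulation makes the asymmetry concrete (a sketch, not data from either survey; the 20 percent noise rate is an assumption for illustration):

```python
import random

def survey(true_support, noise_rate, n=100_000, seed=0):
    """Simulate a yes/no survey where some fraction of respondents answer at random."""
    rng = random.Random(seed)
    yes = 0
    for _ in range(n):
        if rng.random() < noise_rate:
            yes += rng.random() < 0.5  # careless responder: coin flip
        else:
            yes += rng.random() < true_support  # sincere responder
    return yes / n

# Even if literally nobody sincerely holds the crazy position, 20 percent
# random responding produces ~10 percent measured "support" - and since
# measured support can't go below zero, the bias only pushes upward.
print(survey(true_support=0.0, noise_rate=0.2))
```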

Whenever you read a survey like this, you must take the margin of stupid into account. Yes, there are Americans who believe crazy, stupid, and horrible things. But dammit, there aren't that many. Next time you see some poll breathlessly claiming that 21 percent of Americans support executing anyone whose name starts with "G", or that 18 percent of Millennials believe themselves to be the reincarnation of Kublai Khan, take it with a grain of salt. It's a lot easier to give a stupid answer on a survey than to actually hold a nutty belief.

Sadly, the margin of stupid also probably applies to voting.

Sunday, September 10, 2017

a16z podcast on trade

I recently had the pleasure of appearing on the a16z podcast (a16z stands for Andreessen Horowitz, the venture capital firm). The topic was free trade, and the other guest was Russ Roberts of EconTalk.

Russ is known for making the orthodox case for free trade, and I've expressed some skepticism and reservations, so it seemed to me that my role in this podcast was to be the trade skeptic. So I thought of three reasons why pure, simple free trade might not be the optimal approach.

Reason 1: Cheap labor as a substitute for automation

Getting companies and inventors to innovate is really, really hard. Basically, no one ever captures the full monetary benefit of their innovations, so society relies on a series of kludges and awkward second-best solutions to incentivize innovative activity.

One of the ideas that has always fascinated me is the notion that cheap labor reduces the incentive for labor-saving innovation. This is the Robert Allen theory of the Industrial Revolution - high wages and cheap capital forced British businesspeople to start using machines, which then opened up a bonanza of innovation. It also pops up in a few econ models from time to time.

I've written about this idea in the context of minimum wage policy, but you can also apply it to trade. In the 00s, U.S. manufacturing employment suddenly fell off a cliff, but after about 2003 or so manufacturing productivity growth slowed down (despite the fact that you might expect it to accelerate as less productive workers were laid off first). That might mean that the huge dump of cheap Chinese labor onto the world market caused rich-world businesses to slack off on automation.

That could be an argument for limiting the pace at which rich countries open up trade with poor ones. Of course, even if true, this would be a pretty roundabout way of getting innovation, and totally ignores the well-being of the people in the poor country.

Also, this argument is more about the past than the future. China's unit labor costs have risen to the point where the global cheap labor boom is effectively over (since no other country or region is emerging to take China's place as a high-productivity cheap manufacturing base).

Reason 2: Adjustment friction

This is the trade-skeptic case that everyone is waking up to now, thanks to Autor, Dorn and Hanson. The economy seems to have trouble adjusting to really big rapid trade shocks, and lots of workers can end up permanently hurt.

Again, though, this is an argument about the past, not the future. The China Shock is over and done, and probably won't be replicated within our lifetime. So this consideration shouldn't affect our trade policy much going forward.

Reason 3: Exports and productivity

This is another productivity-based argument. It's essentially the Dani Rodrik argument for industrial policy for developing countries, adapted to rich countries. There is some evidence that when companies start exporting, their productivity goes up, implying that the well-known correlation between exports and productivity isn't just a selection effect.

So basically, there's a case to be made that export promotion - which represents a deviation from classic free trade - nudges companies to enter international markets where they then have to compete harder than before, incentivizing them to raise their productivity levels over time. That could mean innovating more, or it could just mean boosting operational efficiency to meet international standards.

This is the only real argument against free trade that's about the future rather than the past. If export promotion is a good idea, then it's still a good idea even though the China Shock is over. I would like to see more efforts by the U.S. to nudge domestically focused companies to compete in world markets. It might not work, but it's worth a try.

Anyway, that's my side of the story. Russ obviously had a lot to say as well. So if you feel like listening to our mellifluous voices for 38 minutes, head on over to the a16z website and listen to the podcast! And thanks to Sonal Chokshi for interviewing us and doing the editing.

Friday, September 08, 2017

Realism in macroeconomic modeling

Via Tyler Cowen, I see that Ljungqvist and Sargent have a new paper synthesizing much of the work that's been done in labor search-and-matching theory over the past decade or so.

This is pretty cool (and not just because these guys are still doing important research at an advanced age). Basically, Ljungqvist and Sargent are trying to solve the Shimer Puzzle - the fact that in classic labor search models of the business cycle, productivity shocks aren't big enough to generate the kind of employment fluctuations we see in actual business cycles. A number of theorists have proposed resolutions to this puzzle - i.e., ways to get realistic-sized productivity shocks to generate realistic-sized unemployment cycles. Ljungqvist and Sargent look at these and realize that they're basically all doing the same thing - reducing the value of a job match to the employer, so that small productivity shocks are more easily able to stop the matches from happening:
The next time you see unemployment respond sensitively to small changes in productivity in a model that contains a matching function, we hope that you will look for forces that suppress the fundamental surplus, i.e., deductions from productivity before the ‘invisible hand’ can allocate resources to vacancy creation. 
The fundamental surplus fraction is the single intermediate channel through which economic forces generating a high elasticity of market tightness with respect to productivity must operate...The role of the fundamental surplus in generating that response sensitivity transcends diverse matching models... 
For any model with a matching function, to arrive at the fundamental surplus take the output of a job, then deduct the sum of the value of leisure, the annuitized values of layoff costs and training costs and a worker’s ability to exploit a firm’s cost of delay under alternating-offer wage bargaining, and any other items that must be set aside. The fundamental surplus is an upper bound on what the “invisible hand” could allocate to vacancy creation. If that fundamental surplus constitutes a small fraction of a job’s output, it means that a given change in productivity translates into a much larger percentage change in the fundamental surplus. Because such large movements in the amount of resources that could potentially be used for vacancy creation cannot be offset by the invisible hand, significant variations in market tightness ensue, causing large movements in unemployment.
That's a useful thing to know.
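A toy numeric version of that amplification logic (my notation and numbers, not the paper's): if the fundamental surplus is output minus a fixed pile of deductions, then the smaller the surplus is as a fraction of output, the more a given productivity movement is magnified in percentage terms.

```python
# s = y - z, where y is a job's output and z is the sum of deductions
# (value of leisure, layoff costs, training costs, etc.), held fixed here.
y, z = 1.00, 0.95          # surplus is only 5% of output
dy = 0.01 * y              # a 1% productivity shock

s0 = y - z
s1 = (y + dy) - z
print(round((s1 - s0) / s0, 4))  # a 1% shock to y is a 20% swing in the surplus
```

That factor of y/(y - z) is the amplification channel Ljungqvist and Sargent are describing: a thin fundamental surplus turns small productivity shocks into big swings in the resources available for vacancy creation.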

Of course, I suspect that recessions are mostly not caused by productivity shocks, and that these business cycle models will ultimately be improved by instead considering shocks to the various things that get subtracted from productivity in the "fundamental surplus". That should affect unemployment in much the same way as productivity shocks, but will probably have advantages in explaining other business cycle facts like prices. Insisting that the shock that drives unemployment be a productivity shock seems like a tic - a holdover from a previous age. But that's just my intuition - hopefully some macroeconomist will do that exercise.

But anyway, I think the whole field of labor search-and-matching models is interesting, because it shows how macroeconomists are gradually edging away from the Pool Player Analogy. Milton Friedman's Pool Player Analogy, if you'll recall, is the idea that a model doesn't have to have realistic elements in order to be a good model. Or more precisely, a good macro model doesn't have to fit micro data, only macro data. I personally think this is silly, because it ends up throwing away most of the available data that could be used to choose between models. Also, it seems unlikely that non-realistic models could generate realistic results.

Labor search-and-matching models still have plenty of unrealistic elements, but they're fundamentally a step in the direction of realism. For one thing, they were made by economists imagining the actual process of workers looking for jobs and companies looking for employees. That's a kind of realism. Even more importantly, they were based on real micro data about the job search process - help-wanted ads in newspapers or on websites, for example. In Milton Friedman's analogy, that's like looking at how the pool player actually moves his arm, instead of imagining how he should move his arm in order to sink the ball.

It's good to see macroeconomists moving away from this counterproductive philosophy of science. Figuring out how things actually work is a much more promising route than making up an imaginary way for them to work and hoping the macro data is too fuzzy to reject your overall results. Of course, people and companies might not search and bargain in the ways that macroeconomists have so far assumed they do. But because labor search modelers tend to take micro data seriously, bad assumptions will probably eventually be identified, questioned, and corrected.

This is good. Chalk labor search theory up as a win for realism. Now let's see macroeconomists make some realistic models of business investment!


For some reason, a few people read this post as claiming that labor search theory is something new. It's not! I was learning this stuff in macro class back in 2008, and people have been thinking about the idea since the 70s. In fact, if anything, there seems to be a mild dampening of enthusiasm for labor search models recently, though this is hard to gauge. One exception is that labor search models have been incorporated into New Keynesian theory, which seems like a good development.

Sadly, though, I haven't seen any similar theory trend dealing with business investment. This post was supposed to be a plug for that.

Thursday, September 07, 2017

An American Whitopia would be a dystopia

In a recent essay about the racial politics of the Trump movement, Ta-Nehisi Coates concluded with a warning:
It has long been an axiom among certain black writers and thinkers that while whiteness endangers the bodies of black people in the immediate sense, the larger threat is to white people themselves, the shared country, and even the whole world. There is an impulse to blanch at this sort of grandiosity. When W. E. B. Du Bois claims that slavery was “singularly disastrous for modern civilization” or James Baldwin claims that whites “have brought humanity to the edge of oblivion: because they think they are white,” the instinct is to cry exaggeration. But there really is no other way to read the presidency of Donald Trump.
Yes, at first glance, the notion that Trumpian white racial nationalism is a threat to the whole world, or the downfall of civilization, etc. seems a bit of an exaggeration. Barring global thermonuclear war, Trump and his successors aren't going to bring down human civilization - the U.S. is powerful and important, but it isn't nearly that powerful or important.

But there's an important truth here. An America defined by white racial nationalism - an American Whitopia - would be an economic and cultural disaster movie. It would be a dysfunctional, crappy civilization, sinking into the fetid morass of its own decay. Some people think that an American Whitopia would be bad for people of color but ultimately good for whites, but this is dead wrong. Although nonwhite Americans would certainly suffer greatly, white American suffering under the dystopia of a Trumpist society would be dire and unending. 

Here is a glimpse of that dark future, and an explanation of why it would fail so badly.

Don't think Japan. Think Ukraine.

First, a simple observation: Racial homogeneity is no guarantee of wealth. Don't believe me? Just look at a night photo of North Korea and South Korea:

The red arrow and white outline point to North Korea. It's completely pitch dark at night because it's poor as hell. People starve there. But it's every bit as ethnically pure and homogeneous as its neighbor South Korea - in fact, it's the same race of people. North Korea, in fact, puts a ton of cultural emphasis on racial homogeneity. But that doesn't save their society from being a dysfunctional hellhole.

OK, so North and South Korea are an experiment. They prove that institutions matter - that a homogeneous society can either be rich and happy or poor and hellish, depending on how well it's run.

It's not just East Asia we're talking about, either. It's incredibly easy to find deeply dysfunctional white homogeneous countries. Ukraine, for instance. Ukraine's per capita GDP is around $8,300 at purchasing power parity. That's less than 1/6 of America's. It's also a deeply dysfunctional society, with lots of drug use and suicide and all of that stuff, and has been so since long before the Donbass War started. 

It's worth noting that Ukraine also has an economy largely based on heavy industry and agriculture - just the kind of economy Trump wants to go back to. So being a homogeneous all-white country with plenty of heavy industry and lots of rich farmland hasn't saved Ukraine from being a dysfunctional, decaying civilization. 

Alt-righters explicitly call for America to be a white racial nation-state. Some cite Japan as an example of a successful ethnostate. Japan is great, there's no denying it. But I know Japan, and let me assure you, an American Whitopia would not be able to be Japan. It definitely wouldn't be Sweden or Denmark or Finland. It couldn't even be Hungary or Czechia or Poland. It would probably end up more like Ukraine. 

Here's why.

Where are your smart people?

Modern economies have always depended on smart people, but the modern American economy depends on them even more than others and even more than in the past. The shift of industrial production chains to China has made America more dependent on knowledge-based industries - software, pharmaceuticals, advanced manufacturing, research and design, business services, etc. Even the energy industry is a high-tech, knowledge-based industry these days. Take away those industries, and America will be left trying to compete with China in steel part manufacturing. How's that working out for Ukraine?

If you want to understand how important knowledge-based industries are, just read Enrico Moretti's book, "The New Geography of Jobs". Cities and towns with lots of human capital - read, smart folks - are flourishing, while old-line manufacturing towns are decaying and dying. Trump has sold people a fantasy that his own blustering bullshit can reverse that trend, but if you really believe that, I've got a bridge to sell you.

So here's the thing: Smart Americans have no desire to live in a Whitopia. First, let's just look at smart white people. Among white Americans with a postgraduate degree, Clinton beat Trump in 2016 by a 13-point margin, even though Trump won whites overall by a 22 point margin. Overall, education was the strongest predictor of which white people voted for Trump and which went for Clinton. Also note that close to 2/3 of the U.S.' GDP is produced in counties that voted for Clinton. 

Richard Florida has been following smart Americans around for a long time, and he has repeatedly noted how they like to live in diverse places. Turn America into an ethnostate, and the smart white people will bolt for Canada, Australia, Japan, or wherever else isn't a racist hellhole.

Now look beyond white people. A huge amount of the talent that sustains America's key industries comes from Asia. An increasing amount also comes from Africa and the Middle East, though Asia is still key. Our best science students are mostly immigrants. Our grad students are mostly immigrants. Our best tech entrepreneurs are about half immigrants (https://blogs.wsj.com/digits/2016/03/17/study-immigrants-founded-51-of-u-s-billion-dollar-startups/). You make America into Whitopia, and those people are gone gone gone.

I'm not saying every single smart American would leave an American white ethnostate. But most would, and many of those who remain wouldn't be happy. 

There's a clear precedent for this: Nazi Germany. Hitler's persecution of Jews drove Jewish scientists out. But it also prompted an exodus of scientists who weren't Jewish themselves but who didn't like seeing their Jewish colleagues, friends, and spouses persecuted - Erwin Schroedinger and Enrico Fermi, for example. This was a bonanza of talent for America, and it starved Nazi Germany of critical expertise in World War 2. Guess who built the atom bomb? 

How you get there matters

There are just about 197 million non-Hispanic white people in the United States. But the total population of the country is 323 million. That means that around 126 million Americans are nonwhite. Among young Americans, nonwhites make up an even larger percentage. 

To turn America into a white racial nation-state - into Whitopia - would require some combination of four things:

1. Genocide

2. Ethnic cleansing (expulsion of nonwhites)

3. Denial of legal rights to nonwhites

4. Partition of the country

To see how these would go, look to historical examples. 

Genocide is usually done against a group that's a small minority, like Armenians or Jews. Larger-scale genocides are occasionally attempted - for example, Hitler's plan to wipe out the bulk of the Slavs, or the general mass murder of 25% of the population in Pol Pot's Cambodia. These latter attempts at mega-genocide killed a lot of people (Hitler slaughtered 25 million Slavs or so), but eventually they failed, with disastrous consequences for both the people who engineered them and the countries that acquiesced to the policies.

Denial of legal rights to minorities also has a poor record of effectiveness. The slavery and Jim Crow regimes in the U.S. and the apartheid regime in South Africa all ended up collapsing under the weight of moral condemnation, economic inefficiency, and war. 

Ethnic cleansing and partition have somewhat less disastrous records - see India/Pakistan, or Israel/Palestine, or maybe the Iraqi Civil War that largely separated Sunni and Shia. But "less disastrous" doesn't mean "fine". Yes, India and Pakistan and Israel survived intact. But those bloody campaigns of separation and expulsion left scars that still haven't healed. The cost of Israeli partition was an endless conflict and a garrison state. The cost of Indian partition was a series of wars and an ongoing nuclear standoff, not to mention terrorism in both India and Pakistan. 

In America, a partition would lead to a long bloody war. Remember, 39% of whites voted for Hillary Clinton. And the 29% of Asians and Hispanics who voted for Trump are unlikely to express similar support for a policy that boots them out of their country or town. Furthermore, nonwhite Americans are not confined to a single region that could be spun off into a new country, but concentrated in cities all over the nation. Thus, any partition would involve a rearrangement of population on a scale unprecedented in modern history. That rearrangement would inevitably be violent - a civil war on a titanic scale. 

That war would leave lots of bitterness and social division in its wake. It would leave bad institutions in place for many decades. It would elevate the worst people in the country - the people willing to do the dirty deeds of ethnic cleansing. In an earlier post about homogeneity vs. diversity, I wrote about how a white ethnostate created by an exodus of whites from America or Europe would probably be populated by the most fractious, violent, division-prone subset of white people. A white ethnostate created by a titanic civil war and mass ethnic cleansing would be run by an even worse subset.

This is why a partition or ethnic cleansing of America would lead to lower social trust, bad institutions, a violent society, and a kakistocracy. In other words, a recipe for a country that looks more like Ukraine (or even North Korea) than it does like Japan. 

It's already happening

This isn't just theoretical, and it isn't just based on historical analogies either. There are already the first signs of dysfunction and dystopia in the new America that Trump, Bannon, Sessions, Miller, and others are working to create. 

First of all, the places that voted for Trump are not doing so well economically or socially. Not only do Trump counties represent only about a third of the nation's GDP, but they also tend to be suffering disproportionately from the opiate epidemic. States that shifted most strongly toward Trump from 2012 to 2016, like Ohio, tend to be Rust Belt states with low levels of education, low immigration, and low percentages of Asians and Hispanics. Imagine all the things that make Ohio slightly worse off than Texas or California or New York or Illinois, then multiply those things by 1000 - and take away all the good economic stuff in Ohio, like the diverse urban revival in Columbus - to see what a Trumpian Whitopia would look like. 

Second, Trump is already creating a kakistocracy. His administration, of course, is scandal-ridden and corrupt. His allies are the likes of Joe Arpaio, who is reported to have tortured undocumented immigrants. His regime has emboldened murderous Nazi types to march in the street, and his condemnation of those Nazis has been rather equivocal. 

That episode caused business leaders - some of the smartest, most capable Americans - to abandon the Trump administration. If even business leaders - who are mostly rich white men - abandon an administration with even a whiff of white nationalism, imagine who would be in charge in a Whitopia. It would not be the Tim Cooks and Larry Pages and Elon Musks of the world. It would be far less competent people. 

So already we're seeing the first few glimmerings of a dystopian Whitopia. We're still a long way off, of course - things could get a million times worse. But the Trump movement gives us a glimpse of what that path would look like, and it ain't pretty. 

Whitopia: a self-inflicted disaster of epic proportions

Refashioning America as a white ethnostate would be a self-inflicted catastrophe of epic, unprecedented proportions. It would drive America from the top rank of nations to the middle ranks. It would involve lots of pain and death and violence for everyone, but the white Americans stuck in Whitopia would suffer the longest. Nonwhite Americans would move away and become refugees, or die in the civil wars. But the ones who survived would escape the madness and begin new lives elsewhere, in saner, more functional countries. 

Meanwhile, white Americans and their descendants would be trapped in the decaying corpse of a once-great civilization. A manufacturing-based economy making stuff no one else wanted to buy, bereft of the knowledge industries and vibrant diverse cities that had made it rich. A violent society suffering long-lasting PTSD from a terrible time of war and atrocity. A divided society, with simmering resentment underneath the surface, like Spain under Franco. A corrupt, thuggish leadership, with institutions that keep corrupt, thuggish leaders in power. 

This is what it would take to turn America from a diverse, polyracial nation into a white ethnostate. That is the price that white Americans, and their children, and their children's children would pay. 

It's not worth it.