Tuesday, February 28, 2017

Can't we all just get along? Econometrics edition

Some academic fights I understand, like the argument over whether to use sticky prices in DSGE models. Others I have trouble comprehending. One of these is the fight between champions of structural and quasi-experimental econometrics. Angrist and Pischke, the champions of the quasi-experimental approach, waste few opportunities to diss structural work, and the structural folks often fire back. What I don't get is: Why not just do both?

Each approach has its own inherent strengths and weaknesses. Francis Diebold explains these in a nerdy way in a recent blog post. I tried to explain these in a non-nerdy Bloomberg View post a year ago.

The strength of the structural approach, relative to the quasi-experimental approach, is that you can make much bigger, bolder predictions. With the quasi-experimental approach, you typically have a linear model, and you estimate the slope of that line around a single point in the space of observables. As we all remember from high school calculus, we can always do that as long as the function is differentiable.

But as you get farther from that point, extrapolation of the curve becomes less accurate. The curve curves. And just knowing the slope of that tangent line at that one point won't tell you how quickly your linear approximation becomes useless as you move away from that point. 

So this means that quasi-experimental methods have limited utility, but we can't really know how limited. Suppose we found out that the minimum wage has a very small effect on jobs when you go from $4.25 to $5.05. How much does that tell us about how bad a $7.50 minimum wage would be? Or a $12.75 minimum wage? In fact, if all we have is a quasi-experimental study, we don't actually know how much it tells us.

Quasi-experimental results come with basically no guide to their own external validity. You have to be Bayesian in order to apply them outside of the exact situation that they studied. You have to say "Well, if going from $4.25 to $5.05 wasn't that bad, I doubt going to $6.15 would be that much worse!" That's a prior.
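The extrapolation problem is easy to see with a toy calculation. Here's a minimal sketch, where the "true" wage-to-job-loss curve is entirely made up for illustration (it's not from any real study), and a quasi-experimental study only recovers the local slope from the $4.25-to-$5.05 change:

```python
# Hypothetical "true" relationship: job losses rise nonlinearly with the
# minimum wage. Purely illustrative -- not an estimate from any real study.
def true_job_loss(wage):
    return 0.001 * (wage - 4.0) ** 3

# A quasi-experimental study estimates the local slope from one policy
# change: the move from $4.25 to $5.05.
w0, w1 = 4.25, 5.05
local_slope = (true_job_loss(w1) - true_job_loss(w0)) / (w1 - w0)

# Linear extrapolation from that single local estimate...
def linear_prediction(wage):
    return true_job_loss(w0) + local_slope * (wage - w0)

# ...is nearly right nearby, but degrades as the curve curves.
for wage in [5.05, 7.50, 12.75]:
    err = abs(linear_prediction(wage) - true_job_loss(wage))
    print(f"${wage:.2f}: extrapolation error = {err:.4f}")
```

The error is zero at the studied point and grows fast farther out, and nothing in the local estimate itself tells you how fast. That's the external validity problem in miniature.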

If you want to believe that your model works far away from the data that you used to validate it, you need to believe in a structural model. That model could be linear or nonlinear, but "structural" basically means that you think it reflects factors that are invariant to conditions not explicitly included in the model. "Structural," in other words, means "the stuff that (you hope) is really going on."

The weakness of structural modeling is that good structural models are really, really rare. Most real-world situations in economics are pretty complicated - there are a lot of ins, a lot of outs, a lot of what-have-you. When you make a structural model you assume a lot of things away, and you assume that you've correctly specified the parts you leave in. This can often leave you with a totally bullshit fantasy model. 

So just test the structural model, and if the data reject it, don't use it, right? Hahahahahahaha. That would kill almost all the models in existence, and no models means no papers means no jobs for econometricians. Also, even if you're being totally serious and scientific and intellectually honest, it's not even clear how harsh you want to be when you test an econ model - this isn't physics, where things fit the data to arbitrary precision. How good should we even expect a "good" model to be? 

But that's a side track. What actually happens is that lots of people just assume they've got the right model, fit it as best they can, and report the parameter estimates as if those are real things. Or as Francis Diebold puts it:
A cynical but not-entirely-false view is that structural causal inference effectively assumes a causal mechanism, known up to a vector of parameters that can be estimated. Big assumption. And of course different structural modelers can make different assumptions and get different results.
So with quasi-experimental econometrics, you know one fact pretty solidly, but you don't know how reliable that fact is for making predictions. And with structural econometrics, you make big bold predictions by making often heroic theoretical assumptions. 

(The bestest bestest thing would be if you could use controlled lab experiments to find reliable laws that hold in more complex environments, and use those to construct reliable microfounded models. But that's like wishing for a dragon steed. Keep wishing.)

So why not do both things? Do quasi-experimental studies. Make structural models. Make sure the structural models agree with the findings of the quasi-experiments. Make policy predictions using both the complex structural models and the simple linearized models, and show how the predictions differ. 

What's wrong with this approach? Why should structural vs. quasi-experimental be an either-or? Why the academic food fight? If there's something that needs fighting in econ, it's the (now much rarer but still too common) practice of making predictions purely from theory without checking data at all.

Monday, February 27, 2017

Historical cycle theories are silly...or are they?

I have a soft spot for theories I thought of when I was 14. Back then I consumed a lot of epic fantasy books (and video games, and TV shows), in which an ancient evil is often just now returning after being banished (typically for a period of 1000, 5000, or 10000 years), and new heroes must arise to defeat it again, etc. etc. I reflected that my grandfathers had defeated cosmic evil, back in WW2, and that before that, my American forebears had defeated cosmic evil in the Civil War, so at some point we were due for another showdown with the ever-returning Forces of Darkness. I also figured that each generation after the war would be a little softer and more complacent than the last, and that this weakness would be one thing that encouraged the Forces of Darkness to make their comeback. And since it was about 75 years from the Civil War to WW2, I figured that each cycle lasted about four generations, and that it would be the generation after mine who would have to bear the brunt of the fight the next time.

It's fun to be 14. If you've never done it, I suggest you try it.

I recently found out that the authors Neil Howe and William Strauss already published a very detailed version of a very similar theory, back in 1991 (well before I turned 14!). I found this out via Steve Bannon, who according to news reports is a fan of their theory. Recently, Howe wrote a Washington Post op-ed explaining the theory. The basic idea is that there's a four-generation cycle. A "crisis" generation creates social unity and builds up national institutions, and each successive generation challenges and degrades those institutions, until four generations later the institutions collapse and there's another crisis. According to Howe, the Millennials are the ones who will have to renew our society this time.

That's a cool theory. But like all periodic theories of history, it's easily falsified.

Why? Because lots of crises are externally imposed. The Black Death's arrival in Europe had little to do with the strength of European institutions. The Japanese invasion of the Philippines was unrelated to the Philippines' position in any generational cycle. The Industrial Revolution and the Mongol Invasions blindsided every nation on the planet. And so forth. Exogenous shocks obviously happen, and they disrupt the timing of any generational cycle. So the nice smooth even periodicity that Howe and Strauss posit can't exist, even if there are forces tending in that direction.

Also, if you look at history, you see both some very long periods of crisis-free stability and some very long periods of continuous dramatic social upheaval. For example, China's "century of humiliation" involved about 110 years of almost continuous rebellion, civil war, invasion, mass killing, and political upheaval. France during the years from 1789 to 1945 experienced two empires, three republics, a large number of revolutions and counter-revolutions, many foreign invasions, and millions of violent deaths. 

On the flip side of the coin, Britain from the Glorious Revolution of 1688 to the start of World War 1 experienced over two centuries of stability with no real regime change or total war (the Napoleonic Wars being the closest thing, but ultimately not even requiring mass conscription). China during the Ming Dynasty and Japan during the Tokugawa shogunate were similarly stable.

If you hunted around and looked closely, you might be able to look at those long stable centuries and find some minor social disruptions loosely corresponding to the Strauss-Howe four-generation cycle. But think how many other such minor disruptions you'd be ignoring! (Were the 1960s a "crisis" for America? We had a bunch of assassinations, race riots, and a major war, after all.) Apophenia is a powerful temptation. But don't be fooled - by any objective measure you can find, history is aperiodic.

So formally, in the rigorous sense, Strauss-Howe theory is wrong. BUT, I still think it could be describing some important processes at work. Just because history is aperiodic doesn't mean it's random.

First, there's the idea of institutional decay, as put forth in Mancur Olson's The Rise and Decline of Nations. The idea here is that institutions developed to solve the problems of one era eventually become powerful incumbents who resist needed institutional changes later on down the road. If crises cause a "reset" of this cycle - the necessary fall of ineffective incumbent institutions, and their replacement with newer, more effective ones - the result could look a lot like a Strauss-Howe cycle. If the time it takes for institutions to go from effective to parasitical is a few decades, then it could even look periodic for countries that experience few external shocks (like the U.S., perhaps?). 

Second, there's the idea of a cycle of globalization. If free capital and labor flows tend to cause instability to build up in global economies - through excessive leverage, economic financialization, difficulty absorbing large cohorts of immigrants, the creation of an unsustainable "reserve currency" regime, etc. - then there could be repeated periods of globalization and retrenchment. Obviously, since there has only really been a modern global economy for a century and a half or so, this sort of cycle can't be reliably observed or confirmed yet. And no one has suggested that the cycle lasts a fixed number of generations or decades. But there are plenty of parallels between 1890-1929 and 1980-2008. And there are also parallels between the Great Depression and the Great Recession. And you could be forgiven for believing there are parallels between the politics of the 1930s and the politics of today.

So I wouldn't totally toss out the idea of a predictable social crisis. Whether it comes from generational attitude changes, institutional decay, or the instability of globalization, it's certainly possible that eras of stability tend to lead to crises eventually.

Saturday, February 25, 2017

Why human capital is capital

Economists tend to use the word "capital" pretty loosely. It just means "anything you can spend resources to build, which lasts a long time, and which also can be used to produce value." That's really broad. For example, it could include society itself. It also typically includes "human capital," which refers to people's skills, talents, and knowledge.

Why do most economists define "capital" this way? Really, it's just a convenient way to make the kind of models they like to make. I tried to explain this in a wonky post a couple of years ago.

But there are people out there who really don't like this broad definition of "capital". For example, the economist Branko Milanovic has repeatedly argued against use of the term. So has Matt Bruenig. And Paul Krugman agrees with them. They would rather restrict the word to mean what economists typically call "physical capital" - machines, buildings, and the like.

Who's right? In general, I don't like to boss anyone around with regards to vocab choices. Use words the way you want to use them, and just let people know what you mean. I would personally have preferred a different term, like "skill capital". But I think the term "human capital" is useful because it helps to convey some important truths about the world. Here are some facts about the world that I think the term "human capital" helps remind us of:

1. It's worse to be uneducated, unskilled and poor than to be educated, skilled and "poor".

Imagine that you're 22, educated, and poor. You have a bunch of student loan debt, but no money in the bank. You have book learning and credentials, but no immediately employable skills, very little on your resume, and not much of a network. You're sleeping in your friend's spare room and buying the cheapest food you can find.

Congratulations, you're me! Was I poor? By many measures, yes. But I certainly didn't feel poor. I knew that my Stanford degree and general intellectual skill (I could do math well and write well) would eventually let me get a good-paying job. In fact, I had no qualms whatsoever about my economic future. Zero fear.

But imagine yourself in the same situation with no degree, and without that general writing and math skill. Imagine yourself as a 22-year-old with the same debt and the same empty bank account. What are your future employment prospects? Construction worker? Landscaper? Day laborer?

The second way is a much worse way to be, right? Both 22-year-olds, me and my hypothetical uneducated counterpart, have the same official amounts of wealth. But despite the fact that I was scrounging for cheap food and sleeping in a friend's spare room, I didn't really feel like a poor person, and with good reason - I knew my future wasn't a future of poverty.

The word "human capital" gets at this distinction. It's a way of saying "education and skills are a form of wealth." If you ignore this wealth, you end up treating penniless grad students the same as honest-to-goodness poor people.

2. Some people make more money than others from the same amount of labor.

Opponents of the term "human capital" tend to say that human capital is really just labor. They like to define "capital" as anything that gives you passive income - in other words, anything that gives you money without work.

But consider me vs. an uneducated grocery store worker the same age as me. I don't work any harder than a grocery store worker - less hard, if the truth be told. I get up, read books, read papers, read Twitter and the news. I write some articles. The grocery store worker is moving groceries from the stock room to the shelves and back, checking inventory, working the cash register, answering people's questions, etc. Who is putting in more labor input, more effort and strain? Probably the grocery store worker.

But, as I'm slightly ashamed to admit, I make more money.

I view that difference as a form of passive income. Without spending any more effort than my grocery store worker counterpart, I get more money. This is just as passive as owning stocks, bonds, or real estate. My education and my skills (and my human networks, and my knowledge of the labor market) are a form of wealth that delivers me income for no extra effort.

It makes sense to me to have a word for this other sort of passive income-earning power. And "human capital" refers to exactly that.

3. Government spending on education represents investment for the future.

Government pays lots of money to educate the populace. We have universal public school. We have state-supported universities and colleges. Families themselves pay a lot on top of that, for university tuition, room and board, tutors, etc.

Is that spending a form of consumption? Is it just fancy day-care and subsidized partying? Some cynics would say yes, but I think the answer is very clearly no. A lot of that spending represents an investment in the future. The spending today will pay off tomorrow, in the form of a more productive populace.

When you spend money today and get back more than you spent, I say you're building wealth. And "capital" is really the same thing as "wealth" - it's the ownership of something that can deliver you income (passively!) in the future. Education creates no physical stores of value - no trucks or buildings or machine tools. But skills and knowledge are durable - you remember how to program a computer, or how to think like a lawyer.

When investment creates durable stores of value that produce income in the future, it makes sense to me to call it "capital". In my opinion, one big problem in the United States is that government doesn't invest enough. I think that depicting education spending as an investment in productive capital is helpful for making the case that government should spend more on education.
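The "spend today, get back more tomorrow" logic can be made concrete with a toy present-value calculation. All the numbers here are hypothetical (the cost, wage premium, working life, and discount rate are assumptions for illustration, not estimates), but the exercise shows what it means to treat a degree like any other capital investment:

```python
# Toy net-present-value sketch: is an education "asset" worth its cost?
# Every number below is a made-up assumption for illustration only.
cost = 100_000          # up-front education spending
wage_premium = 15_000   # extra annual earnings from the degree, assumed constant
years = 40              # working life over which the premium is earned
r = 0.05                # annual discount rate

# Present value of the future earnings stream (ordinary annuity formula)
pv = wage_premium * (1 - (1 + r) ** -years) / r
print(f"PV of earnings premium: ${pv:,.0f}  vs. cost: ${cost:,.0f}")

# If pv > cost, the education has positive net present value --
# exactly the test you'd apply to a profitable machine or building.
```

With these particular assumptions the present value of the premium comes out well above the up-front cost, which is the sense in which the spending builds wealth rather than just funding consumption.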

So there are three reasons why it often makes sense to think of skills and education as a form of capital. What about the objections? One objection people give is that income from human capital isn't passive - but, as I explained in point 2, it really is passive, since it allows you to get more income without any more effort - or the same amount of income for less effort.

A second objection is that people's education and skills can't be bought and sold, because we don't have indentured servitude. That's basically true (though there are some gray areas, like long-term contracts, noncompete agreements, or wage garnishment for student loans). But that's just a law. We could easily pass a law saying that office buildings can never ever be bought or sold, but must be owned forever by the people or companies that built them in the first place. Under this law, they could only be rented out, and only using month-to-month leases.

Would that law make office buildings any less a form of "capital"? I say no. By the same token, laws against indentured servitude change how human capital gets used in the economy, but they don't really change what it is. I strongly support laws against indentured servitude. But they don't change anything about the physical nature of education and skills. They don't change the fact that these are durable investments that produce passive income.

So by using the term "human capital", we remind people of several important truths:

1. We remind people that educated "poor" people aren't really as poor as uneducated poor people.

2. We remind people that skilled workers don't really work harder than unskilled workers.

3. We remind people that government spending on education is an investment for the future.

I think those are good and important things to keep in mind.

Saturday, February 18, 2017

Why liberals should own guns

I wrote a Twitter thread about this a while back, but it got deleted in a periodic wipe, so I thought I'd reprise it here for posterity, and expand a little on the earlier point.

For decades now, liberals - a term I'm using loosely to mean anyone on the American left - have mostly shunned gun ownership and gun culture. Around half of Republicans own guns, and 41% of those who call themselves "conservatives," compared to only 22% of Democrats and 23% of those who call themselves "liberals".

Why? One reason is that liberals are more likely to live in big cities, where there is an assumption that violence will be stopped by the police (and by numerous witnesses), rather than by one's own defensive actions. But I think part of it is cultural - liberals, by and large, want to live in a society without widespread gun ownership, and many have decided to "be the change they want to see in the world."

But leading by example hasn't worked. Gun control has been mostly a political non-starter except for a very brief period at the beginning of the Clinton administration. Conservatives continue to use the issue of gun rights as a rallying cry and cultural wedge issue, constantly invoking the fear that liberal politicians will come storming into Americans' houses and take away their means of self-defense.

Now things may be changing. A recent BBC report found that since the election of Trump, liberal interest in gun ownership has spiked. The Liberal Gun Club, run by Lara Smith (no relation), has reported a 10% increase in membership and a "huge" increase in interest.

I think this is a good trend. More liberals need to own guns. Why? Here are two reasons:

1. It would make any calls for gun control more credible.

Right now, many conservatives see gun control as a plot to disarm them. But if liberals are also armed, calls for things like assault weapons bans, or background checks, or stricter licensing requirements sound more like an arms limitation treaty than a call for unilateral disarmament. In other words, instead of liberals saying "Hey, give up your guns," they'll be saying "Hey, let's all give up some of our guns." That's a more credible, more powerful message.

There's a precedent for this. Half a century ago, the Black Panthers, ardent gun nuts, staged an armed protest at the California state capitol. The state responded with the Mulford Act, which forbade open carry - a very big, rapid success for prudent gun control policy. Now, I'm not suggesting liberals stage armed takeovers of government buildings - the 60s were a very different time, and people like Cliven Bundy aren't going about things in the right way. But the larger point is that when liberals have guns, even conservative politicians are willing to embrace sensible gun control measures.

A more metaphorical example is the successful history of U.S.-Soviet and U.S.-Russian arms limitation treaties. The Russians love nukes like Ted Nugent loves guns, but because we had nukes of our own, we managed to make them see how sensible it would be to limit the total amount.

2. It's insurance against the breakdown of public order.

The total breakdown of public order is highly, highly unlikely. It would take a nuclear war, a civil war or coup, or a major natural disaster like a Yellowstone super-eruption to produce a situation in which America reverted to anarchy.

But just because it's unlikely doesn't mean it's pointless to insure against it. This is a tail risk, but the consequences would be huge and disastrous. So it might make a lot of people sleep better in their beds at night knowing that if it came time to grab our guns and get to safety, they'd have guns to grab.

And the erratic nature of Trump's leadership probably increases the tail risk just a little bit - an accidental tweet or a falling-out with Putin might set off a nuclear war.

Also, it's important to remember that small, localized breakdowns of public order do also happen in cities from time to time, and that it can help to have a gun in those extreme cases - especially if you're a minority, and less likely to get immediate help from the cops.

There's actually sort of a historical precedent for this as well. In the episode known as "Bleeding Kansas" in 1854-1861, the U.S. government decreed that slavery's legality in Kansas would be decided by popular vote. Naturally, this made pro- and anti-slavery people both rush to settle in the state, and it also sparked a guerrilla war. The government, paralyzed by the fear of a larger civil war (which soon happened anyway), did little to quell the violence, so the state became a zone of anarchy. The anti-slavery forces, known as Jayhawkers, were well-armed, and eventually won the conflict.

So these are the two main reasons that liberals should own guns. The main argument against owning guns is the risk of accident - over a hundred American kids die from gun accidents every year. Why take the risk? Well, it's certainly possible to minimize the risk of gun accidents - keep the gun in a safe. If you take proper precautions, the risk is far lower than the aggregate statistics might suggest.

But if that risk is just too high, consider simply learning how to use guns. Knowing how to shoot, maintain guns, etc. is probably more important than physically having the guns in your home. Who knows...you might find out it's fun!

Anyway, remember, always safety first. And if you do have mental illness, I'd say don't buy a gun, even if the law allows you to.

Thursday, February 16, 2017

Why go after Milton Friedman?

The top question on my Reddit AMA was "When and why did you start hating on our lord and saviour Milton Friedman?". Two or three other questions were basically the same.

And it's true, I have been on a Milt-bashing kick of late. I did a post evaluating how Friedman's macro theories had held up, and gave them a C+ overall. I wrote another post complaining about his "pool player analogy", which people use to justify not checking their model assumptions against micro data. I wrote two Bloomberg posts declaring the Permanent Income Hypothesis dead (post 1, post 2). And I wrote a tweetstorm (since deleted in a periodic tweet-wipe) about how Friedman's libertarian policy program might have prevented racial integration in the United States.

My revisionist campaign against the late Friedman has ruffled a lot of feathers. Uncle Milt is something of a secular saint among both economists and libertarians. If you say "people don't smooth consumption," economists will talk about the issue calmly and reasonably, but if you say "the Permanent Income Hypothesis is wrong" - which means exactly the same thing - lots of hackles are instantly raised, and people jump to defend the hallowed PIH. Similarly, in policy discussions, if you diss vouchers, people will argue, but if you say "Milton Friedman was wrong about vouchers," they get mad.

So why do it? Why not leave Friedman alone and just talk about his ideas, or the modern-day versions thereof? Here are my reasons:

1. Clickbait!

Saying "X is wrong" gets less attention than saying "X, which Milton Friedman supported, is wrong." So why not do the latter? If it gets more laypeople paying attention to economic research and serious economic ideas, I say that's a good thing.

It's hard to get people interested in the latest research on consumption smoothing. That is a nontrivial thing to do. And putting Friedman's name up there is one way to do it. As long as I'm not misattributing anything to the man, what's wrong with that?

2. Fighting against "Great Sage" culture

"Science alone of all the subjects contains within itself the lesson of the danger of belief in the infallibility of the greatest teachers of the preceding generation," Feynman said. And you should take it from him, right? ;-)

Anyway, that principle makes sense. If understanding is going to progress, people can't have too much reverence for the opinions and theories of respected humans. In the humanities, there tends to be a lot of reverence for the ideas and thoughts of the Great Old Masters. "Kant said X" and "Foucault said Y" are things you'll actually hear when you talk to humanities types. Who cares? Why should I believe Kant or Foucault? I never understood this. I guess in the humanities, lots of things are just matters of opinion, or untestable conjecture, so it's not that important to go out and try to prove Kant wrong. But in a scientific field, it's the knowledge that matters, not the people who found it (or tried to find it and failed). Too much reverence for the teachings of a Great Sage can hold people back from finding better ideas.

I feel like economics has a bit of Great Sage disease. People are way too reverent about old masters like Friedman or Lucas. This is in contrast with physics, where people delight in saying "Einstein was wrong" or "Feynman was wrong" about something. I like the irreverent way better.

3. Annoyance at the mixing of econ and politics

In his scholarly writings, Friedman was careful to draw a distinction between normative and positive economics. But it's not clear his fans got the message. Friedman very publicly engaged in policy advocacy. His most famous book was an ideological tract. They made that book into a TV show!

Do you think that Friedman's status as a top academic economist had nothing to do with the respect and credence that were afforded to his ideological and political ideas? If so, I've got a bridge to sell you. Friedman taught a generation of fans that laissez-faire policies were great, and his academic status lent an imprimatur to those teachings that a Wall Street Journal writer or libertarian pundit never could have enjoyed.

So by informing the public that Friedman got some big things wrong in his academic research (which of course is true of any economist), I hope to be able to dispel a little of that mystique. Fans of Friedman's libertarian ideology need to know that their sage was just as fallible a scientist as any other.

So there you go. Three reasons to publicly criticize the ideas of Milton Friedman. As for reasons not to -- well, the man has already passed away, and he amassed so much fame and respect that the tiny stings of an insignificant insect such as myself pose no real threat to his legacy. So I don't feel guilty at all.

Thursday Bloomberg Roundup, 2/16/2017

This week's Bloomberg View posts:

1. "Still Seeking Growth From Tax Cuts and Union Busting"

Can states win with a low-tax, anti-union strategy? A few are trying. How are they doing? The best way to answer this is to compare adjacent pairs of otherwise similar states. In this post, I take a look at Wisconsin vs. Minnesota and Kansas vs. Nebraska. Short version: laissez-faire policies, far from creating vast wealth for the poor and middle class, don't seem to have much effect at all.

2. "Things Might Be OK if Trump Borrows From Abe"

I don't expect this to happen, but if Trump decided to follow Shinzo Abe's playbook, he could end up being the kind of responsible, forward-looking nationalist leader that Abe has proven to be.

This post is also a rebuke to all those gaijin writers who were yelling "Abe is a fascist!" a couple years back.

3. "Economics Gets a Presidential Demotion"

Economists might think that all the economist-bashing in the media is limited to a rabble of irrelevant angry British lefties. But in fact, the loss of status is real, and Trump's decision not to include the CEA Chair in his cabinet is just one more sign of that. If economists want to retain the extraordinary prestige they've amassed over the last few decades, they're going to need to make a few changes in the way they present themselves to the public. In this post I give three ideas for how they can do this.

4. "Monopolies are Worse Than We Thought"

More and more economists are pointing to rising industrial concentration and market power as the source of many of the problems in our economy. But what's behind that trend? In this post I suggest a few possible explanations - weakened antitrust, overregulation, and the influence of technology.

5. "Market Failure Looks Like the Culprit in Rising Costs"

This post is a riff on a great Scott Alexander post about excess costs in America. The question is why America has anomalously high costs for health care, infrastructure, college education, and asset management. Government intervention doesn't seem to be the explanation, since other rich countries generally have more of this, alongside lower costs. Baumol Cost Disease also doesn't seem like the whole explanation, for the same reason. Market failures, of the kind that most econ students learn about, might be part of the reason. But I suspect that a lot of it comes from what Akerlof and Shiller call "Phishing for Phools" - a combination of limited information and outright trickery that reaches a bad equilibrium.

Wednesday, February 15, 2017

My AMA on r/badeconomics

I did an AMA on r/badeconomics (whose name is tongue-in-cheek, as it's actually much more econ-savvy than r/economics). It ended up being really long, since it was posted a day early. But that just made it more fun! Thanks much to excellent moderator Jericho Hill for setting it up, and to everyone who posted questions.

Questions included:

1. Why do I diss Milton Friedman a lot these days?

2. Which is a bigger problem: 101ism, or the people who say econ is a bunch of neoliberal garbage?

3. Is heterodox econ the antidote to "economism"?

4. Do banks "lend excess reserves"?

5. Which economists in the public sphere do I respect the most?

6. How could the Euler Equation possibly be wrong?

7. Which pop econ books do I recommend?

8. Which economists in the public sphere do I respect the most?

9. What have economists changed their minds about the most in recent years?

10. Is the Permanent Income Hypothesis really "wrong"?

11. Does money need to be "backed" by some valuable commodity?

12. No, really, why do I hate Milton Friedman so much?

13. How did I develop my writing style?

14. Does Bloomberg pay me enough? Am I not afraid of life without tenure?

15. How does one get started being a blogger?

16. Neoliberalism is out of favor these days, so why keep on bashing it?

17. Who will be the next great public explainer of economics?

And more! A fun time was had by all. Check out the whole thing here.

Tuesday, February 07, 2017

No, we don't need an immigration "pause".

I've been getting in arguments with immigration restrictionists for years now. The more reasonable restrictionists suggest that we need an immigration "pause" in order to assimilate the recent big wave of immigrants. They point to the Immigration Act of 1924 as an example, and suggest doing something similar today.

The argument is not implausible. Integration is important. When the citizens of a country view themselves through a tribal lens, it can be very hard to get important things done, and the country can become dysfunctional and - eventually - poor. In the past, America has done well at combining many disparate ethnicities - Irish, Germans, Italians, Jews, Greeks, Poles, etc. There's plenty of reason to believe that this is happening again, with the mostly Hispanic and Asian immigrants of the recent wave.

But there's an argument that we need to speed this process up, by pausing immigration. Without a pause, restrictionists say, the phenomenon of "replenished ethnicity" might keep Hispanic and Asian people feeling like "permanent foreigners" for decades, leading to tribalized politics and social strife. Only because we paused immigration in the past, they say, did we manage to integrate the previous waves.

That's not implausible, but I think a closer look at the history of U.S. immigration shows that past restrictions were not as important as many believe. Here, via Natalia Bronshtein, is a graph showing the history of U.S. immigration by source country. I've annotated the graph with some important events:

The things that stand out most are 1) the big pause in the early-to-mid 20th century, and 2) the big waves in the early 1900s and late 1900s/early 2000s. The y-axis is in absolute numbers; in terms of percentages of the U.S. population, those two waves were about equally big.

One thing you'll notice is that there was no pause in the 19th century. Despite big waves of anti-immigrant and anti-Catholic sentiment, immigration was not banned and didn't halt. An exception was the Chinese Exclusion Act of 1882, but this didn't affect the biggest waves of immigrants coming in at the time. 

But despite the fact that there was no pause and no ban, and despite the fact that Irish and German immigrants kept coming throughout the 1800s, immigrants from these countries integrated quite effectively into American society.

Another thing to notice is that when the big immigration restriction was enacted in 1924, immigration had already fallen substantially from its peak about 15 years earlier. The law was probably important, but maybe not as important as its fans think. I bet the Great Depression, which came just 5 years later, and WW2 would have been almost as effective in keeping immigration low.

Also note that immigration had started increasing substantially well before the 1965 law that loosened official controls. The 1950s were a time of rapidly increasing immigration, despite the legal ban. Nor was the 1965 law change immediately followed by a trend break; immigration increased steadily, but didn't really explode until the 1990s.

This doesn't mean that laws don't have an effect - the Simpson-Mazzoli act, commonly known as "Reagan's amnesty," was followed by a surge in Mexican immigration (and even more that was undocumented, and not on this graph). But overall, most of the ups and downs seem to correspond to economic booms, busts, and wars rather than to U.S. government policy. 

So fans of the 1924 immigration restriction should rethink their understanding of history. Economic factors were probably just as important as laws in determining immigration levels.

Another important observation is that country-specific immigration booms all seem to end on their own. Irish and German immigration trickled off around the turn of the 20th century. Italian immigration experienced a short mini-boom after WW2, but never came close to regaining its previous levels. The Austro-Hungarian and Russian booms were short-lived, one-shot affairs.

Should we expect the Mexican boom to end similarly, on its own, without government controls? Yes. In fact, it already did end, at least a decade ago. It's done, finished, over, kaput:

More Mexicans are going back to Mexico than are coming in. Mexican immigration basically halted sometime in the 2000s and went into reverse. And yes, that includes illegal immigration, which has been negative since the Great Recession.

The Mexican Boom is done. The Hispanic Boom as a whole is not quite finished - Central Americans and Caribbeans continue to come in, though at a slower rate than before. But these are trickling off as well. 

As of now, the main source of immigration to the U.S. is Asian. Asian immigrants are expected to surpass Hispanics as the largest foreign-born population in the country by mid century, unless Trump or other leaders block Asians from entering.

So the fears of "replenished ethnicity" keeping the American population from integrating are, in my opinion, overdone. Immigration booms end on their own. The new immigrants don't come from the same places that the old ones did. There is, therefore, little danger that allowing continued immigration will put us in danger of tribal balkanization.


For a more in-depth post on this topic, see this by Lyman Stone. Most of the conclusions and points are fairly similar, but there's much more theory and data. 

Monday, February 06, 2017

Much of econ has become more scientific

My employers, the editors of Bloomberg View, have an editorial out called "Why Not Make Economics a Science?". I like this post; it gets at a very important point. But I think it leaves some important things out, too.

The article's main criticism of econ - or really, of macro, which most people casually call "economics" - is that it seems reluctant to discard theories:
[T]oo many theorists...have drifted far from the real world...Before the 2008 financial crisis, for example, the standard models more or less ignored finance...Given such spectacular failures, you’d think the profession would have gone back to the drawing board. It hasn’t. True, some tweaks have been attempted...But the error at the core of modern macroeconomics -- that mathematical consistency matters more than empirical relevance -- prevails...Reviving economics as a science will require economists to act more like scientists. If models are refuted by the observable world, toss them out. Rely on experiments, data and replication to test theories and understand how people and companies really behave.
As you can tell from the links (the post itself has many more), the editors have done their homework.

The editors are right that mainstream macro theory hasn't changed much since the crisis - the addition of finance, though important, really is a tweak to the basic structure. Central elements like the consumption Euler equation, TFP shocks, Calvo pricing, infinite forward-looking-ness, exponential discounting, profit-maximizing firms, etc. are still really common in DSGE models, despite the steady drumbeat of evidence against many of these assumptions. And DSGE models, though a tiny bit less popular than at their peak, are still really common in the literature:

The editors are right to be annoyed that the basic DSGE framework has only been tweaked, instead of rethought from the bottom up. Imre Lakatos would probably agree. Science should be about tossing out theories, not generating infinite numbers of theories to sit on the bookshelves gathering mold.

But I also think this brief editorial leaves out a few important things, which I'd like to remind people about:

1. Most economics is not macro.

In the press, we have a tendency to use "economics" as a synonym for "macroeconomics". There are a couple reasons for this. First, it's tedious to keep writing "macroeconomics". Second, the only branches of econ that the public traditionally cares a lot about are macro and trade, and maybe a little about finance. Now people are starting to care a bit about labor too, which is good. But relatively few readers care about game theory, decision theory, industrial organization, development economics, public finance, economic history, environmental econ, ag econ, urban econ, etc.

The Bloomberg View editors know this distinction, which is why they specify in the article that they're talking about macro. But many economists in other fields will tend to read this editorial and get annoyed, since it leaves them out. In fact, as many of us Bloomberg View writers have noted, the non-macro parts of econ are now mostly empirical, and empirical econ is looking more and more like a standard science (i.e., very careful attention to controls). So it's important to remember that.

2. The definition of "macro" is part of the problem.

If you look at what academic macroeconomists - that is, the professors in the macro areas of econ departments - are doing, a lot of it now is pretty empirical. For example, macroeconomists might try to determine whether sticky prices or sticky wages matter more in business cycles, or why companies don't hire more young workers during recessions. Each paper of this sort will typically focus on understanding one piece of the macro puzzle. They'll have theory sections, but the theory will be limited to the phenomenon in question - it won't be a big general model of the whole economy.

Unfortunately, we often use the term "macro theory" only to refer to the big, general models of the whole economy. And since DSGE models are still the main type of big, general model of the whole economy (OLG would be a distant second, and most people don't call VARs "theory" at all), this means that "macro" now means "DSGE" almost by definition. Until someone comes up with a type of theory that isn't called "DSGE" and it gains credence as an alternative (for example, "agent-based" models), macro theory will by definition consist only of tweaks.

So while the Bloomberg View editors are right to be annoyed at the fact that mainstream macro theory hasn't changed much in the past decade, we should all recognize the big changes that have taken place not only in econ as a whole, but even within the macro area itself. No, there hasn't yet been a replacement for the basic Ed Prescott-inspired business-cycle modeling framework. But lots of other important work is being done, much of it very scientific.

Sunday, January 15, 2017

Cracks in the anti-behavioral dam?

This is purely my impression, buttressed with some anecdotes; I don't have any systematic data to back this up. But in both papers and casual discussion, I'm seeing macro people taking behavioral ideas more seriously. 

"Behavioral" is a very squishy idea, but basically I think of it as meaning "imperfect use of information". The difficulty with labeling a model "behavioral" is that we don't really know what information is available. This is why I believe there's a fundamental equivalence between behavioral and informational models - for any "informational" model where agents don't know all of the facts, there's an observationally equivalent "behavioral" model where they do observe the facts and just don't make use of them. 

But anyway, in macro, most models use Rational Expectations, so let's think of "behavioral" as just meaning "non-RE". Actually, non-RE models have been kicking around for a long time - for example, Sargent's learning models, or Mankiw and Reis' sticky information models. What seems to be changing (slightly) is that A) younger people seem to be making non-RE models, B) people are recommending non-RE models for policy analysis, and C) the departures from RE are getting more stark. 

Some recent examples I've seen are:

1. The learning approach to New Keynesian models, promulgated by Evans et al., which seems to be solidly mainstream

2. Mike Woodford's response to Neo-Fisherism, which relies crucially on a slight departure from RE

3. Xavier Gabaix's behavioral New-Keynesian model, where consumers are short-term thinkers instead of infinitely far-ahead-looking

These are all well-established people making these models - they cut their teeth on RE models for years before daring to venture out into behavioral waters. But now I'm starting to see young people doing behavioral stuff as well. A good example, sent to me by Kurt Mitman, is this paper by Kozlowski, Veldkamp, and Venkateswaran, entitled "The Tail that Wags the Economy: Belief-Driven Business Cycles and Persistent Stagnation". 

The basic idea of the paper is that instead of knowing the true PDF of macroeconomic shocks, people re-estimate the distribution every time they see a shock. Not too crazy, right? But that seemingly small departure from RE has big business-cycle implications. 

The reason is tail events. When big shocks are rare, just one of them can change people's whole understanding of how the economy works. How many events like the Great Depression have there been in American history? Really, there are only two since we started keeping national accounts. Two! In 2008 we abruptly went from "There was that one really bad depression one time" to "Whoa, this is a thing that can happen multiple times!". To think that this would have zero impact on agents' beliefs about the economy - which is exactly what RE demands we think - seems implausible. The authors write:
No one knows the true distribution of shocks to the economy. Economists typically assume that agents in their models do know this distribution as a way to discipline beliefs. But assuming that agents do the same kind of real-time estimation that an econometrician would do is equally disciplined and more plausible. For many applications, assuming full knowledge has little effect on outcomes and offers tractability. But for outcomes that are sensitive to tail probabilities, the difference between knowing these probabilities and estimating them with real-time data can be large.
Anyway, to make a long story short, this can produce long economic stagnations, like the one we just had. Taking a gander at the literature review section, I see that these authors aren't the first to use this mechanism in a theory - it looks like it can be traced back to a 2007 AER paper by Lars Hansen. The other similar papers the authors cite, however, all come from 2013 or later, showing that this sort of idea has been gaining currency recently and rapidly.
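To make the mechanism concrete, here's a toy sketch of my own (not the paper's actual model, and all the numbers are invented): an agent who estimates the probability of a crisis from its empirical frequency in observed history will revise that estimate sharply the first time a tail event actually shows up - exactly the kind of belief jump that RE rules out.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimated_tail_prob(history, threshold=-0.1):
    """Agent's real-time estimate of P(shock < threshold):
    just the empirical frequency in the observed sample."""
    history = np.asarray(history)
    return np.mean(history < threshold)

# 80 "years" of ordinary shocks: mild fluctuations, no crisis in sight.
ordinary = rng.normal(loc=0.02, scale=0.02, size=80)

before = estimated_tail_prob(ordinary)   # no tail event observed yet: 0.0

# Then one Great-Recession-sized observation arrives.
history = np.append(ordinary, -0.15)
after = estimated_tail_prob(history)     # jumps from 0 to 1/81

print(f"estimated crisis probability before: {before:.4f}")
print(f"estimated crisis probability after one crisis: {after:.4f}")
```

A single observation moves the agent's perceived tail risk from "never happens" to a permanently positive number, which is why rare disasters can have such persistent effects on behavior in this class of models.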

Now, Kozlowski et al. do dodge one important issue: what data set do agents use to estimate the distribution of economic shocks? The data set they use goes back to World War 2 - they don't even include the Great Depression. But even if we go back further than that, we'll miss earlier episodes like the Panic of 1873, when good national accounts just weren't kept at all. Data availability is so recent that there's almost an observational equivalence between assuming that people use all the available data, vs assuming that people overweight data from their own lifetimes.

If the authors - or some other authors - were to assume that people overweight data from their own lifetimes, as evidence from Malmendier and Nagel suggests, it would have important implications down the line. Instead of people's expectations slowly converging to RE over the decades (centuries?), people would forget the lessons of history and continue being surprised by depressions every 50 or 100 years or so. 

For now, macroeconomists don't have to worry about this question. Authors like Kozlowski et al. can frame their papers as quasi-behavioral papers, where RE is limited by data availability, instead of fully behavioral papers where RE is limited by collective forgetting. So these are still only cracks in the anti-behavioral dam, not a full torrential flood.

But my question is this: What happens when people start applying this mechanism to more complicated shock processes? What if the economy has regime switches that last decades? What if there is more than one kind of rare shock (e.g. the Great Inflation of the 70s/80s)? I've seen some people try to model stuff like this, and the end result can come out looking like practically any type of non-rational expectations you can think of. Meanwhile, empirical macro people are starting to pay more attention to survey measures of expectations. And people from behavioral finance are starting to put things like extrapolative expectations into macro models, to explain macro facts. And evidence like that collected by Malmendier and Nagel continues to pile up.

And I should mention casual conversation as well. More and more young macro people that I interact with, including (even especially?) those who run in "freshwater" circles, are saying that behavioral explanations will have to be part of our understanding of how consumption works. Here's an example from a recent blog comment. 

So I wouldn't be surprised to see some more cracks in the anti-behavioral dam in the years to come. Chris House, my old macro prof, proclaimed three years ago that behaviorism was a dead end and would never have a transformative impact on macro. But seeing papers like Kozlowski et al.'s, I'm thinking that his prediction now looks to have been quite ill-timed.


Just for fun, I'll post some more random behavioral macro papers I see.

"Explaining Consumption Excess Sensitivity with Near-Rationality: Evidence from Large Predetermined Payments", by Kueng

"YOLO: Mortality Beliefs and Household Finance Puzzles", by Heimer, Myrseth, and Schoenle

"Learning about Consumption Dynamics", by Johannes, Lochstoer, and Mou

"Understanding Uncertainty Shocks and the Role of Black Swans", by Orlik and Veldkamp

"The Liquid Hand-to-Mouth: Evidence from Personal Finance Management Software", by Olafsson and Pagel

Friday, January 13, 2017

The $30k Hypothesis

I wrote a Bloomberg View post about the Permanent Income Hypothesis. Basically, more and more research is piling up showing that it doesn't fit real consumption patterns. Some consumption smoothing takes place, but there's also a substantial amount of hand-to-mouth consuming going on. Most economists I know of have already accepted this fact, and usually chalk the hand-to-mouth behavior up to liquidity constraints (or, less commonly, to precautionary saving).

But a new paper on unemployment insurance extension casts major doubt on these standard fixes, especially on liquidity constraints as the culprit. A long-anticipated transitory shock - UI expiration - shouldn't produce a big bump in consumption even if people are liquidity constrained. Nor is home production the answer, since unemployed people are already at home long before UI expires. Something else is going on here - either people interpret UI expiration as a (false) signal of the expected duration of unemployment, or they expected Congress to extend UI at the last minute, or they're just short-term thinkers in general. Or something else. I predict that as more and more good consumption data become available, more and more of this short-termist behavior will be observed, putting ever more pressure on people who use standard models of consumption behavior.

Anyway, as expected, some people came out to defend the good ol' PIH, including my friend David Andolfatto, one of the web's best econ bloggers and a ruthless enforcer of Fed dress codes. David's claim was that the PIH is still useful in some cases, and not in others.

That's fine...IF we know ex ante what the cases are. If it's just an ex post thing - "consumers look like they're completely smoothing in this case, but not in this other case" - then the theory has no predictive power ex ante. How do you know if consumers are going to perfectly smooth in advance, if sometimes they do and sometimes they don't, and you don't know why? Liquidity constraints, which we can probably observe, are one reason to expect PIH not to hold, but they're only one reason - as the Ganong and Noel paper shows, there are other important reasons out there, and we don't know what they are yet.

Here's an example of why I think we can't just be satisfied with the notion that theories work sometimes and not others. Consider a very simple theory of consumption: the $30k Hypothesis. Stated simply, it's the hypothesis that households consume $30,000 a year, every year.

Rigorous statistical tests will reject this hypothesis. But so what? Rigorous statistical tests will reject any leading economic theory, especially as data gets better and better. All theories are wrong (right?). Some households do consume $30k, or close to it. So the $30k hypothesis is obviously right in some cases, wrong in others.
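Here's an illustrative simulation of that point (all numbers invented): even when a sizable share of simulated households consume within a few thousand dollars of $30k, a one-sample t-test on a big sample rejects the $30k Hypothesis overwhelmingly - "statistically rejected" and "useless in every case" are not the same thing.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical consumption data: household spending averages $32k with a
# lot of dispersion, so plenty of households do sit near $30k.
consumption = rng.normal(loc=32_000, scale=15_000, size=10_000)

# One-sample t-test of H0: mean consumption = $30,000.
h0 = 30_000
n = consumption.size
t_stat = (consumption.mean() - h0) / (consumption.std(ddof=1) / np.sqrt(n))

print(f"t-statistic: {t_stat:.1f}")  # far beyond any conventional cutoff
print(f"share within $5k of $30k: "
      f"{np.mean(np.abs(consumption - h0) < 5_000):.0%}")
```

The test is decisive even though the hypothesis is "right in some cases" - which is exactly why rejection alone can't tell you when a theory is safe to use.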

So should we use the $30k Hypothesis to inform our policy decisions? How should we know when to use it and when not to? Judgment? Plausibility? Political expedience?

This is a reductio, of course - if you don't impose any systematic restrictions on when to use a theory, you become completely anti-empirical, and priors rule everything. This is also why I'm a little uneasy about Dani Rodrik's idea that economists should rely heavily on judgment to pick which model to use in which situation. With an infinite array of models on the shelf, economists can always find one that supports their desired conclusions. I worry that judgment contains a lot more bias than real information.

Tuesday, January 03, 2017

Scenarios for the future of racial politics in America

If you don't live in a sensory deprivation tank, you probably noticed that the 2016 presidential election was rather racially charged. Many on the Democratic side charged Trump and his voters with racism, white supremacism, etc. Political scientists found that Trump's most ardent supporters were especially likely to score high on what they call "racial resentment" - their term for the belief that black Americans are getting more than they deserve. Meanwhile, the election results were very polarized by race:

Trump's victory was almost entirely furnished by the white vote, while Clinton overwhelmingly won all minorities. This repeated the pattern of 2012.  

Race has always been important in American politics - except for a brief period in the mid 20th century, blacks and Southern whites have always been on opposite sides of the partisan divide. The 1964 Civil Rights Act is widely acknowledged to have spurred the shift of Southern whites from the Democrats to the GOP. Meanwhile, "ethnics" - East and South European immigrants of the early 20th century - voted reliably Democratic until they merged with whites into the modern version of the white racial group.

Many (myself included) also believe that race is important to U.S. political economy. I buy the story that racial divisions are one of the big reasons that America doesn't have as big a welfare state as Europe. One big example of this is the way the GOP has profited from the "line-cutting" narrative - the idea that black Americans (and possibly other groups as well) are getting more than their fair share, "cutting in line" in front of more deserving whites. That narrative has probably damped white support for social safety nets.

So race is really important. But there are three big reasons why racial politics aren't set in stone. First, racial coalitions can change, as when Southern whites and blacks briefly united to support FDR. Second, racial definitions can change, as when "ethnics" joined the white race in the latter half of the 20th century. And third, the salience of race in politics can increase and decrease. So predicting the future of racial politics in the U.S. is no easy task.

Here are the possible scenarios, as I see them. These are extreme scenarios, of course; reality will probably be a lot messier, just as saying "Hispanics vote Democrat" ignores the 29% who voted for Trump. But anyway, here are five futures I can imagine:

1. Scenario 1: Race Loses Salience

This is not a future in which racial divisions vanish or America becomes "colorblind". It simply means that racial divisions would no longer be the main dividing line in American public life. People would largely stop defining their political interests by race. The GOP starts appealing to more nonwhites, and the Democrats start appealing to more whites - maybe because the parties shift their ideologies, or maybe because the racial groups themselves change what they want. Intermarriage helps by blurring the boundaries between races. In this scenario, Americans go back to fighting over economics, or perhaps national security or religion, instead of about race.

(There's a very extreme form of this scenario where race does vanish, and "American" becomes a catch-all racial group that absorbs all the groups. But I consider this extreme version to be pretty unlikely.)

2. Scenario 2: White Expands

This would be a repeat of what happened in the 20th century. Just as Italians, Jews, and Slavs became "white", Asians and Hispanics could come to be regarded as part of the same group as whites. In this scenario, high rates of intermarriage between whites, Asians, and Hispanics, combined with the fact that many Hispanics already identify as white, blur the distinction between the three groups. The new group might be called "white", or it might be called something else.

In this scenario, blacks would be the odd group out, as they ended up being in the 20th century. Since black people are expected to stay at only around 13% of the American population, even with continued African immigration, this means that there could be no winning coalition that did not include a very large piece of the new racial majority. Race would lose some (but not all) salience, as it did in the late 20th century, when economic issues joined racial issues as the dividing lines between Republican and Democrat.

3. Scenario 3: All Against Whites

In this scenario, tensions between whites and the other racial groups continue to rise. The GOP gains an increasing share of the white vote, while Asians and Hispanics become even more overwhelmingly Democratic. Asians, Hispanics, and blacks might or might not start to consider themselves a single race, but they would be united politically by their opposition to whites. Since other races are approaching demographic parity with whites, this scenario might see an increasingly racialized but still even split - whites could desert the Dems at about the same rate that they lost demographic heft, leaving the two parties still roughly equal for decades.

4. Scenario 4: White Splits

This is similar to Scenario 3, except that the white racial group would split in two. The dividing line might be education, or perhaps just politics itself. Those who left the white race would simply stop self-identifying as white. They might go back to identifying with their national ancestries ("German-American", "Irish-American", etc.), they might combine with Asians, Hispanics, and/or blacks into a new racial group, or they might create some new category for themselves. Meanwhile, the rump "white" race would simply be the GOP-voting part of the current white race, and would continue to identify as white. The dividing line would still mainly be race, but now the Democrats would have a structural advantage as the percentages of Asians and Hispanics increased.

5. Scenario 5: Politics Becomes Race

This is the weirdest scenario. It's a bit similar to Scenario 4, except that some Asians and Hispanics also leave their races and join the GOP-voting whites both electorally and racially. The nation would still have two big racial blocs, and the electoral dividing line would still be race - so this is different than Scenario 1 - but the American races of the future would in no way resemble the ones we see today. Politics and race would fuse into a single concept. Democrats and Republicans would become like Hutus and Tutsis, Bosniaks and Serbs - not necessarily able to tell each other apart visually, yet deeply believing themselves to be two totally different peoples. As you can see from the aforementioned analogies, I consider this to be a pretty pessimistic scenario.

These five scenarios don't exhaust the possibilities (everyone could start to identify as black!), but they're the only ones that seem to me to have any chance of happening. Actually, I'm not sure about Scenario 1 - it's kind of wishful thinking on my part.

Scenario 2 has the weight of history on its side - it's happened twice before. The white race in America has proven very capable of expanding to take in new entrants, as it did with Germans and Swedes in the 19th century and East and South Europeans in the 20th.

Scenario 3 is most similar to the recent electoral outcomes, so it's sort of a straight-line trend projection of increasing racial polarization. I also consider this to be a pretty pessimistic scenario.

Scenario 4 is a projection of a somewhat less prominent trend - the increasing polarization of the white electorate by education. College has emerged as one of the key institutions of American society, if not the key institution, and there's a chance that skill-biased technological change will make that situation irreversible. A combination of progressive education and the venom of GOP-voting whites could cause liberal whites to simply decide that the white race isn't something they want to be a part of anymore.

Scenario 5 is the projection of yet another trend - the Big Sort. Like-minded Americans are already moving near each other and marrying each other. Social media, and the splintering of mass media in general, could accelerate the trend. Partisanship is virulent in America at the best of times, and it does seem conceivable that it might eventually be even more powerful than race.

So what do you think? Did I miss any plausible scenarios? Which scenario do you think will come to pass? Which will be best for the Democrats, and which will be best for the GOP? How can the parties nudge American society toward their desired scenario? And what would be the consequences of each scenario, for policy, for people's lives, and for the integrity of the nation-state? How should we intellectuals try to steer the populace, if indeed we have any ability to do so? These are the big questions, and they're all beyond my ability to answer just yet.

Sunday, January 01, 2017

Some thoughts on UBI, jobs, and dignity

One of the more interesting arguments these days is between proponents of a universal basic income (UBI) and promoters of policies to help people get jobs, such as a job guarantee (JG). To some extent, these policies aren't really in conflict - it's perfectly possible for the government to mail people monthly checks and try to help them get jobs. But there are some tradeoffs here. First, there's money - both UBI and JG cost money, and more importantly cost real resources, which are always in limited supply. Also, there's political attention/capital/focus - talking up UBI takes time and attention away from talking up JG and other pro-employment policies.

One of the key arguments used by supporters of pro-employment policies - myself included - is that work is essential to many people's sense of self-worth and dignity. There's a more extreme variant of this argument, which says that large-scale government handouts actually destroy dignity throughout society. Josh Barro promotes this more extreme argument in a recent post.

This seems possible, but it's very hard to get evidence about whether welfare payments are actually dignity-destroying. Anyone who goes on welfare probably has other bad stuff happening to them in life, so there's a big endogeneity problem. Meanwhile, time-series analyses of nationwide aggregate happiness before and after welfare policy implementation are unlikely to tell us much. The best way to study this would be to find some natural experiment that made one group of people eligible for a big UBI-style welfare benefit, without allowing switching between groups - for example, payouts to some Native American group might fit the bill. My prior is that handouts are not destructive to dignity and self-worth, as Barro assumes - I predict that they basically have no effect one way or the other. But this is an empirical question worth looking into.

In his own post, Matt Bruenig argues against Barro. His argument, basically, is that many rich people earn passive income, and seem to be doing just fine in the dignity department:
If passive income is so destructive, then you would think that centuries of dedicating one-third of national income to it would have burned society to the ground by now...In 2015, according to PSZ, the richest 1% of people in America received 20.2% of all the income in the nation. Ten points of that 20.2% came from equity income, net interest, housing rents, and the capital component of mixed income...1 in 10 dollars of income produced in this country is paid out to the richest 1% without them having to work for it.
I don't think this constitutes an effective rebuttal of Barro, for the following reasons:

1. "Work" is subjective. Many rich people believe that investing constitutes work (I'd probably beg to differ, but no one listens to me). And founding a successful business, which creates capital gains, certainly requires a lot of work. 

2. Passive income very well might be destructive to the self-worth of the rich, on the margin. In fact, I have known a number of rich kids who inherited their wealth, and devoted their youths to self-destructive pursuits like drug sales and petty crime. It could be that for many rich people, the dignity-destroying effects of unearned income are merely outweighed by the dignity-enhancing effects of high social status and relative position.

3. Many rich people became wealthy through work - either a highly paid profession like CEO, or by starting their own companies. This past work may provide dignity for old rich people, just as retired people of all classes may derive dignity from their years of prior effort.

And Matt's argument certainly doesn't counter the less extreme version of the "jobs and dignity" argument (i.e., the version made by Yours Truly). Even if passive income isn't actively harmful to dignity, it might not be helpful either, in which case pro-employment policies would be more effective than UBI in promoting dignity.

But that said, I do think Matt's policy proposal is a good one:
A national UBI would work very similarly. The US federal government would employ various strategies (mandatory share issuances, wealth taxes, counter-cyclical asset purchases, etc.) to build up a big wealth fund that owns capital assets. Those capital assets would deliver returns. And then the returns would be parceled out as a social dividend.
This is something I've suggested as well. I see it as an insurance policy against the possibility that robots might really render large subsets of human workers obsolete. 

UBI isn't a bad policy. If robots take most of our jobs, it will be an absolutely essential policy. I just don't think it solves the dignity problem. And with Trump winning elections in part by promising to restore dignity, I think Democrats need an issue to counter him, and jobs policy is far more likely than UBI to fit this bill. 

Saturday, December 31, 2016

Who is responsible when an article gets misread?

How much of the responsibility for understanding lies with the writer of an article, and how much with the reader? This is not an easy question to answer. Obviously both sides bear some responsibility. There are articles so baroque and circuitous that extracting the point would require an unreasonable amount of time and effort, even for the smartest reader. And there are readers who skim articles so lazily that even the simplest and most clearly written points are lost. Most cases fall somewhere in between. And the fact that writers don't usually get to write their own headlines complicates the issue.

See what you think about this one. The other day, Susan Dynarski wrote an op-ed in the New York Times criticizing school vouchers (a subject I've written about myself). Dynarski opens with the observation that economists are generally less supportive of vouchers than they are of most free-market policies:
You might think that most economists agree with this overall approach, because economists generally like free markets. For example, over 90 percent of the members of the University of Chicago’s panel of leading economists thought that ride-hailing services like Uber and Lyft made consumers better off by providing competition for the highly regulated taxi industry. 
But economists are far less optimistic about what an unfettered market can achieve in education. Only a third of economists on the Chicago panel agreed that students would be better off if they all had access to vouchers to use at any private (or public) school of their choice.
Here's the actual poll: 

As you can see, the modal economist opinion is uncertain about whether vouchers would improve educational quality, while the median is between "uncertain" and "agree". This clearly supports Dynarski's statement that economists are "far less optimistic" about vouchers than about Uber and Lyft. 

The headline of the article (which Dynarski of course did not write) might overstate the case a little bit: "Free Market for Education? Economists Generally Don’t Buy It". Whether the IGM survey shows that economists "generally don't buy" vouchers depends on what you think "don't buy" and "generally" mean. It's a little click-bait-y, like most headlines, but in my opinion not too bad. 

Scott Alexander, however, was pretty up in arms about this article. He writes:
By leaving it at “only a third of economists support vouchers”, the article implies that there is an economic consensus against the policy. Heck, it more than implies it – its title is “Free Market For Education: Economists Generally Don’t Buy It”. But its own source suggests that, of economists who have an opinion, a large majority are pro-voucher... 
I think this is really poor journalistic practice and implies the opinion of the nation’s economists to be the opposite of what it really is. I hope the Times prints a correction.
A correction!! Of course no correction will be printed, because no incorrect statements were made. Dynarski said that economists are "far less optimistic" about vouchers than about Uber/Lyft, and this is true. She also reported close to the correct percentage of economists who said they supported the policy in the IGM poll ("a third" for 36%). 

Scott is upset because Dynarski left out other information he considered pertinent - i.e., the breakdown between economists who were "uncertain" and those who "disagree". Scott thinks that information is pertinent because he thinks the article is trying to argue that most economists think vouchers are bad. 

If Dynarski were in fact trying to make that case, then yes, it would have been misleading to omit the breakdown between "uncertain" and "disagree". But she wasn't. In fact, her article was arguing that economists tend to have reservations about vouchers. And she supports her case well with data.

This is a special kind of straw man fallacy. Straw manning is where you present a caricature of your opponent's argument. But there's a particularly insidious kind of straw man where you characterize someone's arguments correctly, but get their thesis wrong. You misread someone's argument, and then criticize them for failing to support your misreading. Other examples of this fallacy might be:

1. You write an article citing Autor et al. to show that the costs of trade can be very high. Someone else says "This doesn't prove autarky is better than free trade!" But of course, you weren't trying to prove that.

2. You write an article arguing that solar is cost-competitive with fossil fuels by pointing out that solar power is expanding rapidly. Someone else says "Solar is still a TINY fraction of global generating capacity!" But of course, you weren't trying to refute that.

3. You write an article saying we shouldn't listen to libertarian calls to dismantle our institutions. Someone else says "Libertarians aren't powerful enough to dismantle our institutions!" But of course, you weren't trying to say they are.

I think Scott is doing this with respect to Dynarski's article. To be fair, his misreading was somewhat assisted by the headline the NYT put on the piece. But once he was reminded of the fact that the headline wasn't Dynarski's, and once he re-read the article itself and realized what its actual thesis was, I think he should have muted his criticism. 

Instead, he doubled down. He argued that most reasonable people, reading the article, would think it was arguing that economists are mostly against vouchers. But his justification for this continues to rely very heavily on the wording of the headline:
First, I feel like you could write exactly the opposite headline. “Public School: Economists Generally Don’t Buy It”... 
Second, the article uses economists “not buying it” as a segue into a description of why economic theory says school choice could be a bad idea... 
In the face of all of this, the New York Times reports the field’s opinion as “Free Market In Education: Economists Generally Don’t Buy It”.
On Twitter, he said: "the actual article is more misleading than the headline." But he appears to say this because he takes the headline - or, more accurately, his reading of it - as defining the thesis that Dynarski is then obligated to defend (when in fact she wrote the piece long before a headline was assigned to it). When he finds that Dynarski doesn't support his reading of a headline she didn't write, it is her article, not the headline, that he calls "misleading".

Of course, the fault here is partly that of the NYT, which used a headline that focused only on one part of Dynarski's article and overstated that part. It's a little harsh for me to say "Come on, man, you should know an article isn't about what its headline says it's about!" Misleading headlines are a problem, it's absolutely true. But after learning that Dynarski didn't write the headline, I think Scott should have been able to read the article on its own, and go back and evaluate the arguments Dynarski actually makes. It's the refusal to do this that seems to me to constitute a straw-man fallacy.

Anyway, one last point: I think Dynarski is actually wrong that economists are more wary of vouchers than other free-market policies. Yes, economists in general are probably wary of voucher schemes. But they're also a lot more favorable to government intervention in a variety of cases than Dynarski claims. Klein and Stern (2006) have some very broad survey data (much broader than IGM). They find that 67.1% of economists support "government production of schooling" at the k-12 level, with 14.4% uncertain and 17.4% opposed. But they also record strong support for a variety of other interventionist policies, such as income redistribution, various types of regulation, and stabilization policy. On many of these issues, economists are more interventionist than the general public! So I think if Dynarski makes a mistake, it's to characterize economists as being generally pro-free-market. Their ambivalence about vouchers doesn't look very exceptional.