
On this episode of the Utilitarian Podcast, I talk with Bryan Caplan. Bryan is a professor of economics at George Mason University, and his latest book is "Labor Econ Versus the World: Essays on the World's Greatest Market".

We talk about some of Bryan's big ideas, like open borders and housing deregulation, whether incremental change is better than big ideas, and why people disagree with Bryan. We also discuss economic policy in poor countries, and whether trying to improve such policy is more cost-effective than buying bed nets to prevent malaria. I also make Bryan give an estimate of how much richer the world could be with optimal economic policy.

We discuss automation and universal basic income, whether a world government is a good idea, risks from AI and engineered pandemics, Bryan's objections to utilitarianism, and the labor economics of worker wellbeing improvements and marriage. This podcast has timestamp chapters in the description, which is supported on some podcast players. And as always, I can be contacted by email at utilitarianpodcast@gmail.com.

Here's the transcript: 

Gus: Bryan, welcome to the Utilitarian Podcast.

Bryan: Thank you very much for having me. I really appreciate it.

Elevator pitch: Open borders

Gus: Okay. You have advocated for several big, neglected ideas and just for listeners who haven't heard, although most listeners will have heard, could you give the elevator pitch for open borders?

Bryan: Sure. The idea of open borders is that we should let anyone take a job anywhere, which is a fantastic and simple way for the vast majority of people on earth who are not lucky enough to be born in rich countries to solve their own poverty problem by just moving to a rich country and getting a job.

This is actually one of the most reliable ways known. In fact, I think I would say it is the most reliable way known for escaping absolute poverty on earth, which is simply to leave a country with high poverty levels and go to a rich country. And we really do see that someone can move from a very poor country like Haiti, and really the day they show up in the US their earnings can easily multiply by a factor of 10 or 20.

Gus: You start from this one giant fact, and then you say all of the objections to open borders, for example, would have to overcome this wild increase in earnings.

Bryan: Correct. Now in the background is an economic theory about why some people make a lot of money and others don't make a lot of money.

And the basic economic theory is productivity. Tom Cruise makes a pile of money because he brings a lot to a movie. If I were an extra on a set, I would not make much money because I would not be bringing very much to the movie. Which means then that the main reason why wages would be so much higher for the Haitian in the US than in Haiti is that productivity is so much higher in the US than in Haiti, which means that we don't need to have employers be nice.

Rather we can just rely upon regular market forces, where employers in a country where labor productivity is high say, "well, workers produce a lot here, so we better pay them a lot or else we're not going to have workers working for us". By the way, the idea that employers compete for workers should be especially obvious to almost everyone on earth right now, because at least in the US we are experiencing the greatest labor shortages of my lifetime, where employers are quite desperately trying to get workers competing with each other. So anyway, that's something that's noteworthy here.

Gus: Give us a sense of the magnitude of the gains on the table, if we implement open borders.

Bryan: So when economists have estimated how much it would enrich humanity if everyone could take a job anywhere, the typical estimate is something like doubling the production of the world, which has a name that we don't usually hear about, which is GWP - gross world product.

It's the same idea as gross national product, except this is for the world. So notice, open borders is not claiming that if you move workers from Haiti to the US, we get a bigger population and then GDP goes up just through that simple mechanism - everybody knows that. It's saying something much more important, which is that the production of humanity rises because you're moving people from places where they produce little to places where they produce much.

So what humanity actually accomplishes then goes up. And again, out of all of the policy ideas that economists have ever measured the effects of, this is probably the single biggest thing they've ever found. Precisely because current regulations are so strict and so destructive, you really are just needlessly trapping most human ability in places where it is just hard for anyone to accomplish much of anything.

Big ideas VS incremental change

Gus: Let's just assume you're right in the case for open borders. Why focus on this big idea that's so unlikely to be implemented anytime soon? Why not focus on incremental change? You could have written a book about increasing high-skilled immigration to the US by 25 percent.

Bryan: So probably one of the arguments is not going to be very utilitarian, and that is just that it is more interesting to me to work on big ideas and ideas that I think are neglected. Now you could make a utilitarian defense and say the incremental ideas already have plenty of defenders. There isn't that much additional work to do on them. So the extra value, the additional contribution of a scholar who works on a neglected idea that is intrinsically of great value, is still going to be high.

You could say that, it's plausible, but it's hard to prove one way or the other. So in terms of just what I think I'm good at, I think that I am better at going and defending ideas that are currently unpopular that have a lot of merit than I would be at doing something incremental. So there's that.

Then obviously there's also the dynamic which I do talk about in the book of what's called the Overton Window, where if all you ever put forward are mild changes to the status quo then the whole conversation is about something between the status quo and the mild changes. If you do put forward bigger ideas then often, what happens is it gets people thinking over the longer term and saying maybe something bigger would be possible.

So I especially am always hoping at least that my work will be very influential on young elites who are not yet able to do much of anything other than read and talk, but who in the future will actually have a lot of influence. And then I'm hoping that I will put these ideas in their heads and they'll be thinking "could that actually be done? Maybe".

Why people disagree with Bryan

Gus: I must say, reading through your work, you handle the objections. It's difficult to give a concrete objection that you haven't already addressed. So, how do you explain why people continue to disagree with you?

Bryan: Yeah, that is a great question. I would say it's not just me. There are a lot of arguments in the world that are quite compelling and yet there's still a lot of people who disagree.

The arguments for evolution seem quite ironclad. And yet in the United States, about half of people would just say "nah". So it's important just to remember that it is not that unusual for there to be very compelling arguments that just don't persuade people. As to why they wouldn't persuade people, so a part of the reason, of course, is that most people are old.

Most people are over 30 anyway, and it's just really hard to persuade anyone over that age of anything important. That's a general fact about human beings, which, as far as I know, has always been true. Now, if you ask why it's so hard to change their minds and easier to change other people's, really everybody's pretty hard to change.

It's just that it stands out as being almost impossible once you get over a certain age. Another big part is sometimes arguments are counter-intuitive. So there's that. But probably a bigger one is people get emotionally attached to certain answers. Now what's striking to me in a lot of what I've thought about is there are some answers that seem emotionally appealing cross-culturally and over time. I often put these under the heading of what psychologists call social desirability bias.

There are just certain answers where, if you think about it for about 10 seconds, you realize "that's not true". And yet human beings find them so emotionally appealing that this is the default they rush to.

Things like, we should be doing everything possible to prevent COVID. Everything possible would mean that we just stop doing anything else and live in isolated rooms. And if people starve, they starve. Okay, not everything possible - "well, you said everything possible". So what you said is just flatly wrong.

And people like to speak this way. Human beings like hyperbole, but sometimes they get sucked in by their own hyperbole. So I think a lot of this is the social desirability feeling around nationalism. And "this is our country, and we wouldn't want to do any possible thing that would put our country at the slightest risk".

So I think that is something that matters for people too. And even if you say "look, allowing any change allows for some risk, and not allowing change is going to create some risks too". You might say "let's not allow computers, maybe we're going to get SkyNet and it's going to go and destroy us". But on the other hand, if we don't get computers, maybe we'll get destroyed because we didn't have computers.

A careful attitude?

Gus: How much weight would you put on a conservative attitude that just says: even in the absence of clear arguments against one of these ideas you have, we should be slow to change. There might be something we have missed.

Bryan: Well, what I would say is that it's a very reasonable default before you've heard arguments. And it's also reasonable when you can't really have an open discussion about something. When there's an idea that's just so taboo, then that might be an issue. But I would say that, rather than being a good reason for rejecting radical ideas, it's a reason to say, "look, radical ideas should get a ton of scrutiny", which I do try to do in the book. I really do try to answer the objections, and I try to avoid ever shaming anyone. So there are some arguments in the book that are, intellectually at least, quite challenging. And yet people advised me not to put them in the book, because they could offend people.

And I said, "look, this isn't a book about not offending people". It's a book about getting to the bottom of the matter. And especially making everyone who disagrees feel like I actually put their best arguments forward, even if they themselves are afraid to do so.

Most notably in Open Borders, I addressed the argument that poor countries have dangerously low IQs, and if we let them in, they're going to destroy us with their stupidity, basically. And this is an argument that I think a lot of people actually believe. And when you look at the numbers, at least some of it checks out. Like we actually do see generally very low IQs in the poorest countries in the world.

And we also see that those countries seem to be dysfunctional in a lot of ways, which at least plausibly might have something to do with people there not being very smart. And so in Open Borders, I try to put the best case for this argument on the table and then see whether it checks out. So partly what I do is go over the work that has been done and say, "look, even if you think that your work is totally solid, it still actually gives you numbers very consistent with my numbers".

But also, by facing it, I was able to go through some further math, in particular on the question of, to what extent is low IQ just caused by growing up in dire poverty. So unless you want me to, I'm not going to go over exactly the reasoning. But what I came up with is a lower bound on how much of the IQ gap between rich and poor countries is explained by the dire situation in poor countries.

At least 40% of that gap, I think probably more like 60%, which means that in the process of talking about this question, I actually came up with another argument for open borders, which is: we really need it to get the intelligence of humanity up a lot. There is nothing else known in all the research on intelligence that would do more to improve human intelligence than moving people from poor countries to rich countries at birth.

Economic policy in poor countries

Gus: So, the strict immigration policies of rich countries, that's one of the factors that you identify as a cause of poverty in this upcoming book of yours, Poverty - Who to Blame? It's not the biggest one, though. Could you tell us what the biggest one is?

Bryan: Actually I think it might be the biggest one. Let's see. No, actually, no, you're right. You're right. Because I also just talk about bad policies in poor countries. My view, and the view of a lot of people who have worked on this, is that if poor countries had just had policies very similar to those of rich countries for the last hundred years, they too would be rich.

And basically, if that could have just raised the growth rates of these countries by even one and a half percentage points per year, then that would actually have made them much more than twice as rich. Like I was saying, for open borders, a reasonable projection is that it would double the wealth of the world. If you could have improved the economic growth of poor countries by one and a half percentage points, for a hundred years, that would have actually been enough to more than quadruple their income. Just having low quality policies has been probably an even bigger factor.
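A quick arithmetic check of the compounding claim above, using only the figures Bryan cites (1.5 extra percentage points of growth per year over a hundred years); this is an illustrative sketch, not a calculation from the episode:

```python
# Compounding 1.5 extra percentage points of annual growth over 100 years.
extra_growth = 0.015   # 1.5 percentage points per year, as cited above
years = 100

multiplier = (1 + extra_growth) ** years
print(f"Income multiplier after {years} years: {multiplier:.2f}x")  # ~4.43x, i.e. "more than quadruple"
```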

Bed nets or policy?

Gus: The effective altruism movement has advocated things like donating malaria nets to help save children in Africa. If we're seeing this as two interventions opposed to each other, if we have a million dollars, should we spend it on malaria bed nets or should we spend it on whatever we can do pragmatically to increase the quality of immigration policy or increase the quality of economic policy in developing countries?

Bryan: Yeah. That is a great question. Definitely, in terms of just having great confidence that you're doing some good, it's gotta be malaria nets. In terms of what is really the best approach, honestly, what I would actually say is probably the best use would just be to spend the million dollars getting some people who are great international law experts and just find out like, what is the easiest country to move people into legally? Like where are the loopholes where we could actually get people in.

So if you could actually just use that million dollars to get a hundred people in from poor countries to rich countries, then I'd say that would very likely be better. Because not only do you help those hundred people, but on top of that, it also means they're likely to be sending a lot of help home, which can be used to buy malaria nets and much more.

For that one specifically, the problem is that international law is really complicated. You have to know the laws of a whole lot of countries. I did have some friends who were saying it seems like Argentina might actually have a pretty big loophole. As to whether it's really exploitable or not, it's just hard to know without getting some people who really know what they're doing and trying it.

And then in terms of improving the quality of economic policy, that's one where it's probably especially hard to do.

Gus: I would at least be afraid of missing some local knowledge and trying to intervene in a way that causes unintended consequences.

Bryan: There's always a reason to be somewhat worried about that. I would tend to start with the most asinine policies that exist, where I would say, look, they've just been examined, forwards and backwards. There just is an issue. Things like getting India to change its regulations on buildings.

There could be some unintended consequences. But come on, look. It's a country of over a billion people. If you measured homelessness in the way we do in America, they probably have hundreds of millions of homeless people. Indian statistics won't say that because they have a lower standard for what counts. But you have rules that make it very hard to build tall buildings in India.

And if you could figure out some way of just going and diluting that or relaxing that then seems like that would do some enormous good. And that's a case where my inclination would be to say, get someone who really knows India and say, look, shop this around. Can we find one place in India where they'll play ball and where we could actually somehow use this money to get them to go and drastically change their laws, and then be a model for the rest of the country.

How much richer could the world be?

Gus: You have generally pushed for deregulation and libertarian solutions. You're in the process of arguing for deregulating housing, for example, and land use. So imagine that we have two different worlds. In one world, we continue along the path we're on. There are no catastrophes. There are no sudden windfalls. In the second world, you get to decide economic policy on all fronts: energy, housing, immigration. How much richer could the world be, say, in 2100? How much richer could the second world be?

Bryan: I'm gonna sound like a megalomaniac, but yeah, I think I could triple the per capita income of the world, if you let me actually do all of those things. Of course there's a lot of people who would just think that's crazy or disagree, or who'll say we can triple it by doing my thing.

Just the open borders, that's one where the case seems so strong, and by itself it seems like you could get a doubling. Then add on things like housing deregulation, which seems like another enormous win, very obviously in the richest countries.

But then you learn about countries like India and say, wow, even though they're dirt poor, they are still strangling housing in a country where people are sleeping on the streets. It's hard to build a building. So odd. In a way you have to first appreciate how dysfunctional the status quo is to realize that what I'm offering is not so amazing. It's not that I have some idea that is so innovative that it hasn't occurred to other people.

It's more just like looking at what a mess things are now and saying, I wouldn't do that. So honestly, I guess I would say, I think I could get per capita income for the world up by five times compared to trend, if people would really listen to me.

Along the way I would also be saying, stop fighting. Everybody dismantle your militaries. We're not gonna have war anymore. This is a crazy thing to do. And then other things like deregulating nuclear power, I think there's a lot of gains from that as well.

Gus: What would it actually mean to have a five times increase? Can you paint us a picture?

Bryan: The tempting thing is just to say, let's see what we've got right now, multiply that by five, and then find some people who are living that way and say, that's what we would have. That's what the typical person would have. That is probably not correct because, of course, even if everything stays on the current course, there's just going to be a lot of technological improvement. I think that's very reasonable to predict.

Basically, when you look at a millionaire today, it's not like they're living the lifestyle of JP Morgan in 1900. They're living something that in many ways would have just amazed JP Morgan. At the same time there are things that Morgan had that you wouldn't have with a million dollars today. You don't have an entire army of servants, and you don't live in a castle. I don't know if Morgan really lived in a castle. He could have, if he wanted to.

Per capita income of the world right now is about $10,000, maybe a little bit lower, maybe in the $8,000 to $10,000 range. On the current path, it's very reasonable to think that would be up to $40,000. And if you listen to me, let's say $200,000. So just imagine that average incomes would be at the level of what we think of as a very successful lawyer.

Albeit with relative price changes. Like the tech stuff is probably going to be fantastic compared to what we know. And then things like access to personal human servants, in a world that rich, are going to be very rare. Just like right now in rich countries hardly anyone is a butler, whereas 150 years ago in rich countries, rich people would have butlers.

When the people that are potential butlers are getting that rich, they're probably not going to be butlers for you. They're probably gonna be doing something else. Robot butlers, very believable, now that I think about it. Yes.

Automation and UBI

Gus: I see two big potential objections to this view of the world in which we can become generally more libertarian and increase our wealth.

The first is the risk of automation. So is there a case for a universal basic income based on the following story? There's a long-term trend towards automation and this trend will continue. And we imagine a world in which humans cannot compete with robots and AIs in almost any job. So AIs will be better at teaching and researching economics than you are. And in this world, should the government give everyone money so that they can survive and thrive, ultimately?

Bryan: So there's two parts of this story. One is a reading of economic history and the other one is a hypothetical. In terms of the right way to read economic history, I will say that it gives zero support for the automation story.

In the sense of automation destroys jobs. What we have rather seen for the last, gee, you can put it back a hundred years or 200 years or 500 years, whatever you want to do. For whatever timeframe you look at, what we'll see is that automation is destroying some jobs and yet full employment generally remains.

During the Great Depression, there was an idea: "Now this time it's different. Now we're so automated that this is why we have unemployment". And then guess what? It turned out that it had nothing to do with technology. And it was very easy to go and reemploy the workforce in a wide variety of jobs that were different and just basically reallocate human labor.

What really happened after this was not that jobs were destroyed. Rather we had a large increase in total employment because women entered the workforce to a much larger degree. So that's really what happened. And it had nothing to do with technology. And again the intuition's pretty clear, which is that when technology gets better, you stop using human beings for the areas where technology is a good substitute.

And then you say, now what? And there's always something else you can do. Always has been. So this is what has always occurred. There's no sign this is in the slightest way changing these days. There's no sign at all that we're running out of things for human beings to do, right?

This is just paranoia. All that it really amounts to is a story saying, oh, we're losing some jobs in this one industry due to technology. And if this happens to all industries, then employment will disappear. You could always have said that about every prior innovation, yet that was incorrect, because when technology makes human beings obsolete in some way, people then put their thinking caps on and say, okay, there's something else that we could do. And there always has been.

Gus: Workers have moved from fields to factories, to offices. Where will they move if the computers and the robots can do everything?

Bryan: Yes. So that is the question. If they can do literally everything and they are better at everything than human beings are. Then one thing you could say is, well, human beings might just keep doing the same things, but just not being very good at them. But as long as it has positive value, that's fine. So a robot could have 10 times your productivity, and you can say, yeah, but I have one tenth of the robot's productivity, so I can earn one tenth of what a robot earns. In a world that's fantastically rich, that could be an enormous amount of riches. So there's that, but again, the main thing is that now we have moved from economic history to a hypothetical.

So anytime someone says they're projecting past trends, that's where I will veto them and say, no, you're not projecting past trends. Were you projecting past trends, you would just say full employment forever, combined with higher living standards, and not just for the people that are in the innovative industries, but for everybody.

So barbering right now is almost the same as barbering a hundred years ago. And yet barbers don't live in the style of 1922, barbers live modern lifestyles too. Because there's been such an expansion in the production of everything else that the time barbers spend cutting hair is compensated so that they can go and afford all this other stuff that is being produced.

Now in the pure hypothetical, that's quite different. That's where you say, we'll just imagine that the wages of all human beings have been driven to zero, in the same way that the wages of most horses have been driven to zero. That has occurred. So just imagine it's like that. Imagine human beings are like horses, where compared to the computers, we're so inflexible that there are so few alternative things to do with us.

So basically, even if you grant that scenario, that's one where I'll still say that the idea of the universal basic income is a very foolish one, because even in this world, not everybody will need the income. And so my general view for all philanthropy, and this is a fundamental principle of effective altruism, is if you have a budget for philanthropy, you should try to allocate it in the most helpful possible way for humanity.

If there were a billionaire that said, "Hey, I've got $8 billion I'm going to give out", no effective altruist would say "great, give a dollar to every person on earth". In fact, that would be almost the height of stupidity. Let's find the best place to spend the money and then put it all there. Let's do that.

The universal basic income is basically saying, let's take an enormous amount of public money and let's go and just split it evenly between people. That's the best we can do. And I'll say, look, you can always do better than that. That is just a terrible idea. So of course, in the scenario that you're envisioning, there are going to be a lot of people who own non-labor assets, maybe land, maybe stock, and they are not going to need this universal basic income.

Could very well be that productivity will be so high that even just a few hours of work a week will be more than enough to give someone a very high standard of living. So you've got that as well. My view actually is that no matter how good the world gets, there's always going to be a couple of hell holes.

There's always going to be a place kind of like North Korea or Haiti, where though the rest of the world has moved forward, they are stuck in their time loop, just continuing their dysfunctional stuff. And I'd say that's where you should be spending your philanthropy: trying to help with the problems the modern world has just stubbornly failed to solve, or that have stubbornly resisted the solutions that have been offered.

By the way, this does take me back to a more general view that I have, which is that all universal programs are foolish and not for any particular political viewpoint, but from just common sense.

It does not make sense to take money from everyone and give it to everyone. At minimum it's just futile, because you could have just taken less from the people that need it less, and then we could save the transaction costs. But on top of it, it also leads to distorted incentives.

Taxes have to be very high in order to fund universal programs. And so that discourages people from working and investing and otherwise trying to take care of themselves and deliver value to the world. And so a universal basic income, to my mind, is really just a particularly egregious form of this general mistake that people make.

They want to give money to everyone. And again, as to why people would want to give money to everyone, I will chalk this one up very strongly to social desirability bias. It just sounds good. Everyone should get it. Everyone. What about the people who don't need it? Everyone should get it.

Why are you going to go and waste perfectly good philanthropic dollars on people that don't need it? I'm happy to talk about that more, but when you actually press people, the arguments that people will give for wasting trillions of dollars on the people who don't need it are so flimsy. So ill-based. So unresearched. And just the fact that people are so comfortable with this: there's some okay research saying maybe this is a good reason to do it, so let's keep spending trillions of dollars on it, rather than saying, all right, let's spend a few million seeing whether the story checks out, because otherwise we're wasting trillions of dollars every year forever. Bad idea.

Gus: Okay. So is it safe to say that you reject universal basic income?

Bryan: Yes. Yes.

Totalitarianism VS other risks

Gus: Okay. The second challenge, you could say, to your worldview is to talk about the risk of human extinction. So here's the case for world government to prevent human extinction: There are classes of risks such as engineered pandemics, or maybe unsafe AI, that have these potentially enormous negative externalities, like killing billions of people. And we can prevent such catastrophes, but doing so is a public good. So we need a world government to supply the public good of protection from these risks.

And you have written a paper arguing for the risk of totalitarianism. So I'm interested in the trade-off between world government for preventing human extinction from these risks I mentioned, and a world government as a potential risk itself by the world government becoming totalitarian.

Bryan: Great. So a deep and wonderful question. So let me try to answer it in all the parts that you're asking. So let's just start with what it would take to get world government from where we are. I think it would be amazing if you could accomplish it without World War III. Just think about the countries that won't cooperate with the most basic stuff right now, and what would it take to get them into your world government? The answer is World War III, probably actually World Wars III, IV, and V, and that would be a fortunate scenario here. It's just really hard to do it.

So I would say on that alone, even if it would be wonderful once it was accomplished, once you realize that it would require terrible wars in order to make it happen, it's one where you say, all right, look, there's just no peaceful way forward. So terrible idea.

Gus: So you don't see any move towards more centralized power? For example, the European Union?

Bryan: So look, there is some such move, although it's not even clear what the net is. We have a lot more countries than we had 50 years ago. So that's a move away from centralization. When we actually look at what international organizations are able to accomplish, the ones that bring in a wide variety of countries are underwhelming.

The United Nations. It brings almost every country in, and then precisely because every country is in, they barely agree on anything and they don't do very much. On the other hand, when you have alliances that are selective, like NATO, they're able to accomplish more, but at the expense of creating enemies who are then saying, "hey, we're not part of this, and you're a threat to us".

That's a lot of what's going on with Ukraine: is Ukraine gonna join NATO? Who is Ukraine joining NATO against, if not Russia? And so as to what the net effect of that is, it's just quite unclear. In terms of creating alliances where there's broad agreement among a wide range of countries, which is the whole point of trying to get to a world government - to get the big disagreements in line - I don't see much sign that any of that has worked.

Now, secondly, I would just say that, you may disagree, but to my mind it is just very hard to argue that any of the threats that effective altruists are worrying about are anything comparable to nuclear war. So nuclear war, it could happen today. The missiles could be on their way right now for all we know, and if there was a full launch, then that could plausibly kill billions through not just the direct effects, but of course the massive destruction of the entire global economy.

So you put all that together and you could easily see that killing billions of people. So I mean, to my mind, worrying about things like AI risk or an engineered pandemic, when we've got nuclear weapons is just a strange allocation of mental resources.

Perhaps not so strange when we realize we have had nuclear weapons for a long time and we're bored talking about them. It seems just intractable to go and get countries to go and do much about it. In fact, it seems like we're going to get proliferation. We've been getting proliferation and it's probably just going to get even more proliferation. So let's go and think about some newer problems where we have no concrete proof they'll ever actually be serious, but still. It's more interesting to talk about. So I think a lot of this is motivated more by the entertainment value of talking about problems.

Totalitarianism

Bryan: Now, in terms of what I've written about the totalitarian threat. Yeah. So a lot of this is inspired by George Orwell's 1984. And you may remember the plot of 1984, but if you don't, I'll give you the brief version. In 1984, we don't have a world government. We have three mega governments. We have Oceania, which encompasses the United States, the Americas and the former British empire. We have Eurasia, which is basically the old Soviet bloc plus Western Europe. And then there's Eastasia, which is China, Japan, and then Indonesia sucked in there. And then there are some areas that these three powers are fighting over.

But in any case, the basic setup is that each country has basically become like all closed off and there's no longer any real serious long-term risk of any global change. And therefore it's possible to have permanent stagnant tyranny.

As Orwell points out, a lot of the reason why it is dangerous historically for a tyranny to be stagnant is that the rest of the world is not stagnant. And then you wind up losing relative importance and also people in your country find out things are better in other countries and that's demoralizing, or maybe they start trying to leave the country. So anyway, if you take this story about what limits tyranny, historically, namely just the presence of other countries you're competing with and other options and knowledge of other options, then if you flip that around and say if there was only one country, then most of these pressures against having a truly crushing stagnant tyranny would go away and then it's plausible that it could actually just last permanently.

World government does not have to become totalitarian. It's more along the lines of, what's the risk? So say the risk of a crushing tyranny that just stifles all human progress. As long as we have a wide range of power centers on earth, then that's super unlikely. If we have world government, then I think that goes up from a really slight risk to at least a medium risk.

And the other thing about this kind of tyranny is, once it gets in place, then it might be super durable. It could last for thousands of years, tens of thousands of years. It's pretty hard to say: in the absence of any external pressure, how long would the North Korean dictatorship last? Already, they've managed to cut out almost all external pressure, and now with their nuclear arsenal, in a way that might just give them near immortality, actually.

Does Kim Jong-un rule North Korea until his death? I think it's very likely. Actually, I'll give that 80%. He might be assassinated or die early, or hopefully he'll just eat, drink and smoke himself into an early grave. But he's got a stranglehold over the system. They've perfected, or at least come close to perfecting, the engineering design of tyranny.

Engineered pandemics

Gus: One criticism you've made of the rationalist community, which also applies to the effective altruism community is that we're too likely to accept these sci-fi scenarios. For example, of worrying about unsafe AI. Would you characterize the risk of engineered pandemics as a sci-fi scenario also?

Bryan: Yes. Not as fanciful, but won't it be deadly to the people that created it as well? That's gotta be a concern to whoever is doing it. Say, "wait a second, we're not going to be able to keep this thing out of our own society". Now again, you could imagine it being used more as a doomsday weapon where you're like, "we will unleash the ultimate germ", or if you have very high hopes for this kind of thing, you can imagine we're going to engineer one where our people are immune. We've got the North Korean gene, so we all have immunity, and we've come up with a virus that everyone who isn't North Korean is vulnerable to, something like that. Or, I guess a little less fanciful, you have an emergency vaccination program at the same time that you do a simultaneous launch everywhere else.

This is one where, when you really start thinking about it, isn't that just going to lead to the country that did it getting nuked? If that's the response, then what's the difference between that and the country just going and launching nukes on other countries? It really is hard to understand what's so great about it as a strategy for a Machiavellian, diabolical genius to go and do whatever horrible plan they're planning on doing.

Fanciful scenarios

Gus: Is your basic objection that we have no reference class? We have nothing to extrapolate from and so it's difficult to establish a subjective probability?

Bryan: Not quite. So I am not a fan of using those kinds of methodological rules and saying, "there's no reference class, so you just can't say what the probability is". Once you're there, you can't say that it's small either.

I would say rather the reference class is precisely in fanciful scenarios. I would think of that as the reference class. So if you just go through human history and say, wow, what are the kinds of scenarios that inspire the human imagination? The kind of stories that you can tell where little kids are on the edge of their seat, and then say, out of all these fanciful scenarios, what fraction have ever come to pass?

And the answer is not zero. So, human flight. For most of human history, this was fantasy, that human beings could fly like birds. And we did it. Alright, alright. We'll score one for human ingenuity that we were able to get aircraft, and then going to the moon. All right. So space travel. Yeah. That's pretty impressive. Although, when you think about what we've actually been able to do with it, then it's pretty disappointing so far, but we got satellites. But then you just go down the list of other things that human beings have dreamed of for all of human history, the kinds of powers that you attribute to gods and demi-gods and heroes.

Immortality. That's one. We haven't done too well there so far. So someone says, yeah, I've got the immortality recipe. On the one hand, like important, if true. On the other hand, almost certainly false. Like invisibility, right? This is another one that's been around since before the Ring of Gyges. Oh, I've got a ring of invisibility, wouldn't that be great. So imagine that, yeah, that would be really cool.

When I was a teenager I was into fantasy rather than sci-fi. So I was a Dungeons and Dragons person rather than a, especially not a hard sci-fi person. But if you just go through those books and just say, what are the things that people want, like the ability to shoot fire out of your hands by saying some words, what are the odds we'll ever get to that? Yeah. That's not going to happen.

Prediction markets for disasters

Gus: Do you think we could use prediction markets to estimate the probability of various disasters?

Bryan: Yeah, totally reasonable to use prediction markets. Not perfect, but the least bad thing we've got. So if we had prediction markets that were to go and say that there was a 10% chance of very serious AI risk, that would be enough to change my mind a lot. The only reason why it wouldn't change my mind even more is that I have actually occasionally looked into standard betting markets when I thought the odds were wrong. And then I discovered, oh my God, the transaction costs are so high, I can't make money. Even though the odds seemed way off.

So about a year ago, I just thought that the relative odds for Biden and Trump being president in 2020 and 2025 were really wrong. Trump was way too low. Let's see, I actually think Trump was a bit too low, and Biden, I thought, was way too low. And then I did look into betting, and I found out that not only are there the official transaction costs, but also, since US tax law would not recognize me as a professional bettor, I would have to pay tax on winnings but would not be able to offset losses from losing bets.

And once I did that math, I said, oh wow. Even though I think that these betting odds are off by over 10 percentage points, I would still expect to lose money by betting on my belief here. So that's the main reason why I wouldn't be quite as convinced by betting markets as it seems like I should be: those transaction costs, which I didn't really appreciate until more recently.
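A rough sketch of how the transaction costs and tax asymmetry Bryan describes can flip the sign of a bet's expected value; all the specific numbers here (subjective probability, market price, fee, tax rate) are hypothetical, chosen only to illustrate the mechanism:

```python
# Hypothetical numbers: the bettor thinks an event is 60% likely while the
# market prices it at 50 cents per $1 contract, i.e. the odds look "off" by
# 10 percentage points. Winnings are taxed, but losses cannot be deducted
# (the asymmetry Bryan mentions for non-professional bettors), and the
# platform charges a fee on winnings.
p_subjective = 0.60   # bettor's probability (hypothetical)
price = 0.50          # cost of a contract paying $1 if the event happens
fee = 0.05            # platform fee on winnings (hypothetical)
tax = 0.35            # marginal tax rate on winnings (hypothetical)

net_win = (1.0 - price) * (1 - fee) * (1 - tax)  # after-fee, after-tax profit
net_loss = price                                 # full stake lost, no tax offset

expected_value = p_subjective * net_win - (1 - p_subjective) * net_loss
print(f"Expected value per $1 contract: {expected_value:+.3f}")  # about -0.015
```

Even with a 10-point edge over the market, the after-fee, after-tax expected value comes out slightly negative under these assumptions.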

Gus: So crowdsourced estimates of risks from various disaster scenarios seem like a very valuable thing to have. Why don't we have well-funded, high-functioning prediction markets? Is it mostly a regulatory story?

Bryan: Partly regulation. I think a lot of it is just lack of interest. So, if you look at the kind of betting that emerges, even when it is illegal. Sports betting, you can make it illegal, and there's still a massive black market. Why? Because people love sports betting. There's just a large market of people who feel very passionately about sports and they think they've got answers and they want to put money on it.

Whereas for these kinds of things, most people have never even thought about the issue. So it's just not a major hobby. It is definitely the kind of thing where the number of people who care about global catastrophic risks is small. There are a number of super rich people who care a lot.

So if they were inclined, they could subsidize the markets and basically say we just pay 20% more than whatever your official winnings are, as a subsidy, just to get people into the market and get it going. And then maybe they could even use their high profile to generate more excitement. So if Elon Musk says, I think this is the coolest thing in the world, maybe it would become cool as a result of him calling it cool. Maybe, I don't know, is Elon that cool? Maybe.

Gus: He at least has an enormous audience. So he might be able to do something like this.

Objections to utilitarianism

Gus: Okay. This is the Utilitarian Podcast, so we should talk about why utilitarianism is wrong, in your view. You have this argument from conscience, where you say that utilitarianism is hyper demanding, but utilitarian people are not spending every waking moment maximizing utility. So I guess the objection is that utilitarians don't take their own view seriously, and so maybe they don't even believe their own theory. Is that an accurate summary?

Bryan: Yeah, but it's useful to give some background. So probably the most popular argument against utilitarians is just that they're hypocrites. You can call this the argument from hypocrisy: you say that we should give away virtually all of our surplus wealth, but you don't. And that is at least worth pointing out and thinking about. Now, it is always open to the utilitarian to say, "yeah, I'm just an evil hypocrite, that's right, but it doesn't make the view wrong".

That's why I did construct this other argument, which is similar, but different in a subtle way. So I call it the argument from conscience. And I say, look, let's go and find the most morally scrupulous utilitarians we can find. It does seem like there are some people who really care about doing the right thing.

I've just met you. So I don't know. So perhaps you would put yourself into this group. So I've definitely met some people where I would be very surprised if they ever did anything they thought was morally wrong. I just observe them. They put their conscience first. And so I know these people.

And then I look at those people and say: hey, you're a person that seems hyper-scrupulous, someone who seems like they would care much more about doing the right thing than about personal convenience or your own money or anything else. And I see that you're not following your own principle.

That to me is a much better argument against their official view, because this is one where they can't easily say, "yeah, I'm just an evil hypocrite". No, you're not. You're not an evil hypocrite. I've been watching you very carefully, and you are not just thoughtful, you have a lot of integrity, and you're still not doing it.

You yourself actually don't find the view convincing. And that's why you're not doing it. It's one that you are saying is a moral obligation, but you just haven't really followed through on it because it doesn't seem plausible even to you.

It's not one where you actually think that you are killing people by failing to donate all your surplus money to needy strangers. It's rather one where on some level you're saying, "sure seems different than murdering people".

Gus: Yeah. So one thing to say is that directly aiming to maximize might be self-undermining. It might actually, just as a factual matter, be better to take some time off. But in general, I definitely take your point. I think this objection works if you're an anti-realist in metaethics; I don't think it works if you're a moral realist.

The way I see this is that there's a bunch of suffering out there. This suffering is bad. It doesn't matter whether we feel like it's too demanding to do something about it. And the analogy we could take here is that, imagine I'm a cosmologist, and my theory of the universe is that it's enormous and filled with planets and stars that we could explore, but that would simply take too much time and too many resources, so maybe our theory of cosmology is wrong. We would never reason this way. And so we shouldn't reason this way about moral theories either.

Bryan: So again, the argument I'm not making is just, "it's too hard, I don't feel like it". Rather it's one where, does it actually seem morally obligatory to you when you really think about it? Or is this just something that you're deducing stubbornly from a theory and refusing to consider that maybe the theory is wrong?

The main thing that we know about moral reasoning is that all you can really do in order to test a moral theory is to come up with hypotheticals and ask, does it actually correctly predict what the hypothetical says? Of course, a lot of what people are doing is trying to come up with hypothetical counterexamples to theories.

Now, it is always open to you to just say my fundamental theory is so plausible that it's more plausible than every objection that's ever been developed. You can say that. And if a person says that, I don't know what I would say to change their mind. All I would say is I've yet to actually meet the human being who really thinks this.

No matter how much they say they're utilitarian, when you start going and presenting specific counterexamples, and there is a famous list of counterexamples, almost everyone who hears it says "those counterexamples seem pretty good".

The only reply I've ever heard that seems to at least be promising is to say: all right, yes, that's a moral illusion. It's a moral illusion. Because human beings evolved in a certain way, it's just very hard for us to accept the truth that all suffering counts equally, and that's the only thing that matters, and all other things are unimportant except for suffering.

And then someone goes and presents these counterexamples that evolution has honed me to care about, but I need to steel myself and just accept that those counterexamples are meaningless.

Gus: That is something akin to what I would say. In general, I reject the methodology of doing thought experiments and testing our intuitions against them. I have been fighting this out with Michael Huemer for hours, so maybe, maybe we shouldn't.

Bryan: What makes the starting premise so damn plausible? All suffering counts equally? What about Hitler's suffering? That doesn't seem important. And not only that, it seems good to me, actually. I think Hitler should suffer in hell for all eternity. So that seems like counterexample number one. And then of course the classic utilitarian answer is "well, that's just an illusion generated by your desire to have deterrence against future Hitlers".

No, it's not. It doesn't seem like it. Even if I knew with absolute certainty that no one would ever discover that I was torturing Hitler to punish him for what he did, I'd still think that was not just an option, I think it's obligatory on me. I've got to turn the punishment dial on Hitler up to the maximum level. What's the highest level of punishment that you can inflict on a being? Give it to Hitler for eternity. That's what he deserves.

Gus: I think this complex moral reasoning is much less secure than our experience of suffering as bad. That's my general point.

Bryan: How about: suffering is a bad thing, usually. You know, it would be odd to say that you actually experience the badness. I experience that it hurts. Is it actually bad that it hurts? I'd say that's a plausible additional premise, although what's not so plausible is that this is the only bad thing in the universe.

Gus: I do actually accept the view that we directly experience badness.

Bryan: Is this some analytic view, that pain and bad just mean the same thing, or is it like a synthetic view that this is a...

Gus: It's an analytic view, but in a specific way. So it's not like we can open the dictionary and find out that painfulness is badness. It's that the concept of badness is formed by our experience of pain.

Bryan: How many experiences have you double checked to see? Did any of these other experiences have anything to do with my formation of the concepts of good and bad? Couldn't you do a thorough survey of all your experiences to say, were there any ones that didn't involve suffering that still had something to do with the idea?

Gus: I could, but I find that this actually matches my experience quite perfectly.

Bryan: But how hard have you sought? Sought for the counterexamples. It sounds like you also have some view that you're not going to count counterexamples. So.

Gus: Exactly, yeah. I don't accept counterexamples from intuitions in general and I don't...

Bryan: It sounds like you would accept them from experience.

Gus: Yeah, that's different for me. I think an intuition is an intellectual construct. Badness is experienced more like a color than like an intuition about a math problem going a certain way, for example.

Bryan: So when me and a Nazi disagree about whether it's bad to make Hitler suffer in hell for all eternity, it's similar to disagreeing about what color of clothes he's wearing? Seems highly intellectual. Seems like we can both agree on the facts. Hitler is screaming in agony, and I'm saying "good, make him scream more". And the Nazi says, "The Führer's suffering. His suffering counts at least as much as anyone else's".

And then I'm saying, "yeah, yeah, it counts negatively". Like it's good for Hitler to suffer, make him suffer. After what he did, the more the better. I am the avenging angel of justice here.

Gus: I do think the Hitler example can be debunked. We have extremely strong feelings about Hitler for obvious reasons. But if you isolate the person of Hitler and imagine no instrumental effects from whether he feels pain or not, I believe that Hitler's pain would...

Bryan: Oh, that is of course the utilitarian position. Aren't you there doing what you just said you shouldn't do, which is do these counterfactual thought experiments? Let me tell you, if I do that one, I say, "oh, it's awesome, it's definitely awesome". What I've talked about with Tyler Cowen a lot is: you're stuck on a desert island, you're never going to be found, it's the end of World War II. Hitler washes up. What do you do?

And I say, yeah, torture him. Yeah, definitely. I'm the avenging angel here. I can do this.

Predicting future morality

Gus: My general interest in utilitarianism is trying to systematize ethics and trying to predict what we will find ethical and unethical in a hundred years. As long as utilitarians are not attempting to basically control the world to their liking, then I think you should find it relatively harmless, this attempt to predict what the future will find ethical and unethical.

Bryan: Right. It's a totally different standard than what you've been saying, but yeah. Let me put it this way: if prediction markets were to come up and they say, in a hundred years people will not agree with you, the world will not get more utilitarian in the next hundred years, moral views will change, but they'll change in ways that have nothing to do with utilitarianism. Would that change your mind at all?

Gus: I actually do think it would decrease my credence in utilitarianism. Yes.

Bryan: Moral counterexamples don't, but a prediction market's counterexample would sway you somewhat.

Gus: A person like Jeremy Bentham has predicted many of the things we care about today. And if that's an argument for utilitarianism, then the reverse would be an argument against utilitarianism.

Bryan: I'll agree that Bentham predicted a bunch of things. And so some of the things that Bentham was in favor of have come to pass. But there's a bunch of other things that are quite the opposite, and things have moved in an anti-utilitarian direction. Like who could have anticipated that putting your pronouns in your email would become a moral norm? I'll just say it's bizarre.

It's something that would only matter to one person in a thousand or 10,000, if even they care. And yet there's a bunch of other people who are very animated about this issue and get very upset if you don't want to do it. And the utilitarian argument of, hardly anybody cares about this, so why are you mad? Why are you trying to increase suffering by making people unhappy when they don't do a burdensome thing that would be of interest to almost no one anyway? That kind of argument is not going to win you a lot of friends these days. And yet it seems like a pretty good utilitarian argument.

In terms of other ways we've moved in an anti-utilitarian direction, just think about the whole movement of Marxism-Leninism in the 20th century and the related movements of trying to take farmers' land away from them and expropriating private property. I think Bentham would have been strongly opposed to this for obvious reasons.

And then we had a period of about 80 years where this stuff was a big deal and made enormous strides, and then we moved back, and then it seemed like that was dead. But now a bunch of countries seem like they're moving back to it for no reason at all. Like for less than no reason, other than the reason of, well, we've forgotten the Soviet bloc ever existed.

Gus: The thing I would actually accept is that, conditional on humanity becoming more informed and more rational, if people then disbelieved utilitarianism to a higher degree, this should cause me to decrease my credence.

Bryan: But isn't that circular, because you just won't call them rational if they're not utilitarian?

Gus: No, I think we could define rationality separately from support for utilitarianism. Definitely. Yeah.

Economic thinking

Gus: I would like to ask you about economic reasoning in general. I'm interested in, for example, whether having a minimum wage is a good idea. And then I look in an economics textbook, and it tells me that a minimum wage will increase unemployment among low-skilled workers. Okay, that's fine.

Then I look in the empirical literature and I find various meta-analyses that go in different directions. And I look at surveys of experts, and these surveys are also inconclusive. So what do I do with economics as a discipline, if it seems like I now need to get a PhD and do a research career to find out the answer to this question?

Bryan: Yeah, that is a very fair question. I would just start by saying that questions like this are ones where, once you hear the argument, it should give you at least a fairly strong prior.

Once you understand the idea that when you raise the price of labor, this is going to reduce the amount that people want to hire. That's very similar to how, if you raised the price of asparagus, that would change the amount people want to buy. You raise the price of bookbags, it'll change the number of bookbags people want to buy.

So this is one where, once you hear the general logic of it, the next step is to say: all right, that's the pure logic, but does it actually fit with all of my firsthand experience and introspection?

As long as you're calm when you think about it and you're not trying to get a certain answer, this is one where you say: yeah, that does make sense. So that gives you the starting point. Then the next question is: when people have gone and tried to do actual academic research on it, what have they found? And that's where I would say it makes sense to use that research to do a Bayesian update on what you got when you first understood the idea and applied common sense to it.

I agree that actual empirical research on the minimum wage, narrowly construed, has been more mixed than I would've thought. But I will still say, you know, it makes so much sense. And it's not just that it seemed to make sense (that alone would be a factor), but rather that it seems very hard to believe that, if you're running a business, this would not be a very important factor in how many people you wish to hire.

It sure seems like even if the research is mixed, we should basically stay at least fairly close to our original view on this (or, you know, not the original view, but the view after learning the very basics of it). And then the next step is to ask: are there reasons why people would resist the basic reasoning? Yeah, there's a really obvious reason, because it's such a bitter conclusion: it goes against the popular ideology that says we can just pass laws to make workers' lives better.
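To make the Bayesian-update point concrete, here is a minimal sketch in Python. The 0.9 prior and the likelihood ratios are purely illustrative assumptions, not numbers from the episode; the point is only that a genuinely mixed literature has a likelihood ratio close to one and therefore barely moves a strong theory-based prior.

```python
# Illustrative sketch of updating a theory-based prior on mixed evidence.
# All numbers are hypothetical assumptions chosen for the example.

def update(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Suppose textbook logic plus introspection gives a 0.9 prior that a higher
# minimum wage reduces low-skill employment.
print(update(0.9, 1.0))  # ~0.90: mixed evidence (ratio ~1) barely moves the prior
print(update(0.9, 0.5))  # ~0.82: even mildly unfavorable evidence leaves it high
```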

So there's that. Now, the other thing I would say, in terms of understanding what research says, is that it's always very helpful to cast a wider net. Don't just ask what papers on the minimum wage say; ask what papers in general say about the responsiveness of labor to its price.

I wrote a post called "The Myopic Empiricism of the Minimum Wage", where I go over a number of other relevant literatures. One of them is the big literature on European unemployment, which is very consistent with the textbook view: an important reason why most European countries have had higher unemployment than the US is that they have pushed harder on getting wages up.

It also fits within Europe, where there are a few outlier countries that have been less regulated, and these are countries with lower unemployment. Even more impressive, there are a couple of countries in Europe that have moved from high regulation to lower regulation, and they've moved from high unemployment to lower unemployment. Germany, for example.

So these are all things that I would have in mind. Now, as to what else I would say about this: I think my main view is that if you're not going to get a PhD in the subject, then the most helpful thing is, first of all, to learn the textbook. Second of all, think about the extent to which the textbook actually fits with common sense. Again, not common sense in the sense of "what does my ideology tell me I'm supposed to believe", but: if I were an employer, does it make sense that I would respond this way? When I talk to other people about this, does the reaction I suggest seem strange?

And honestly, to me, one of the most convincing arguments here is that it's common on job applications to have a little line that says "salary requirement". Nobody puts a million dollars an hour as their salary requirement, because almost everyone thinks "hmm, that would probably affect my employment prospects". Which says to me that it is not counterintuitive to think that pushing wages up reduces employment; rather, it is emotionally unappealing, but totally intuitive.

Start with the textbook, check the textbook against common sense, and then try to cast a wide net for what kinds of scholarly evidence count, rather than just asking what the evidence on this particular question is.

Gus: So common sense, sanity checks. Do they weigh more heavily as evidence than the empirical literature, for example?

Bryan: Honestly, I would say that if you really want to get to the truth, then yes. It is hard, because the appeal to common sense does give people a big excuse to just say "whatever my ideology says is common sense".

But if you're honest with yourself, you will realize that there is a difference. So for example, there are a number of things that I wish were not responsive to incentives, but I just can't believe that they are not responsive to incentives.

So for example, I favor drug legalization. It would be very convenient for me if getting rid of drug laws would not lead to more drug abuse, but that just seems ridiculous to me. Of course it's going to lead to more people doing it, if you allow a free market where, plausibly, the price will fall by a factor of 10.

Or when people say that immigration laws don't actually work. It would be so easy for me, as a proponent of open borders, to say "yeah, you're totally right, they don't work". But that's crazy. Of course going and harshly punishing immigration keeps out immigrants. In a way, that is why I wrote the book: because I think that behavior would change a lot, and I couldn't just take the easy rhetorical route of saying "it doesn't even matter what you think about immigration, because the laws don't work". I'll just apply textbook plus common sense: first of all, that's not what the textbook says.

Second of all, that just sounds like total wishful thinking. It sounds like a pack of lies that you're telling people to trick them into doing something that they would otherwise not want to do. Yeah, that's where I would come down on this. And honestly, I do think that there are a few, really just a few, very unpopular slopes in economics where there is an enormous amount of brainpower being devoted to saying: let's find that they aren't there.

I'm not saying it's conscious, or maybe it is conscious, but the minimum wage is the one that is so outrageous, because even the economists who work on it will say, "yeah, it's not really a very important issue". So then why are you working on it? "Well, the conventional textbook view just rubs me the wrong way, I don't like it." Suspicious.

Labor economics

Gus: Okay, Bryan you have a new book called "Labor Econ Versus the World", which is actually, if I understand it correctly, a compilation of blog posts.

Bryan: All right. So I've been blogging for EconLog for 17 years, and I like to think that I've done a number of pieces of lasting value, most of which were not going to be read by people, because hardly anyone is going to scroll through 17 years of a blog and see which ones are good.

So we had the idea to create a series of eight new books, which would thematically select my very best pieces and put them all together between two covers. I also talked to some friends; Mike Huemer had really good experiences with Amazon self-publishing. And through my graphic novel work I've met artists, so I had a very good artist to do the covers.

The very first book in this eight-book series is now out. It's called "Labor Econ Versus the World: Essays on the World's Greatest Market", the greatest market being the market for human labor. Which, contrary to what people worried about automation expect, I think is going to remain the greatest market in the world for a very long time to come.

And what I do in the book is four things. First of all, I talk a lot about labor regulation in general and the neglected harm of labor regulation. One argument that I make, which I think should be particularly interesting for utilitarians, is in a piece called "The Joy of Market-Clearing Wages".

There are many people who have said: "sure, Europe has higher regulation, which caused higher unemployment, but nevertheless Europe treats people like human beings, whereas the United States, without this regulation, is degrading, and so it's worse". And what I say in this essay is: look, we've got some great happiness research saying that human beings find unemployment per se to be horrible.

Even if you completely make up for the loss of income, when people are unemployed they just feel their lives are meaningless, they have nothing to do, and they have no place in society. And so I say the US system is actually much better for human happiness, because the marginal gains from the extra wages or benefits that you get from the European system just aren't comparable to the welfare losses of locking people into long-term unemployment.

So I say the US system of lower regulation and low unemployment is really the better one overall. So that is one section. I do talk quite a bit about the minimum wage there; I think my piece on the myopic empiricism of the minimum wage is in there. There is also a piece that I did on labor demand elasticity in general, what the evidence on that is and what it means. This is a number almost everyone thinks is super boring, even though it is highly relevant to almost all labor market policies. I think in the piece I mention that there's one mention of labor demand elasticity on Twitter per month, even though this is a crucial variable for understanding so much of what policy does.
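As a rough illustration of why this number matters, here is a minimal sketch. The elasticity of -0.5 and the 10% mandated wage increase are hypothetical values chosen for the example, not estimates from the book or the episode.

```python
# Illustrative sketch: how the labor demand elasticity maps a wage floor
# into a predicted employment change. Numbers are hypothetical assumptions.

def employment_change_pct(elasticity: float, wage_change_pct: float) -> float:
    """Constant-elasticity approximation: % change in employment = elasticity * % change in wage."""
    return elasticity * wage_change_pct

# With an assumed elasticity of -0.5, a mandated 10% wage increase implies
# roughly a 5% drop in employment among the affected workers.
print(employment_change_pct(-0.5, 10.0))  # -5.0
```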

Then I have a section on immigration called "Open Borders". I have a section on education, fleshing out a lot of my ideas on what's called the signaling model of education. And finally I have a last section of self-help called "The Search for Success". Probably one of the more notable features in there is that I talk about research on the success sequence.

This is some very common sense research that says that there's a pretty simple and easily doable formula for avoiding poverty, which amounts to: finish high school, work full time and get married before you have kids. And in the US if you just follow these three simple steps, your odds of being below the poverty line are like two percent.

What is striking about this is that it's such an easy formula to follow. This doesn't say: "Wanna avoid poverty? Just go to MIT, be first in your class, win a Nobel prize, and then you won't be in poverty". Great, thanks a lot. This is something that's very doable for almost everyone. You don't need to be smart to do it. You just need to follow some pretty basic rules.

Worker wellbeing improvements

Gus: You start out this Labor Econ Versus the World book by listing a number of common ideas about how we have improved the wellbeing of workers. And then you have an alternative explanation or debunking, where you explain the economics of why this increase in wellbeing actually happened. So maybe you could talk a bit about that.

Bryan: Yeah, sure. And again, by the way, this is something where I think basically all economists, if you really pushed them, would say I'm right. They might say I'm 2% wrong, 98% right. But yeah, 98% right.

So, things like: why is it that workers are so much richer today than they were during the Industrial Revolution? If you go and read most American history textbooks, they'll just talk a lot about regulation and say regulation saved workers from this horrible life, and they barely even mention rising productivity. And it sounds like this is saying that if we had just taken modern regulations and imposed them in 1840, workers could have immediately been enjoying our standard of living back then.

Which, if you think about it, is just crazy on multiple levels. One of them is that you can just look at total production and say: look, even if regulation managed to divide all production at the time perfectly equally, workers would still be dirt poor compared to us. And another is: look, if you go to 1840 and try imposing modern labor standards, it's not going to lead most workers to have a great job. It's going to lead most workers to be unemployable.

It would be like saying today that I have to get paid a million dollars an hour. So that really does not make much sense. The story that makes sense is that worker productivity improved a lot, due to improvements in technology, but also improvements in management, social organization, trade, and so on.
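A back-of-the-envelope sketch of the "equal division" point above. The per-person output figures are order-of-magnitude assumptions for illustration only, not data from the book.

```python
# Illustrative arithmetic: even perfect redistribution cannot exceed average output.
# Both figures below are rough, assumed values in today's dollars.

output_per_person_1840 = 3_000    # assumed annual output per person in 1840
output_per_person_today = 60_000  # assumed annual output per person today

# Equal division in 1840 caps everyone at the 1840 average, which is still
# a small fraction of today's average income.
print(output_per_person_1840 / output_per_person_today)  # 0.05, i.e. ~1/20
```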

And all of this meant that a worker was worth a lot more, and competition then brought wages up. So that's one example. And this is one where I think that, if you really pushed labor economists and asked "if we put modern labor regulations in the world of 1840, could workers have had our living standard of today?", virtually all of them would say no.

I think the main version of "not yes" would just be dodging the question and saying "that's not the question we should be asking". Can it be one of the questions we're asking? I'm asking it. Can we ask my question first? I think you don't want to talk about this because once you talk about it, you will be alienating a whole bunch of people that you want to keep as friends, and you don't want to tell them they're wrong. I do want to tell them they're wrong.

Economics of marriage

Bryan: So in the last part of the book, I talk quite a bit about the economics of marriage, both the direct economics of finding a partner but also the apparent effects of marital status on labor markets.

So what is quite striking is that the measured payoff for men of being married seems to be very comparable to the measured payoff for men of having a four-year college degree. And yet there is immense propaganda in favor of going to college and almost no propaganda in favor of getting married.

Now, obviously you could say the payoff for education is causal and the payoff for marriage is not. But at least when you put in a whole bunch of normal control variables, the payoff for marriage still seems to be very robust for men.

Gus: Do you have any idea of why this is? Why would marriage, yeah?

Bryan: There are probably a lot of things going on. One of them is probably that once you are married, you are accepting a bunch of responsibilities and saying: "look, I have people depending on me now, I'm going to try harder, I'm going to try to be the man that my wife wants me to be, that my kids need me to be". So that's one pretty obvious thing that would matter.

The way that we can actually get some identification here is that women seem to get a small negative effect of marriage on their earnings. So it's not purely that married people are more reliable, because that, it would seem, would raise wages for both men and women. Probably it has something to do with a gendered division of labor, where upon marriage both sides usually think a big part of what the man is supposed to be doing is advancing his career so that he can provide for the family.

And on the other hand, a lot of what women think they're doing is: "either I am getting ready to have kids, or I am playing a supporting role, or I'm taking care of the kids once we do have them". Of course, that matters a lot as well. So that's probably some of what's going on.

Another possibility, obviously, is just that employers correctly believe that married men are more reliable than unmarried men, and so they are more interested in hiring them for that reason. I can easily believe that was a big deal 50 years ago. It's a little hard to believe it's so important today, but I don't totally dismiss it. That might be true as well. So there's that.

Gus: Wouldn't it be very easy to fake being married? Just buy a cheap ring and put it on.

Bryan: Yeah, interesting that you should say that. Of course, in principle it seems pretty easy to fake having a four-year college degree too. Most employers don't check. For entry level jobs they do, but after a while, usually there's no verification of that. So my view on that is actually that it's easy to tell the initial lie, but maintaining the lie is a totally different story. In order for you to start telling these lies, you basically have to go and cut your personal life and your work life completely apart.

Because otherwise, eventually you're going to have some coworkers over at your house and they'll say, "hey, where's your wife?" Or they're talking to your friends and saying, "so what was it like going to Middle Tennessee State University with this person?"

Every now and then we do catch someone lying about their college credentials. There are probably a whole lot of people who don't get caught. So maybe it actually works, and it's just people's compulsive honesty that is preventing them from profiting by being dishonest. So there's that.

Gus: I'm definitely not suggesting lying, either about your college degree or your marital status, but I was just interested in it as a question of where the value comes from for the employer.

Bryan: Another thing that could be going on is that when a guy gets married, he stops being distracted by trying to find mates. He's got that locked down, and he knows where he's going home every night. So instead of trying to peacock, or to show off his attractiveness to a wide range of women, he just has one woman who knows pretty well how he's doing, and he's trying to make her happy with his progress.

Anyway, that seems like at least a plausible story of what's going on. And then there's also the story that women just make men shape up. There are a lot of stories about some guy who is not very reliable and has problems, and then he meets the right girl and she whips him into shape.

Gus: Yeah, isn't it a general feature of sociological research that married people just do better on a bunch of different variables?

Bryan: Right. Although married women don't do better on earnings. Women whip men into shape; men don't whip women into shape. Which also fits with the classic saying: men get married hoping that their wives will not change, women get married hoping that their husbands will change, and they're both generally disappointed. Probably an exaggeration, but divide by two and there's some insight there.

So that is "Labor Econ Versus the World", my latest book, and the next seven books will probably be coming out one book, every, say one to three months, until they're all done.

Gus: Bryan, thank you for doing this podcast with me.

Bryan: All right. Yes. Super fun. Thanks a lot. At least you have increased my happiness and reduced my suffering. So chalk one up for the utilitarian success, and I hope your listeners feel the same way.

Gus: Fantastic.
