Many (e.g., Parfit, Bostrom, and MacAskill) have convincingly argued that we should care about future people (longtermism), and thus that extinction is as bad as the loss of 10^35 lives, or possibly much more, because there might be 10^35 humans yet to be born.
I believe, with medium confidence, that these numbers are far too high and that, when fertility patterns are fully accounted for, 10^35 might become 10^10 (approximately the current human population). I believe with much stronger confidence that EAs should be explicit about the assumptions underlying numbers like 10^35, because concern for future people is necessary but not sufficient for such claims.
I first defend these claims, then offer some ancillary thoughts about implications of longtermism that EAs should take more seriously.
Extinction isn’t that much worse than 1 death
The main point is that if you kill a random person, you kill off all of their descendants too. And since the average person is responsible for roughly 10^35 / (current human population) ≈ 10^25 of those future lives, their death is only ~10^10 times less bad than extinction.
The general response to this is a form of Malthusianism: that after a death, the human population regains its previous level because fertility increases. Given that current fertility rates are below 2 in much of the developed world, I have low confidence that this claim is true. More importantly, you need very high credence in some form of Malthusianism to bump up the 10^10 number significantly. If Malthusianism is 99% likely to be correct, extinction is still only ~10^12 times worse than one death. Letting X be the harm of extinction (with X arbitrarily large): there is a 99% chance one death is negligible next to extinction, but a 1% chance it is 1/10^10 as bad, and 0.99 * (0 * X) + 0.01 * (X / 10^10) = X / 10^12.
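To make the arithmetic concrete, here is a minimal sketch of the comparison above. The inputs (10^35 future lives, ~10^10 people alive today, a 99% credence in Malthusianism) are just the illustrative figures used in this post, not estimates I am defending.

```python
# Sketch of the credence-weighted comparison above. All numbers are the
# illustrative figures from the text, not estimates.

FUTURE_LIVES = 1e35          # assumed future lives lost if we go extinct
CURRENT_POPULATION = 1e10    # rough order of magnitude of people alive today

# Baseline (no Malthusian rebound): a random death also removes that
# person's share of all future lives.
harm_one_death = FUTURE_LIVES / CURRENT_POPULATION      # ~1e25 lives
print(FUTURE_LIVES / harm_one_death)                    # ~1e10: extinction vs. one death

# Now suppose Malthusianism is 99% likely to be true: if true, the
# population rebounds and one death costs ~1 life; if false, it costs ~1e25.
p_malthus = 0.99
expected_harm_one_death = p_malthus * 1 + (1 - p_malthus) * harm_one_death
print(FUTURE_LIVES / expected_harm_one_death)           # ~1e12: extinction vs. one death
```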
There are many other claims one could make here. Some popular ones involve digital people, simulated lives, and artificial uteruses. I don't have developed thoughts on how these technologies interact with fertility rates, though the same point about needing high credence applies to them too. More importantly, if any of these or other claims is the linchpin of the argument for why extinction should be a top priority, EAs should say so explicitly, because none of these claims is obvious. Even Malthusianism-type claims should be made more explicit.
Finally, I think arguments for why extinction might be less than 10^10 times worse than one death are often ignored. I'll point out two. First, people can have large positive externalities on others' lives, and on future people's lives, by sharing ideas; fewer people means less of this externality accrues to each life. Second, the insecurity that results from seeing another's death might lower fertility and thus reduce the number of future lives.
Other implications of longtermism
I'd like to end by zooming out on longtermism as a whole. The idea that future people matter is a powerful claim and opens a deep rabbit hole. In my view, EAs have found the first exit out of the rabbit hole—that extinction might be really bad—and left even more unintuitive implications buried below.
A few of these:
- Fertility might be an important cause area. If you can raise the fertility rate by 1% for one generation, you increase the total future population by roughly 1% (assuming away Malthusianism and similar claims). If you can effect a long-term shift in fertility rates (for example, through genetic editing), you could do much, much better: roughly 100 x [1.01^n - 1] times better, where n is the number of future generations, which is a very large number (see the sketch after this list).
- Maybe we should prioritize young lives over older lives. Under longtermism, the main value most people have is their progeny. If there are ~10^35 people yet to live, saving the life of someone who will have kids is >10^25 times more valuable than saving the life of someone who won't.
- Abortion might be a great evil. See the first bullet: no matter your view on whether an unborn baby is a life, banning abortion could easily effect a significant and long-term increase in the fertility rate.
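Here is a minimal sketch of the fertility arithmetic in the first bullet, assuming away Malthusianism and using the size of the n-th future generation as a crude proxy for the long-run effect; the generation count and population size are placeholders, not forecasts.

```python
# Sketch of the fertility arithmetic above, assuming away Malthusianism.
# We compare the size of the n-th future generation under (a) a one-off
# 1% fertility boost and (b) a permanent 1% boost. Placeholder numbers.

BASELINE_GEN_SIZE = 1e10   # people per generation (placeholder)
n = 100                    # future generations considered (placeholder; the
                           # post argues the true number is far larger)

# (a) Raise fertility 1% for one generation only: that generation, and
# roughly every later one, ends up ~1% larger.
gain_one_off = 0.01 * BASELINE_GEN_SIZE

# (b) Raise fertility 1% permanently: generation k is 1.01**k times larger,
# so the n-th generation gains a factor of (1.01**n - 1).
gain_permanent = (1.01**n - 1) * BASELINE_GEN_SIZE

print(gain_permanent / gain_one_off)   # = 100 * (1.01**n - 1), ~170x for n = 100
```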
Thanks, this back and forth is very helpful. I think I've got a clearer idea about what you're saying.
I think I disagree that it's reasonable to assume that there will be a fixed N = 10^35 future lives, regardless of whether it ends up Malthusian. If it ends up not Malthusian, I'd expect the number of people in the future to be far less than whatever maximum is imposed by resource constraints, i.e., much less than 10^35.
So I think that changes the calculation of E[saving one life], without much changing E[preventing extinction], because you need to split out the cases where Malthusianism is true vs false.
E[saving one life] is ~1 if Malthusianism is true, or some fraction of the future if it's false. But if it's false, then we should expect the future to be much smaller than 10^35, so the EV will be much less than the ~10^25 your argument assigns to one life.
E[preventing extinction] is 10^35 if Malthusianism is true, and much less if it's false. But you don't need that high a credence to get an EV around 10^35.
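Here's a quick sketch of that split; the smaller non-Malthusian future (10^12) and the 99% credence are purely illustrative placeholders.

```python
# Sketch of the Malthusian / non-Malthusian split described above.
# All inputs are illustrative placeholders, not estimates.

P_MALTHUSIAN = 0.99        # credence that population rebounds after deaths
MAX_FUTURE = 1e35          # future lives if Malthusian (resource-limited max)
SMALL_FUTURE = 1e12        # hypothetical, much smaller non-Malthusian future
CURRENT_POPULATION = 1e10

# If Malthusian: one saved life is worth ~1 life; the future is near the max.
# If non-Malthusian: one saved life carries its share of a smaller future.
ev_save_one_life = (P_MALTHUSIAN * 1
                    + (1 - P_MALTHUSIAN) * SMALL_FUTURE / CURRENT_POPULATION)
ev_prevent_extinction = (P_MALTHUSIAN * MAX_FUTURE
                         + (1 - P_MALTHUSIAN) * SMALL_FUTURE)

print(ev_prevent_extinction / ev_save_one_life)   # ~5e34: a far bigger gap than 1e10
```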
So I guess all that is to say: I think your argument is right and action-relevant, except that I think the future is much smaller in non-Malthusian worlds, so there's a somewhat bigger gap than "just" 10^10. I'm not sure how much bigger.
What do you think about that?