
Many (e.g. Parfit, Bostrom, and MacAskill) have convincingly argued that we should care about future people (longtermism), and thus that extinction is as bad as the loss of 10^35 lives, or possibly much more, because there might be 10^35 humans yet to be born.

I believe, with medium confidence, that these numbers are far too high and that, when fertility patterns are fully accounted for, 10^35 might become 10^10 (approximately the current human population). I believe with much stronger confidence that EAs should be explicit about the assumptions underlying numbers like 10^35, because concern for future people is necessary but not sufficient for such claims.

I first defend these claims before offering some ancillary thoughts about implications of longtermism that EAs should take more seriously.

 

Extinction isn’t that much worse than 1 death

The main point is that if you kill a random person, you kill off the rest of their descendants too. And since the average person is responsible for ~10^35/(current human population), i.e. ~10^25, of the future lives, their death is only ~10^10 times less bad than extinction.

The general response to this is a form of Malthusianism: that after a death, the human population regains its level because fertility increases. Given that current fertility rates are below 2 in much of the developed world, I have low confidence this claim is true. More importantly, you need very high credence in a type of Malthusianism to bump up the 10^10 number significantly. If Malthusianism is 99% likely to be correct, extinction is still only 10^12 times worse than one death. Let X be the harm of extinction, with X arbitrarily large: there is a 99% chance one death can be treated as infinitely less bad than extinction, but a 1% chance it is only 10^10 times less bad, so the expected harm of one death is 0.99(0 * X) + 0.01(1/10^10 * X) = (1/10^12) * X.
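To make the arithmetic explicit, here is a minimal sketch of that calculation, using the illustrative 10^35 and 10^10 figures and the 99% credence from above (not estimates I'm defending):

```python
# Minimal sketch of the expected-harm calculation above.
# The figures are the illustrative ones from the post, not estimates.
N = 1e35   # assumed future lives if extinction is avoided
C = 1e10   # approximate current human population
p = 0.99   # credence that Malthusianism holds (a death is fully "replaced")

# Expected harm of one death, in lives: with credence p it costs ~1 life,
# with credence (1 - p) it also costs the person's ~N/C descendants.
ev_one_death = p * 1 + (1 - p) * (N / C)

print(f"extinction is ~{N / ev_one_death:.1e} times worse than one death")  # ~1.0e+12
```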

There are many other claims one could make regarding the above. Some popular ones include digital people, simulated lives, and artificial uteruses. I don't have developed thoughts on how these technologies interact with fertility rates, but the same point about needing high credence applies. More importantly, if any of these or other claims is the linchpin for arguments about why extinction should be a main priority, EAs should make the point explicitly, because none of these claims is that obvious. Even Malthusianism-type claims should be made more explicit.

Finally, I think arguments for why extinction might be less than 10^10 times worse are often ignored. I'll point out two. First, people can have large positive externalities on others' lives, including future people's lives, by sharing ideas; fewer people means the externality from each life is smaller. Second, the insecurity that might result from seeing another's death might lower fertility and thus reduce future lives.

Other implications of longtermism

I'd like to end by zooming out on longtermism as a whole. The idea that future people matter is a powerful claim and opens a deep rabbit hole. In my view, EAs have found the first exit out of the rabbit hole—that extinction might be really bad—and left even more unintuitive implications buried below.

A few of these:

  1. Fertility might be an important cause area. If you can raise the fertility rate by 1% for one generation, you increase total future population by ~1%, if you assume away Malthusianism and similar claims. If you can effect a longterm shift in fertility rates (for example, through genetic editing), you could do much, much better: roughly [1.01^n - 1] x 100% better, where n is the number of future generations, which is a very large number (a rough sketch of this compounding follows the list).
  2. Maybe we should prioritize young lives over older lives. Under longtermism, the main value most people have is their progeny. If there are 10^35 more people left to live, saving the life of someone who will have kids is > 10^25 times more valuable than saving the life of someone who won’t.
  3. Abortion might be a great evil. See 1…no matter your view on whether an unborn baby is a life, banning abortion could easily effect a significant and longterm increase in the fertility rate.
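A rough numerical sketch of the compounding in point 1, assuming discrete generations, a stable baseline population, and no Malthusian constraints (n = 500 is an arbitrary stand-in):

```python
# Toy model of point 1: a one-generation fertility boost vs. a permanent one.
# Assumes discrete generations, a stable baseline population, and no
# Malthusian constraints; purely illustrative.
n = 500        # number of future generations (arbitrary stand-in)
boost = 0.01   # a 1% increase in the fertility rate

# One-generation boost: that cohort and its descendants are ~1% larger,
# so total future population rises by ~1%.
one_generation_gain = boost

# Permanent boost: generation k is larger by a factor of (1 + boost)**k,
# so the gain compounds to (1.01**n - 1) by generation n.
permanent_gain_at_n = (1 + boost) ** n - 1

print(f"one-generation shift: +{one_generation_gain:.0%}")
print(f"permanent shift, by generation {n}: +{permanent_gain_at_n:.0%}")  # ~+14,000%
```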
Comments (7)



I think your calculations must be wrong somewhere, although I can't quite follow them well enough to see exactly where. 

If you have a 10% credence in Malthusianism, then the expected badness of extinction is 0.1*10^35, or whatever value you think a big future is. That's still a lot closer to 10^35 times the badness of one death than 10^10 times.

Does that seem right?

No, because you have to compare the two harms. 

Take the number of future lives as N and the current population as C.


Extinction is as bad as N lives lost.


One death is, with 10% credence, only approximately as bad as 1 life lost, because of Malthusianism. But with 90% credence, it is as bad as N/C lives lost.


So, plugging in 10^35 for N and 10^10 for C, the EV of one death is 1(0.1) + (N/C)(0.9) ~ 0.9 * N/C ~ 9e24, which makes extinction ~1.1e10 times worse than one death.


In general, if you have credence p in Malthusianism, extinction becomes 10^10 * 1/(1-p) times worse than one death.
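A small sketch of how that ratio moves with the credence p (same illustrative N and C; just plugging into the formula above):

```python
# Sketch of the formula above: with credence p in Malthusianism,
# extinction is ~C / (1 - p) times worse than one death.
# N and C are the thread's illustrative figures.
N, C = 1e35, 1e10

for p in (0.1, 0.5, 0.9, 0.99):
    ev_one_death = p * 1 + (1 - p) * (N / C)
    print(f"p = {p}: extinction / one death ~ {N / ev_one_death:.1e}")
# p = 0.1 gives ~1.1e10; p = 0.99 gives ~1.0e12 -- you need very high
# credence to move the ratio far above 10^10.
```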

Ah nice, thanks for explaining! I'm not following all the calculations still, but that's on me, and I think they're probably right.

But I don't think your argument is actually that relevant to what we should do, even if it's right. That's because we don't care about how good our actions are as a fraction/multiple of what our other options are. Instead, we just want to do whatever leads to the best expected outcomes. 

Suppose there was a hypothetical world where there was a one in ten chance the total population was a billion, and a 90% chance the population was two. And suppose we have two options: save one person, or save half the people.

In that case, the expected value of saving half the people would be 0.9*1 + 0.1*500,000,000 = about 50,000,001. That's compared to an expected value of 1 for saving one person. Imo, this is a strong reason for picking the "save half the people" option.

But the expected fraction of people saved by the two options is quite different. The "save half" option always results in half being saved. And the expected fraction saved by the "save one" option is also very close to half: 0.9*0.5 + 0.1*1/1,000,000,000 = about 0.45. Even though the two interventions look very similar from this perspective, I think it's basically irrelevant: expected value is the relevant thing.
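A tiny sketch of that comparison with the hypothetical numbers above (expected lives saved vs. expected fraction saved):

```python
# Toy world from the comment above: 90% chance the population is 2,
# 10% chance it is 1e9. Compare expected lives saved with expected fraction saved.
p_small, pop_small = 0.9, 2
p_big, pop_big = 0.1, 1_000_000_000

ev_save_one = p_small * 1 + p_big * 1                              # 1 life either way
ev_save_half = p_small * (pop_small / 2) + p_big * (pop_big / 2)   # ~50,000,001 lives

frac_save_one = p_small * (1 / pop_small) + p_big * (1 / pop_big)  # ~0.45
frac_save_half = 0.5                                               # always half

print(ev_save_one, ev_save_half)       # 1.0 vs ~50,000,000.9
print(frac_save_one, frac_save_half)   # ~0.45 vs 0.5
```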

What do you think? I might well have made a mistake, or misunderstood still.

Hmm, I'm not sure I understand your point, so maybe let me add some more numbers to what I'm saying and you could say if you think your point is responsive?

What I think you’re saying is that I’m estimating E[value saving one life / value stopping extinction] rather than E[value of saving one life] / E[value of stopping extinction]. I think this is wrong and that I’m doing the latter.

I start from the premise that we want to save the most lives in expectation (current and future lives are equivalent). Let's say I have two options…I can prevent extinction or directly stop a random living person from dying. Assume there are 10^35 future lives (I just want N >> C) and 10^10 current lives. Now assume I believe there is a 99% chance that when I save this one life, fertility in the future somehow goes up such that the individual's progeny are replaced, but a 1% chance that the individual's progeny are not replaced. The individual is responsible for 10^35/10^10 = 10^25 progeny. This gives E[stopping a random living person from dying] ~ 1% * 10^25 = 10^23.

And we’d agree E[preventing extinction] = 10^35. So E[value of saving one life] / E[value of stopping extinction] ~ 10^-12.

Interestingly E[value of saving one life / value of stopping extinction] is the same in this case because the denominator is just a constant random variable…though E[value of stopping extinction/value of saving one life] is very very large (much larger than 10^12).
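A quick numerical check of that last point (again just a sketch with the illustrative numbers from this thread):

```python
# Checks the claim above: with a constant denominator, E[A/B] = E[A]/E[B],
# while E[B/A] can be far larger. Numbers are the thread's illustrative ones.
N, C, p = 1e35, 1e10, 0.99   # future lives, current population, credence in Malthusianism

value_extinction = N                                      # the same in both scenarios
value_one_death = {"replaced": 1, "not_replaced": N / C}  # Malthusian vs non-Malthusian

e_one_death = p * value_one_death["replaced"] + (1 - p) * value_one_death["not_replaced"]
print(e_one_death / value_extinction)   # ~1e-12  (E[one death] / E[extinction])

e_ratio = p * (value_one_death["replaced"] / N) + (1 - p) * (value_one_death["not_replaced"] / N)
print(e_ratio)                          # ~1e-12  (E[one death / extinction], same)

e_inverse = p * (N / value_one_death["replaced"]) + (1 - p) * (N / value_one_death["not_replaced"])
print(e_inverse)                        # ~9.9e34 (E[extinction / one death], far more than 1e12)
```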

Thanks, this back and forth is very helpful. I think I've got a clearer idea about what you're saying. 

I think I disagree that it's reasonable to assume that there will be a fixed N = 10^35 future lives, regardless of whether it ends up Malthusian. If it ends up not Malthusian, I think I'd expect the number of people in the future to be far less than whatever the max imposed by resource constraints is, ie much less than 10^35.

So I think that changes the calculation of E[saving one life], without much changing E[preventing extinction], because you need to split out the cases where Malthusianism is true vs false.

E[saving one life] is 1 if Malthusianism is true, or some fraction of the future if Malthusianism is false; but if it's false, then we should expect the future to be much smaller than 10^35. So the EV will be much less than 10^35.

E[preventing extinction] is 10^35 if Malthusianism is true, and much less if it's false. But you don't need that high a credence to get an EV around 10^35.

So I guess all that to say that I think your argument is right and also action relevant, except I think the future is much smaller in non-Malthusian worlds, so there's a somewhat bigger gap than "just" 10^10. I'm not sure how much bigger. 

What do you think about that?

Edit: I misread and thought you were saying non-Malthusian worlds had more lives at first; realized you said the opposite, so we're saying the same thing and we agree. Will have to do more math about this.

This is an interesting point that I hadn't considered! I think you're right that non-Malthusian futures are much larger than Malthusian futures in some cases...though if, e.g., the "Malthusian" constraint is on digital lives or such, I'm not sure.

I think the argument you make actually cuts the other way. To go back to the expected value...the case the single death derives its EV from is precisely the non-Malthusian scenario (when its progeny are not replaced by future progeny), so its EV actually remains the same. The extinction EV is the one that shrinks...so you'll actually get a number much less than 10^10 if you have high credence that Malthusianism is true and think non-Malthusian worlds have more people.

But if you believe the opposite...that Malthusian worlds have more people, which I have not thought about but actually think might be true, then yes, a bigger gap than 10^10; I will have to think about this.

Thanks! Does this make sense to you?
 

We've talked about this, but I wanted to include my two counterarguments as a comment to this post: 

  1. It seems like there's a good likelihood that we have semi-Malthusian constraints nowadays. While I would admit that one should be skeptical of total Malthusianism (i.e. for every person dying another one lives because we are at max carrying capacity), I think it is much more reasonable to think that carrying-capacity constraints actually do exist, and maybe it's something like for every death you get 0.2 lives or something. If this is true, I think this argument weakens a bunch.
  2. This argument only works if, conditional on existential risk not happening, we don't hit Malthusian constraints at any point in the future, which seems quite implausible. If we don't get existential risk and the pie just keeps growing, it seems like we would get super-abundance, and the only thing holding people back would be Malthusian physical constraints on creating happy people. Therefore, we just need some people to live past that time of super-abundance to get massive growth. Additionally, even if you think those people wouldn't have kids (which I find pretty implausible -- as one person's preference for children would lead to many kids given abundance), you could talk about those lives being extremely happy, which holds most of the weight. This also

Side note: this argument seems to rely on some ideas about astronomical waste that I won't discuss here (I also haven't done so much thinking on the topic), but it seems maybe worth it to frame around that debate. 
