
Introduction

Several recent popular posts (here, here, and here) have made the case that existential risks (x-risks) should be introduced without appealing to longtermism or the idea that future people have moral value. They tend to argue or imply that x-risks would still be justified as a priority without caring about future people. I felt intuitively skeptical of this claim[1] and decided to stress-test it.

In this post, I:

  1. Argue that prioritizing x-risks over near-term interventions and global catastrophic risks may require caring about future people. More
  2. Disambiguate connotations of “longtermism”, and suggest a strategy for introducing the priority of existential risks. More
  3. Review and respond to previous articles which mostly argued that longtermism wasn’t necessary for prioritizing existential risks. More

Prioritizing x-risks may require caring about future people

I’ll do some rough analyses of the value of x-risk interventions vs. (a) near-term interventions, such as global health and animal welfare, and (b) global catastrophic risk (GCR) interventions, such as reducing risk of nuclear war. I assume a lack of caring about future people to test whether it’s necessary for prioritizing x-risk above alternatives. My goal is to do a quick first pass, which I’d love for others to build on / challenge / improve!

I find that without taking into account future people, x-risk interventions are approximately[2] as cost-effective as near-term and GCR interventions. Therefore, strongly prioritizing x-risks may require caring about future people; otherwise, it depends on non-obvious claims about the tractability of x-risk reduction and the moral weights of animals.

| Intervention | **Rough** estimated cost-effectiveness, current lives only ($/human-life-equivalent-saved)[3] |
| --- | --- |
| General x-risk prevention (funding bar)[4] | $125 to $1,250 |
| AI x-risk prevention | $375 |
| Animal welfare | $450 |
| Bio x-risk prevention | $1,000 |
| Nuclear war prevention | $1,250 |
| GiveWell-style global health (e.g. bednet distribution) | $4,500 |


 

Estimating the value of x-risk interventions

This paper estimates that $250B would reduce biorisk by 1%. Taking Ord’s estimate of 3% biorisk this century and a population of ~8 billion, we get: $250B / (8B * .01 * .03) = $104,167/life saved via biorisk interventions. The paper calls this a conservative estimate, so a more optimistic one might be 1-2 OOMs more effective, at ~$10,000 to ~$1,000 / life saved; let’s take the optimistic end of $1,000 / life saved as a rough best guess, since work on bio x-risk likely also reduces the likelihood of deaths from below-existential pandemics, which seem substantially more likely than the most severe ones.

For AI risk, 80,000 Hours estimated several years ago that another $100M/yr (for how long? let’s say 30 years) can reduce AI risk by 1%[5][6]; it’s unclear whether this percentage is absolute or relative, but relative seems more reasonable to me. Let’s again defer to Ord and assume 10% total AI risk. This gives: ($100M * 30) / (8B * .01 * .1) = $375 / life saved.

On the funding side, Linch has proposed a .01% Fund which would aim to reduce x-risks by .01% for $100M-$1B. This implies a cost-effectiveness of ($100M to $1B) / (8B * .0001) = $125 to $1,250 / life saved.
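To make this arithmetic easy to check and tweak, here is a minimal sketch reproducing the estimates above. The helper function and variable names are mine, not from any of the cited sources; the inputs are just the rough assumptions stated in the text.

```python
# Rough cost-per-life arithmetic for the x-risk estimates above (current lives only).
POPULATION = 8e9  # ~8 billion people alive today

def cost_per_life(total_cost, total_risk, fraction_of_risk_reduced, population=POPULATION):
    """Dollars per expected (current) life saved."""
    expected_lives_saved = population * total_risk * fraction_of_risk_reduced
    return total_cost / expected_lives_saved

# Biorisk: $250B for a 1% (relative) reduction of Ord's ~3% biorisk -- conservative case
print(cost_per_life(250e9, 0.03, 0.01))       # ~$104,167 / life

# AI risk: $100M/yr for 30 years for a 1% (relative) reduction of Ord's ~10% AI risk
print(cost_per_life(100e6 * 30, 0.10, 0.01))  # ~$375 / life

# .01% Fund bar: $100M-$1B to reduce x-risk by an absolute 0.01 percentage points
print(cost_per_life(100e6, 1.0, 0.0001))      # ~$125 / life
print(cost_per_life(1e9, 1.0, 0.0001))        # ~$1,250 / life
```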

Comparing to near-term interventions

GiveWell estimates it costs $4,500 to save a life through global health interventions. 

This post estimates that animal welfare interventions may be ~10x more effective, implying $450 / human life-equivalent, though this is an especially rough number.[7]

Comparing to GCR interventions

A less obvious potential issue with not caring about future people (compared to the near-term comparison above) is over-prioritizing global catastrophic risks (which might kill a substantial percentage of people but likely wouldn’t destroy all future value) relative to existential risks. I’ll just consider nuclear war here, as it’s the most likely non-bio GCR that I’m aware of.

80,000 Hours estimates a 10-85% chance of nuclear war in the next 100 years; let’s say 40%. As estimated by Luisa Rodriguez, full-blown nuclear war would on average kill 5 billion people but usually not lead to existential catastrophe. I’m not aware of numerical estimates of the tractability of reducing nuclear risk: if we assume the same tractability as biorisk above ($25B to reduce the risk by 1%), we get: $25B / (5B * .01 * .4) = $1,250 / life saved.
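The same back-of-the-envelope style, using only the assumptions named above (40% probability, ~5 billion deaths, and biorisk-like tractability of $25B per 1% relative reduction):

```python
# Nuclear war: ~40% chance this century, ~5B deaths if it happens,
# and biorisk-like tractability of $25B for a 1% (relative) risk reduction.
expected_deaths = 5e9 * 0.40             # probability-weighted deaths this century
print(25e9 / (expected_deaths * 0.01))   # ~$1,250 / life saved
```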

Disambiguating connotations of “longtermism”

While caring about future people might be necessary for the case for x-risks to go through, longtermism is still a confusing term. Let’s disambiguate between 2 connotations of “longtermism”:

  1. A focus on influencing events far in the future.
  2. A focus on the long-term impact of influencing (often near-medium term) events.

The word longtermism naturally evokes thoughts of (1): visions of schemes intended to influence events in 100 years’ time in convoluted ways. But what longtermists actually care about is (2): the long-term effects of events like existential catastrophes, even if the event itself (such as AI takeover) may be only 10-20 years away! Picturing (1) leads to objections regarding the unpredictability of events in the far future, when oftentimes longtermists are intervening on risks or risk factors expected in the near-medium term.

The subtle difference between (1) and (2) is why it sounds so weird when “longtermists” assert that AGI is coming soon so we should discount worries about climate effects 100 years out, and why some are compelled to assert that “AI safety is not longtermist”. As argued above, I think AI safety likely requires some level of caring about future people to be strongly prioritized;[8] but I think the term longtermism has a real branding problem in that it evokes thoughts of (1) much more easily than (2).

This leaves longtermists with a conundrum: the word “longtermism” evokes the wrong idea in a confusing way, but caring about future people might be necessary to make prioritization arguments go through. I’d be interested to hear others’ suggestions on how to deal with this when introducing the argument for working on x-risks, but I’ll offer a rough suggestion:

  1. Introduce the idea of working on existential or catastrophic risks before bringing up the word “longtermism”.
  2. Be clear that many of the risks are near-medium term, but part of the reason we prioritize them strongly is that if the risks are avoided humanity could create a long future.
    1. Use analogies to things people already care about because of their effect on future humans, e.g. climate change.[9]

Reviewing previous articles

I’ll briefly review the arguments in previous related articles and discussion, most of which argued that longtermism was unnecessary for prioritizing existential risks.

"Long-Termism" vs. "Existential Risk"

Regardless of whether these statements [...about future people mattering equally] are true, or whether you could eventually convince someone of them, they're not the most efficient way to make people concerned about something which will also, in the short term, kill them and everyone they know.

I agree that longtermism has this branding problem and it is likely not best to argue for future people mattering equally before discussing existential risks.

A 1/1 million chance of preventing apocalypse is worth 7,000 lives, which takes $30 million with GiveWell style charities. But I don't think long-termists are actually asking for $30 million to make the apocalypse 0.0001% less likely - both because we can't reliably calculate numbers that low, and because if you had $30 million you could probably do much better than 0.0001%. So I'm skeptical that problems like this are likely to come up in real life.

I think the claim that “if you had $30 million you could probably do much better than 0.0001%” is non-obvious. $30 million for a 0.0001% reduction is as cost-effective as $3 billion for a 0.01% reduction, which is only 3x off from the target effectiveness range of the .01% Fund discussed above. This seems well within the range of estimation error, rather than something you can “probably do much better than”.
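Spelling out that comparison (treating both percentages as absolute x-risk reductions, as in the .01% Fund's framing):

```python
# Scott's hypothetical: $30M buys a 0.0001% (absolute) reduction in x-risk.
# Scale to a 0.01% reduction to compare against the .01% Fund's $100M-$1B target.
cost_per_0_01pct = 30e6 * (0.01 / 0.0001)   # = $3B per 0.01% of x-risk
print(cost_per_0_01pct / 1e9)               # 3.0 -> ~3x the Fund's $1B upper bound
```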

I also think the claim that “we can't reliably calculate numbers that low” is very suspect as a justification; I agree that we can’t estimate low-percentage existential risk impacts with much precision, but there must be some actions for which our best-guess estimate of the impact on existential risk is 0.0001% (or substantially lower); for example, a single day of work by an average AI safety researcher. It’s not obvious to me that our best-guess estimate for a marginal $30 million poured into AI risk reduction should be >0.0001%, despite us not being able to estimate it precisely.

  1. Simplify EA Pitches to "Holy Shit, X-Risk"
    1. This argues for >=1% AI risk and >=.1% biorisk being enough to justify working on them without longtermism, but doesn’t do any math on the tractability of reducing risks or a numerical comparison to other options.
  2. My Most Likely Reason to Die Young is AI X-Risk
    1. This argues that it’s possible to care a lot about AI safety based on it affecting yourself, your loved ones, other people alive today, etc. but doesn’t do a comparison to other possible causes.
  3. Carl Shulman on the common-sense case for existential risk work and its practical implications
    1. This argues that x-risk spending can pass a US government cost-benefit test based on willingness-to-pay to save American lives. The argument is cool in its own right but as EAs prioritizing time/money we should compare against the best possible opportunities to do good anywhere in the world, not base our threshold on the US government's willingness-to-pay.
  4. Alyssa Vance’s tweet about whether the longtermism debate is academic
    1. And Howie Lempel responding, saying he thinks he would work on global poverty or animal welfare if he knew the world was ending in 1,000 years
    2. Howie’s response is interesting to me, as it implies a fairly pessimistic assessment of tractability of x-risks given that 1,000 years would shift the calculations presented here by over an OOM (>10 generations).
    3. (Edited to add: Howie clarifies "I wouldn't put too much weight on my tweet saying I think I probably wouldn't be working on x-risk if I knew the world would end in 1,000 years and I don't think my (wild) guess at the tractability of x-risk mitigation is particularly pessimistic," along with additional thoughts.)
  5. antimonyanthony’s shortform argues longtermism isn’t redundant, making some similar points to the ones I’ve made in this post, and an additional one about the suffering-focused perspective leading to further divergence.
    1. “Arguments for the tractability of reducing those risks, sufficient to outweigh the nearterm good done by focusing on global human health or animal welfare, seem lacking in the arguments I’ve seen for prioritizing extinction risk reduction on non-longtermist grounds.”
    2. “But as far as I’ve seen, there haven’t been compelling cost-effectiveness estimates suggesting that the marginal dollar or work-hour invested in alignment is competitive with GiveWell charities or interventions against factory farming, from a purely neartermist perspective.”
    3. “Not all longtermist cause areas are risks that would befall currently existing beings… for those who are downside-focused, there simply isn’t this convenient convergence between near- and long-term interventions.”

Acknowledgments

Thanks to Neel Nanda, Nuño Sempere, and Misha Yagudin for feedback.
 

  1. ^

    It feels like suspicious convergence to me: Care about future people equally to current people? Work on X. Don’t care about future people at all? Work on X anyway!

  2. ^

    (Edited to add this footnote) By "approximately" I mean within 1-2 OOMs, due to the low resilience/robustness of many of the cost-effectiveness estimates (a 1-2 OOM difference would be a huge deal for more resilient/robust estimates).

  3. ^

    When practically making a career decision, one would need to consider the possibility of direct work which may be different from allocating dollars; e.g. one might be a particularly good fit for AI safety work, or alternatively have difficulty directly contributing and prefer the scalability of donations to near-term causes. But I think these rough estimates succeed in showing that without taking into account future people, it’s not a slam dunk to prioritize x-risks.

  4. ^

    Edited to add: as described by Linch, this effectiveness estimate should perhaps receive a moderate penalty (adjustment upwards to be a bit less effective) since a substantial portion of the estimate was informed by civilizational resilience measures like refuges, which wouldn't prevent most people from dying in case of a catastrophe.

  5. ^

    I’ve sometimes seen people I respect who work on AI risk give much higher tractability estimates than the 80,000 Hours article and myself (if anything, I’d lean more pessimistic than 80k); I tend to take these with a grain of salt, understanding that high estimates of the tractability of one’s own work might help with maintaining motivation, sanity, etc. Plus of course there are selection effects here.

  6. ^

    There’s a wide range of effectiveness within AI risk interventions: e.g. the effectiveness of “slam dunk” cases like funding ARC seems much higher than the last/marginal AI risk reducing dollar. Though this could also be true of some alternative near-term strategies, e.g. charity entrepreneurship!

  7. ^

    It’s very sensitive to moral weights between species. I’d love to hear from people who have done more in-depth comparisons here!

  8. ^

    One more concern I have is that caring about only present people feels “incompletely relative”, similar to how Parfit argued self-interest theory is incompletely relative in Reasons and Persons. If you don’t care about future people, why care equally about distant people? And indeed I have heard some arguments about focusing on AI safety to protect yourself / your family / your community etc., e.g. in this post. These arguments feel even more mistaken to me; if one actually only cared about themselves or their family / small community, it would very likely make sense to optimize for enjoyment which is unlikely to look like hunkering down and solving AI alignment. I concede that this motivation could be useful a la a dark art of rationality, but one should tread very carefully with incoherent justifications.

  9. ^
Comments

I agree thinking xrisk reduction is the top priority likely depends on caring significantly about future people (e.g. thinking the value of future generations is at least 10-100x the present).

A key issue I don't see discussed very much is diminishing returns to x-risk reduction. The first $1bn spent on xrisk reduction is (I'd guess) very cost-effective, but over the next few decades, it's likely that at least tens of billions will be spent on it, maybe hundreds. Additional donations only add at that margin, where the returns are probably 10-100x lower than the first billion. So a strict neartermist could easily think AMF is more cost-effective.

That said, I think it's fair to say it doesn't depend on something like "strong longtermism". Common sense ethics cares about future generations, and I think suggests we should do far more about xrisk and GCR reduction than we do today.

I wrote about this in an 80k newsletter last autumn:

 

Carl Shulman on the common-sense case for existential risk work and its practical implications (#112)

Here’s the basic argument:

  • Reducing existential risk by 1 percentage point would save the lives of 3.3 million Americans in expectation.
  • The US government is typically willing to spend over $5 million to save a life.
  • So, if the reduction can be achieved for under $16.5 trillion, it would pass a government cost-benefit analysis.
  • If you can reduce existential risk by 1 percentage point for under $165 billion, the cost-benefit ratio would be over 100 — no longtermism or cosmopolitanism needed.


Taking a global perspective, if you can reduce existential risk by 1 percentage point for under $234 billion, you would save lives more cheaply than GiveWell’s top recommended charities — again, regardless of whether you attach any value to future generations or not. 

Toby Ord, author of The Precipice, thinks there's a 16% chance of existential risk before 2100. Could we get that down to 15%, if we invested $234 billion?

I think yes. Less than $300 million is spent on the top priorities for reducing risk today each year, so $200 billion would be a massive expansion.

The issue is marginal returns, and where the margin will end up. While it might be possible to reduce existential risk by 1 percentage point now for $10 billion — saving lives 20 times more cheaply than GiveWell's top charities — reducing it by another percentage point might take $100 billion+, which would be under 2x as cost-effective as GiveWell top charities.

I don’t know how much is going to be spent on existential risk reduction over the coming decades, or how quickly returns will diminish. [Edit: But it seems plausible to me it'll be over $100bn and it'll be more expensive to reduce x-risk than these estimates.] Overall I think reducing existential risk is a competitor for the top issue even just considering the cost of saving the life of someone in the present generation, though it's not clear it's the top issue.

My bottom line is that you only need to put moderate weight on longtermism to make reducing existential risk seem like the top priority.

 

(Note: I made some edits to the above in response to Eli's comment.)

I agree with most of this, thanks for pointing to the relevant newsletter!

A few specific reactions:

The first $1bn spent on xrisk reduction is very cost-effective

This seems plausible to me but not obvious, in particular for AI risk the field seems pre-paradigmatic such that there aren't necessarily "low-hanging fruit" to be plucked; and it's unclear whether previous efforts besides field-building have even been net positive in total.

That said, I think it's fair to say it doesn't depend on something like "strong longtermism". Common sense ethics cares about future generations, and I think suggests we should do far more about xrisk and GCR reduction than we do today.

Agree with this, though I think "strong longtermism" might make the case easier for those who aren't sure about the expected length of the long-term future.

Taking a global perspective, if you can reduce existential risk by 1 percentage point for under $234 billion, you would save lives more cheaply than GiveWell’s top recommended charities — again, regardless of whether you attach any value to future generations or not. 

reducing it by another percentage point might take $100 billion+, which would be only 20% as cost-effective as GiveWell top charities.

Seems like there's a typo somewhere; reducing x-risk by a percentage point for $100 billion would be more effective than $234 billion, not 20% as cost-effective?

Thanks I made some edits!

This seems plausible to me but not obvious, in particular for AI risk the field seems pre-paradigmatic such that there aren't necessarily "low-hanging fruit" to be plucked; and it's unclear whether previous efforts besides field-building have even been net positive in total.

Agree though my best guess is something like diminishing log returns the whole way down. (Or maybe even a bit of increasing returns within the first $100m / 100 people.)

I think log returns is reasonable - that's what we generally assumed in the cost effectiveness analyses that estimated that AGI safety, resilient foods, and interventions for loss of electricity/industry catastrophes would generally be lower cost per life saved in the present generation than GiveWell. But that was only for the first ~$3 billion for AGI safety and the first few hundred million dollars for the other interventions.

I'm having trouble figuring out how to respond to this. I understand that it's kind of an academic exercise to see how cause prioritization  might work out if you got very very rough numbers and took utilitarianism very seriously without allowing any emotional considerations to creep in. But I feel like that potentially makes it irrelevant to any possible question.

If we're talking about how normal people should prioritize...well, the only near-term cause close to x-risk here is animal welfare. If you tell a normal person "You can either work to prevent you and everyone you love from dying, or work to give chickens bigger cages, which do you prefer?", their response is not going to depend on QALYs.

If we're talking about how the EA movement should prioritize, the EA movement currently  spends more on global health than on animal welfare and AI risk combined. It clearly isn't even following near-termist ideas to their logical conclusion, let alone long-termist ones.

If we're talking about how a hypothetical perfect philosopher would prioritize, I think there would be many other things they worry about before they get to long-termism. For example, does your estimate for the badness of AI risk include that it would end all animal suffering forever? And all animal pleasure? Doesn't that maybe flip the sign, or multiply its badness an order of magnitude? You very reasonably didn't include that because it's an annoying question that's pretty far from our normal moral intuitions, but I think there are a dozen annoying questions like that, and that long-termism could be thought of as just one of that set, no more fundamental or crux-y than the others for most people.

I'm not even sure how to think about what these numbers imply. Should the movement put 100% of money and energy into AI risk, the cause ranked most efficient here? To do that up until the point where the low-hanging fruit have been picked and something else is most effective? Are we sure we're not already at that point, given how much trouble LTF charities report finding new things to fund? Does long-termism change this, because astronomical waste is so vast that we should be desperate for even the highest fruit? Is this just Pascal's Wager? These all seem like questions we have to have opinions on before concluding that long-termism and near-termism have different implications.

I find that instead of having good answers to any of these questions, my long-termism (such as it is) hinges on an idea like "I think the human race going extinct would be extra bad, even compared to many billions of deaths". If you want to go beyond this kind of intuitive reasoning into real long-termism, I feel like you need extra work to answer the questions above that in general isn't being done.

But I feel like that potentially makes it irrelevant to any possible question.

The question is, I think, "how should FTX/SBF spend its billions?"

But I feel like that potentially makes it irrelevant to any possible question.

I see what you mean and I think I didn't do a good job of specifying this in the post; my impression is one question your post and the other posts I'm responding to are trying to answer is "How should we pitch x-risks to people who we want to {contribute to them via work, donations, policy, etc.}?" So my post was (primarily) attempting to contribute to answering that question.

In your post, my understanding of part of your argument was: thoughtful short-termism usually leads to the same conclusion as longtermism, so when pitching x-risks we can just focus on the bad short-term effects without getting into debates about whether future people matter and how much, etc. My argument is that it's very unclear if this claim is true[1], so making this pitch feels intellectually dishonest to some extent. It feels important to have people who we want to do direct work on x-risks working on it for coherent reasons, so intellectual honesty feels very important when pitching there; I'm less sure about donors and even less sure about policymakers, but in general trying to be as intellectually honest as possible while maintaining similar first-order effectiveness feels good to me.

It feels less intellectually dishonest if we're clear that a substantial portion of the reason we care about x-risks so much is that extinction is extra bad, as you mentioned here but wasn't in the original post:

I find that instead of having good answers to any of these questions, my long-termism (such as it is) hinges on an idea like "I think the human race going extinct would be extra bad, even compared to many billions of deaths".


A few reactions to the other parts of your comment:

If we're talking about how normal people should prioritize...well, the only near-term cause close to x-risk here is animal welfare. If you tell a normal person "You can either work to prevent you and everyone you love from dying, or work to give chickens bigger cages, which do you prefer?", their response is not going to depend on QALYs.

I agree, but it feels like the target audience matters here; in particular, as I mentioned above I think the type of person I'd want to successfully pitch to directly work on x-risk should care about the philosophical arguments to a substantial extent.

If we're talking about how the EA movement should prioritize, the EA movement currently  spends more on global health than on animal welfare and AI risk combined. It clearly isn't even following near-termist ideas to their logical conclusion, let alone long-termist ones.

Agree, I'm not arguing to change the behavior/prioritization of leaders/big funders of the EA movement (who I think are fairly bought into longtermism with some worldview diversification but are constrained by good funding opportunities).

If we're talking about how a hypothetical perfect philosopher would prioritize, I think there would be many other things they worry about before they get to long-termism. For example, does your estimate for the badness of AI risk include that it would end all animal suffering forever? And all animal pleasure? Doesn't that maybe flip the sign, or multiply its badness an order of magnitude? You very reasonably didn't include that because it's an annoying question that's pretty far from our normal moral intuitions, but I think there are a dozen annoying questions like that, and that long-termism could be thought of as just one of that set, no more fundamental or crux-y than the others for most people.

I agree with much of this except the argument for lack of emphasis on longtermism; I think there are lots of annoying questions but longtermism is a particularly important one given the large expected value of the future (also, in the first sentence you say a hypothetical perfect philosopher but in the last sentence you say "most people"?).

If there are lots of annoying questions that could flip the conclusion when only looking at the short-term this feels like an argument for mentioning something like longtermism as it could more robustly overwhelm the other considerations.

I'm not even sure how to think about what these numbers imply. Should the movement put 100% of money and energy into AI risk, the cause ranked most efficient here? To do that up until the point where the low-hanging fruit have been picked and something else is most effective? Are we sure we're not already at that point, given how much trouble LTF charities report finding new things to fund? Does long-termism change this, because astronomical waste is so vast that we should be desperate for even the highest fruit? Is this just Pascal's Wager? These all seem like questions we have to have opinions on before concluding that long-termism and near-termism have different implications.

These rough numbers should definitely not be taken too seriously to imply that we should put all of our resources into AI risk! Plus diminishing marginal returns / funding opportunities are a real consideration. I think Ben's comment does a good job describing some of the practical funding considerations here. I do think we should be very surprised if taking into account longtermism doesn't change the funding bar at all; it's a huge consideration and yes, I think it should make us willing to pick substantially "higher fruit".

  1. ^

    even if it's true in the majority of cases, in a substantial minority thoughtful short-termism would likely make different recommendations due to personal fit etc.

I think once you take account of diminishing returns and the non-robustness of the x-risk estimates, there's a good chance you'd end up estimating the cost per present life saved of GiveWell is cheaper than donating to xrisk. So the claim 'neartermists should donate to xrisk' seems likely wrong.

I agree with Carl the US govt should spend more on x-risk, even just to protect their own citizens.

I think the typical person is not a neartermist, so might well end up thinking x-risk is more  cost-effective than GiveWell if they thought it through. Though it would depend a lot on what considerations you include or not.

From a pure messaging pov, I agree we should default to opening with "there might be an xrisk soon" rather than "there might be trillions of future generations", since it's the most important message and is more likely to be well-received. I see that as the strategy of The Precipice, or of pieces directly pitching AI xrisk. But I think it's also important to promote longtermism independently, and/or mention it as an additional reason to prioritise xrisk a few steps after opening with it.

This paper estimates that $250B would reduce biorisk by 1%. Taking Ord’s estimate of 3% biorisk this century and a population of ~8 billion, we get: $250B / (8B * .01 * .03) = $104,167/life saved via biorisk interventions.

This estimate and others in that section overestimate the person-affecting value of existential risk reduction, because in the course of the century the people presently alive will be gradually replaced by future people, and because present people will become progressively older relative to these future people. In other words, the years of life lost due to an existential catastrophe relevant from a person-affecting perspective—and hence the person-affecting value of reducing existential risk—will diminish non-negligibly over a century, both as the fraction of people who morally count shrinks and as the average life expectancy of people in this shrinking group shortens.

(Adjusting the estimates to account for this effect would strengthen your point that prioritizing existential risk reduction may require assigning moral value to future people.)

To be fair, the same issue applies to animal welfare, on a presentist view or a narrow (as opposed to wide) necessitarian view. Probably a very small share and maybe none of the targeted animals exist at the time a project is started or a donation is made, since, for example, farmed chickens only live 40 days to 2 years, and any animals that benefit would normally be ones born and raised into different systems, rather than changing practices for any animal already alive at the time of reform. They aren't going to move live egg-laying hens out of cages into cage-free systems to keep farming them. It's the next group of them who will just never be farmed in cages at all.

Some GiveWell charities largely benefit young children, too, but if I recall correctly, I think donations have been aimed at uses for the next year or two, so maybe only very young children would not benefit on such a person-affecting view, and this wouldn't make much difference.

Other person-affecting views may recover much of the value of these neartermist interventions, but they do care about future people, and existential risks could dominate again in expected value per resource spent, but moreso s-risks and quality risks than extinction risks.

Probably a very small share and maybe none of the targeted animals exist at the time a project is started or a donation is made, since, for example, farmed chickens only live 40 days to 2 years, and any animals that benefit would normally be ones born and raised into different systems, rather than changing practices for any animal already alive at the time of reform. They aren't going to move live egg-laying hens out of cages into cage-free systems to keep farming them. It's the next group of them who will just never be farmed in cages at all.

Many animal interventions are also about trying to reduce the number of farmed animals that will exist in the future: averted lives. If you only care about currently living beings, that has no value.

Agreed.

(I was making a general point and illustrating with welfare reforms.)

Interesting points. Yes, this further complicates the analysis.

Good points!

Some GiveWell charities largely benefit young children, too, but if I recall correctly, I think donations have been aimed at uses for the next year or two, so maybe only very young children would not benefit on such a person-affecting view, and this wouldn't make much difference.

Agreed that this wouldn't make much of a difference for donations, although maybe it matters a lot for some career decisions. E.g. if future people weren't ethically important, then there might be little value in starting a 4+ year academic degree to then donate to these charities.

(Tangentially, the time inconsistency of presentists' preferences seems pretty inconvenient for career planning.)

I think this could make a difference for careers and education, but I'd guess not 10x in terms of cost-effectiveness of the first donations post-graduation. Most EAs will probably have already started undergraduate degrees by the time they first get into EA. There are also still benefits for the parents of children who would die, in preventing their grief and possibly economic benefits. People over 5 years old still have deaths prevented by AMF, just a few times less per dollar spent, iirc.

I'd guess few people would actually endorse presentism in particular, though.

I do think that it's interesting (and maybe surprising?) that best guesses might suggest marginal x-risk interventions and global dev interventions are within 1-2 OOMs of each other in cost-effectiveness, with a presentist-ish lens. Given the radically different epistemic states and quality of evidence that generated these estimates, as well as the large differences between the object level activities, as well as funding levels, our prior belief on this question before updating on the evidence might look quite different! 

jtm

Thanks again for writing this. I just wanted to flag a potential issue with the $125 to $1,250 per human-life-equivalent-saved figure for ‘x-risk prevention.’ 

I think that figure is based on a willingness-to-pay proposal that already assumes some kind of longtermism.
 

You base the range on Linch’s proposal of aiming to reduce x-risk by 0.01% per $100m-$1bn. As far as I can tell, these figures are based on a rough proposal of what we should be willing to pay for existential risk reduction: Linch refers to this post on “How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?”, which includes the proposed answer “we should fund interventions that we have resilient estimates of reducing x-risk ~0.01% at a cost of ~$100M.” 
 

But I think that the willingness to pay from Linch is based on accounting for future lives, rather than the kind of currently-alive-human-life-equivalent-saved figure that you’re looking for. (@Linch, please do correct me if I'm wrong!)

In short, saying that we should fund interventions at the $100m/0.01% bar doesn’t say whether there are many (or any) available interventions at that level of cost-effectiveness. And while I appreciate that some grantmakers have begun leaning more on that kind of quantitative heuristic, I doubt that you can infer from this fact that previously or currently funded work on ‘general x-risk prevention’ has met that bar, or even come particularly close to it.


So, I think the $125-$1,250 figure already assumes longtermism and isn’t applicable to your question. (Though I may have missed something here and would be happy to stand corrected – particularly if I have misrepresented Linch’s analysis!)

Of course, if the upshot is that  ‘general x-risk prevention’ is less cost-effective than the $125-$1,250 per currently-alive-human-life-equivalent-saved, then your overall point only becomes even stronger. 

(PS: As an aside, I think it would be a good practice to add some kind of caption beneath your table stating how these are rough estimates, and perhaps in some cases even the only available estimate for that quantity. I'm pretty concerned about long citation trails in longtermist analysis, where very influential claims sometimes bottom out to some extremely rough and fragile estimates. Given how rough these estimates are, I think it'd be better if others replicated their own analysis from scratch before citing them.)

I think the numbers work out assuming x-risk means (almost) everyone being killed and the percent reduction is absolute (percentage point), not relative:

$100,000,000 / (0.01% * 8 billion people) = $125/person

Thanks for your reply.

My concern is not that the numbers don't work out. My concern is that the "$100m/0.01%" figure is not an estimate of how cost-effective 'general x-risk prevention' actually is in the way that this post implies.

It's not an empirical estimate, it's a proposed funding threshold, i.e. an answer to Linch's question "How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?" But saying that we should fund interventions at that level of cost-effectiveness doesn't say whether there are many (or any) such interventions available at the moment. If I say "I propose that GiveWell should endorse interventions that we expect to save a life per $500", that doesn't by itself show whether such interventions exist.

Of course, the proposed funding threshold could be informed by cost-effectiveness estimates for specific interventions; I actually suspect that it is. But then it would be useful to see those estimates – or at the very least know which interventions they are  – before establishing that figure as the 'funding bar' in this analysis.

This is particularly relevant if those estimates are based on interventions that do not prevent catastrophic events but merely prevent them from reaching existential/extinction levels, as the latter category does not affect all currently living people, meaning that '8 billion people' would be the wrong number for the estimation you wrote above.

I agree with your points. I was responding to this point, but should have quoted it to be clearer:

"But I think that the willingness to pay from Linch is based on accounting for future lives, rather than the kind of currently-alive-human-life-equivalent-saved figure that you’re looking for."

I think the numbers can work out without considering future lives or at least anything other than deaths.

But I think that the willingness to pay from Linch is based on accounting for future lives, rather than the kind of currently-alive-human-life-equivalent-saved figure that you’re looking for. (@Linch, please do correct me if I'm wrong!)

I think the understanding is based on how many $$s the longtermist/x-risk portion of EAs have access to, and then trying to rationally allocate resources according to that constraint. I'm not entirely sure what you mean by "accounting for future lives," but yes, there's an implicit assumption that under no realistic ranges of empirical uncertainty would it make sense to e.g. donate to AMF over longtermist interventions. 

A moderate penalty to my numbers (from a presentist lens) is that at least some of the interventions I'm most excited about on the margin are from a civilizational resilience/recovery angle. However, I don't think this is a large effectiveness penalty, since many other people are similarly or much more excited on the margin about AI risk interventions (which has much more the property that either approximately everybody dies or approximately no one dies). 

So, I don't think elifland's analysis here is clearly methodologically wrong. Even though my numbers (and other analysis like mine) were based on the assumption that longtermist $$s were used for longtermist goals, it could still be the case that they are more effective for preventing deaths of existing people than existing global health interventions are. At least to first order, this should not be that surprising. That is, global health interventions were chosen under the constraint of being among the first existing interventions with a large evidential base, whereas global catastrophic risk and existential-risk-reducing interventions were chosen on the basis of (among other things) dialing back ambiguity aversion and weirdness aversion to close to zero.

I think the main question/crux is how much you want to "penalize for (lack of) rigor." GiveWell-style analyses have years of dedicated work put into them. Many of my gut pulls grew out of an afternoon of relatively clear thinking (and then maybe a few more days of significantly-lower-quality thinking and conversations, etc., that adjusted my numbers somewhat but not hugely). I never really understood the principled solutions to problems like the optimizer's curse and suspicious convergence.

PS: As an aside, I think it would be a good practice to add some kind of caption beneath your table stating how these are rough estimates, and perhaps in some cases even the only available estimate for that quantity. I'm pretty concerned about long citation trails in longtermist analysis, where very influential claims sometimes bottom out to some extremely rough and fragile estimates.

Strongly agreed. 

PS: As an aside, I think it would be a good practice to add some kind of caption beneath your table stating how these are rough estimates, and perhaps in some cases even the only available estimate for that quantity. I'm pretty concerned about long citation trails in longtermist analysis, where very influential claims sometimes bottom out to some extremely rough and fragile estimates.

I agree! I bolded the rough in the header because I didn’t want people to take the numbers too seriously but agree that probably wasn’t enough.

I tried to add a caption to the table before posting but couldn’t figure out how; is it possible / if so how do I do it?

jtm

Thanks for writing this! I think your point is crucial and too often missed or misrepresented in discussions on this.

A related key point is that the best approach to mitigating catastrophic/existential risks depends heavily on whether one comes at it from a longtermist angle or not. For example, this choice determines how compelling it is to focus on strategies or interventions for civilisational resilience and recovery.

To take the example of biosecurity: In some (but not all) cases, interventions to prevent catastrophe from biological risks look quite different from interventions to prevent extinction from biology. And the difference between catastrophe and extinction really does depend on what one thinks about longtermism and the importance of future generations.

Thanks for this really well-written post, I particularly like how you clarified the different connotations of longtermism and also the summary table of cost-effectiveness. 

I think one thing to note is that an x-risk event would not only wipe out humans, but also the billions of factory-farmed animals. Taking into account animal suffering would dramatically worsen the cost-effectiveness of x-risk work from a neartermist point of view. I think this implies longtermism is necessary to justify working on x-risk (at least until factory farming is phased out).

tl;dr I wouldn't put too much weight on my tweet saying I think I probably wouldn't be working on x-risk if I knew the world would end in 1,000 years and I don't think my (wild) guess at the tractability of x-risk mitigation is particularly pessimistic.

***

Nice post. I agree with the overall message of the post as well as much of Ben's comment on it. In particular, I think emphasizing the significance of future generations, and not just reducing x-risk, might end up as a crux for how much you care about: a) how much an intervention reduces x-risk v. GCRs that are unlikely to (directly?) lead to existential catastrophe; b) whether civilization just manages to avoid x-risk v. ends up on track to flourish as much as possible and last a lot longer than (e.g.) the typical mammalian species.

***

That said, I mostly came here to quickly caution against putting too much weight on this:

  1. Alyssa Vance’s tweet about whether the longtermism debate is academic
    1. And Howie Lempel responding, saying he thinks he would work on global poverty or animal welfare if he knew the world was ending in 1,000 years
    2. Howie’s response is interesting to me, as it implies a fairly pessimistic assessment of tractability of x-risks given that 1,000 years would shift the calculations presented here by over an OOM (>10 generations).

That's mostly for the general reason that I put approximately one Reply Tweet's worth of effort into it. But here are some specific reasons not to put too much weight on it and also that I don't think it implies a particularly pessimistic assessment of the tractability of x-risk.[1]

  1. I'm not sure I endorse the Tweet on reflection mostly because of the next point.
  2. I'm not sure if my tweet was accounting for the (expected) size of future generations. A claim I'd feel better about would be "I probably wouldn't be working on x-risk reduction if I knew there would only be ~10X more beings in the future than are alive today or if I thought the value of future generations was only ~10X more than the present." My views on the importance of the next 1,000 years depend a lot on whether generations in the coming century are order(s) of magnitude bigger than the current generation (which seems possible if there's lots of morally relevant digital minds). [2]
  3. I haven't thought hard about this but I think my estimates of the cost-effectiveness of the top non-longtermist opportunities are probably higher than implied by your table.
    1. I think I put more weight on the badness of being in a factory farm and (probably?) the significance of chickens than implied by Thomas's estimate.
    2. I think the very best global health interventions are probably more leveraged than giving to GiveWell.
  4. I find animal welfare and global poverty more intuitively motivating than working on x-risk, so the case for working on x-risk  had to be pretty  strong to get me to spend my career on it. (Partly for reasons I endorse, partly for reasons I don't.)
  5. I think the experience I had at the time I switched the focus of my career was probably more relevant to global health and animal welfare than x-risk reduction.
  6. My claim was about what I would in fact be doing, not about what I ought to be doing.

[1] Actual view: wildly uncertain and it's been a while since I last thought about this but something like the numbers from Ben's newsletter or what's implied by the 0.01% fund seem within the realm of plausibility to me. Note that, as Ben says, this is my guess for the marginal dollar. I'd guess the cost effectiveness of the average dollar is higher and I might say something different if you caught me on a different day. 

[2] Otoh, conditional on the world ending in 1,000 years maybe it's a lot less likely that we ended up with lots of digital minds?

Thanks for clarifying, and apologies for making an incorrect assumption about your assessment on tractability. I edited your tl;dr and a link to this comment into the post.

No worries!

Highlights from recent Twitter (and Twitter-adjacent) discussion about this post (or the same topic), with the caveat that some views in tweets probably shouldn't be taken too seriously:

  1. Matt Yglesias: The more I think about it, the more I find the “longtermism” label puzzling since the actual two main ideas that self-described longtermists are advancing about pandemics and AI governance actually involve large probabilities and short time horizons.
    1. Luke Muehlhauser: But tractability is low (esp. for AI), so valuing future people may be needed, at least to beat bednets (if not govt VSLs). That said, in a lot of contexts I think the best pitch skips philosophy: "Sometime this century, humans will no longer be the dominant species on the planet. Pretty terrifying, no? I think I want to throw up."
    2. Dustin Moskovitz: They are merely 4-20X better (without future people) doesn’t seem like a slam dunk argument, also * lots * of non-xrisk QALYs to be earned in bio along the way
    3. Luke Muehlhauser: Agree given Eli's estimates, but I think AI x-risk's tractability is substantially lower than the 80k estimate Eli uses (given actually available funding opportunities).
      1. (Alexander Berger and Leopold Aschenbrenner share similar skepticism about the tractability)
  2. Sam Altman: also somewhat confused by the branding of longtermism, since most longtermists seem to think agi safety is an existential risk in the very short term :)
    1. Vitalik Buterin: I think the philosophical argument is that bad AI may kill us all in 50 years, but the bulk of the harm of such an extinction event comes from the trillions of future humans that will never have a chance to be born in the billions of years that follow.
    2. Sam Altman: i agree that’s the argument, but isn’t the “unaligned agi could kill all of us, we should really prioritize preventing that” case powerful enough without sort of halfway lumping other things in?
    3. Vitalik Buterin: It depends! Worst-case nuclear war might kill 1-2 billion, full-on extinction will kill 8 billion, but adding in longtermist arguments the latter number grows to 999999...999 billion. So taking longtermist issues into account meaningfully changes relative priorities. Also, bio vs AI risk. A maximally terrible engineered pandemic is more likely to kill ~99.99% of humanity, leaving civilization room to recover from the remainder; AI will kill everyone. So it's another argument for caring about AI over bio.
  3. Matt Yglesias: What's long-term about "longtermism"? Trying to curb existential risk seems important even without big philosophy ideas... The typical person’s marginal return on investment for efforts to reduce existential risk from misaligned artificial intelligence is going to diminish at an incredibly rapid pace.
    1. My response thread: In sum, I buy the reasoning for his situation but think it doesn't apply to most people's career choices... To clarify: I agree that you don't need longtermism to think that more work on x-risks by society in general is great on the margin! But I think you might need some level of it to strongly recommend x-risk careers to individuals in an intellectually honest way
    2. Matt Yglesias: I find the heavy focus on career choice in the movement to be a bit eccentric (most people are old) but I agree that to the extent that’s the question of interest, my logic doesn’t apply.

This post estimates that animal welfare interventions may be ~10x more effective, implying $450 / human life-equivalent, though this is an especially rough number.

I have estimated corporate campaigns for chicken welfare are 10,000 times as effective as GiveWell's Maximum Impact Fund, for a moral weight of chickens relative to humans of 2 (most people think this is too high, but I think it is reasonable; see discussion here). 

I think most efforts to mitigate catastrophic risks also save lives by predictably mitigating subcatastrophic risks, and from a neartermist perspective, this significantly increases expected value. I’m most confident that this justifies prioritising biorisk and climate change, but less so with regards to nuclear war and AI.

For example, vaccine platform development, which is probably a key investment to mitigate biorisk, helped us fight COVID (which shouldn’t be classed as an existential risk), and COVID vaccines are estimated to have saved 20 million lives (https://www.imperial.ac.uk/news/237591/vaccinations-have-prevented-almost-20-million/).

Similar investments to combat biorisk will have predictably similar effects on endemic infectious diseases and future pandemics which aren’t quite existential risks.

Nitpick (that works in favor of your thesis): The cited estimate for the effectiveness of improving chicken welfare is far on the pessimistic end by assuming moral weight is proportional to neuron count. A behavioral argument would suggest that, since chickens respond to the suffering caused by factory farms in basically the same ways humans would, they have roughly equal moral weight in the ways that matter in this case. It's not clear how to combine these estimates due to the two envelopes problem, but the proportional-to-neurons estimate is basically the most pessimistic plausible estimate, and I think an all-things-considered estimate would make chicken welfare interventions look much more cost-effective.

This supports your thesis in that if you don't care about future people, factory farming interventions are plausibly 10x to 1000x more cost-effective than x-risk.

(It's possible that the cited estimate is wrong about the difficulty of improving chicken welfare, but I didn't look at that.)

I haven't yet read this post*, but wanted to mention that readers of this might also find interesting the EA Wiki article Ethics of existential risk and/or some of the readings it links to/collects.

So readers can have more of a sense of what that page is about, here are the first two paragraphs of that page:

The ethics of existential risk is the study of the ethical issues related to existential risk, including questions of how bad an existential catastrophe would be, how good it is to reduce existential risk, why those things are as bad or good as they are, and how this differs between different specific existential risks. There is a range of different perspectives on these questions, and these questions have implications for how much to prioritise reducing existential risk in general and which specific risks to prioritise reducing.

In The Precipice, Toby Ord discusses five different "moral foundations" for assessing the value of existential risk reduction, depending on whether emphasis is placed on the future, the present, the past, civilizational virtues or cosmic significance.[1]

*Though I did read the Introduction, and think these seem like important & correct points. ETA: I've now read the rest and still broadly agree and think these points are important.

Thanks for linking; in particular, Greg covered some similar ground to this post in The person-affecting value of existential risk reduction:

Although it seems unlikely x-risk reduction is the best buy from the lights of a person-affecting view (we should be suspicious if it were), given ~$10000 per life year compares unfavourably to best global health interventions, it is still a good buy

 

although it seems unlikely that x-risk reduction would be the best buy by the lights of a person affecting view, this would not be wildly outlandish.

I agree with the broad argument here, but it seems substantially understated to me, for a few reasons:

  1. A priori, I expect a Pareto distribution in magnitude of catastrophes, such that the vast majority of expectation of lost lives would lie in non-existential catastrophes. AI could distort this to some degree, but I would need to be far more convinced of the inside view arguments to believe it's going to dominate the outside view distribution.
  2. AI development is a one-off risk, where at some point it will either kill us, fix everything, or the world will continue in a new equilibrium. Nukes and other advanced weaponry will basically be a threat for the whole of the future of technological civilisation, not just the next century, so any risk assessment that only looks that far is underrepresenting them.
  3. The source for your probability estimate for nuclear war is very ambiguous and seems to seriously understate the expected number of lives lost. The 80k article links to Luisa's post, which cites two very inconsistent estimates to which 80k's number might refer: '1) a nuclear attack by a state actor and 2) a nuclear attack by a state actor in Russia, which is 0.03%, or 0.01% per year (unpublished GJI data from Open Philanthropy Project; Apps, 2015).' (which is a very specific subset of nuclear war scenarios); and '[the respondents to the 2008 Global Catastrophic Risk expert survey] see the risk of extinction caused by nuclear war as ... about 0.011% per year.' In any case, clearly the overall annual risk of <a nuclear war> should be much higher than annual risk of either <state launches nuke and nuke kills at least 1 person in Russia> or <extinction by nuclear war>.

Looking at the survey in question, it looks as though the plurality of expected deaths comes from 'non-extinction from all wars', with the next biggest expectations (depending on what 'at least 1 billion' pans out to be) shared approximately equally between 'non-extinction from nanotech', 'extinction from nanotech', 'extinction from AI', 'non-extinction from biopandemic', and 'non-extinction from nuclear wars'.

A structural note: I really like how you posted multiple relevant articles, summarized their claims, and gave your take on them. As someone with a lot of competing interests and things I want to read, this helped me reduce that load significantly on this subject matter, and it gives me a really clear road map for where to go next if I'm interested after reading your post.

I generally think that all these kinds of cost-effectiveness analyses around x-risk are wildly speculative and susceptible to small changes in assumptions. There is literally no evidence that the $250b would change bio-x-risk by 1% rather than, say, 0.1% or 10%, or even 50%, depending on how it was targeted and what developments it led to. On the other hand, if you do successfully reduce the x-risk by, say, 1%, then you most likely also reduce the risk/consequences of all kinds of other non-existential bio-risks, again depending on the actual investment/discoveries/developments, so the benefit of all the 'ordinary' cases must be factored in.

I think that the most compelling argument for investing in x-risk prevention without consideration of future generations is simply to calculate the deaths in expectation (eg using Ord's probabilities if you are comfortable with them) and to rank risks accordingly. It turns out that at 10% this century, AI risks 8 million lives per annum in expectation (obviously less than that early in the century, perhaps greater late in the century) and bio-risk is 2.7 million lives per annum in expectation (ie 8 billion x 0.0333 x 0.01). This can be compared to ALL natural disasters, which Our World in Data reports kill ~60,000 people per annum. So there is an argument that we should focus on x-risk to at least some degree purely on expected consequences.

I think it's basically impossible to get robust cost-effectiveness estimates for this kind of work, and most of the estimates I've seen appear implausibly cost-effective. Things never go as well as you thought they would in risk mitigation activities.

Let's imagine a toy model where we only care about current people. It's still possible for most of the potential QALYs to be far into the future. (If either lifetimes are very long, or quality is very high, maybe both.) So in this model, almost all the QALYs come from currently existing people living for a trillion years in an FAI utopia.

So let's suppose there are 2 causes: AI safety, and preventing nuclear war. Nuclear war will happen unless prevented, and will lead to half as many current people reaching ASI (either it kills them directly, or it delays the development of ASI). Let the QALYs of (no nukes, FAI) be X.

Case 1) Currently P(FAI)=0.99, and AI safety research will increase that to P(FAI)=1. If we work on AI safety, a nuclear war happens, half the people survive to ASI, and we get U=0.5X. If we work on preventing nuclear war, the AI is probably friendly anyway, so U=0.99X.

Case 2) Currently P(FAI)=0, but AI safety research can increase that to P(FAI)=0.01. Then if we prevent nuclear war, we get practically 0 utility, and if we work on AI safety, we get U=0.005X: a 1% chance of the 50% who survive living in a post-FAI utopia.

Of course, this is assuming all utility ultimately resides in a post-ASI utopia, as well as not caring about future people. If you put a substantial fraction of utility on pre-ASI world states, then the calculation is different (either by being really pessimistic about the chance of alignment, or by applying some form of time discounting so as not to care too much about the far future of existing people).
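(A minimal sketch of this toy model's arithmetic, with X normalized to 1; the helper function is mine, and the probabilities and halving effect are the numbers assumed above:)

```python
# Toy model: utility = P(FAI) * (fraction of current people reaching ASI) * X, with X = 1.
def utility(p_fai, nuclear_war):
    survivors = 0.5 if nuclear_war else 1.0  # nuclear war halves who reaches ASI
    return p_fai * survivors

# Case 1: baseline P(FAI)=0.99; AI safety work raises it to 1.0
print(utility(1.00, nuclear_war=True))   # work on AI safety -> 0.5 X
print(utility(0.99, nuclear_war=False))  # prevent nuclear war -> 0.99 X

# Case 2: baseline P(FAI)=0; AI safety work raises it to 0.01
print(utility(0.01, nuclear_war=True))   # work on AI safety -> 0.005 X
print(utility(0.00, nuclear_war=False))  # prevent nuclear war -> 0 X
```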

People generally don't care about their future QALYs in a linear way:  a 1/million chance of living 10 million times  as long and otherwise dying immediately is very unappealing to most people, and so forth. If you  don't evaluate future QALYs for current people in a way they find acceptable, then you'll wind up generating recommendations that are contrary to their preferences and which will not be accepted by society at large.

This sort of argument shows that person-affecting utilitarianism is a very wacky doctrine (also see this) that doesn't actually sweep away issues of the importance of the future as some say, but it doesn't override normal people's concerns by their own lights.
