
Longtermism, Aggregation, and Catastrophic Risk is a Global Priorities Institute Working Paper by Emma J. Curran. This post is part of my sequence of GPI Working Paper summaries.

Note: this summary is relatively in-depth. If you’d like a higher-level mini-summary, see this post.

Introduction

In this paper, Curran argues that longtermism conflicts with plausible deontic skepticism about aggregation from both the ex-ante and ex-post perspectives: long-term interventions, especially catastrophic risk mitigation, generate weak complaints from future individuals. Her argument doesn’t discount the value of future lives, as many objections to longtermism do; her results are simply contingent on the uncertainty with which we can shape the far future.

Here I’ve done my best to summarize Curran’s argument, making it more easily accessible while sacrificing as little argumentative strength as possible. However, I’m not a philosopher, so please let me know if there are areas for improvement.

A problem of aggregation

Late Train

Even if long-term interventions bring about far more good than short-term interventions in expectation, it does not follow that we are morally obligated to choose them. Consider:

  • Late Train: A train is about to run over a man lying on its tracks. You can pull a lever to stop the train, making some N number of passengers very late, or you can allow the train to kill the man.

Intuitively, no matter how large the train is, and thus how large N becomes, it is impermissible to allow the train to run over the man.[1] However, for some vast N, letting the man die would bring about far more good, since each passenger’s minor harm from tardiness aggregates into one large harm. Hence, Late Train exposes a tension: aggregation implies it would be (axiologically) best to allow the man to die, yet intuitively we are (deontically) obligated not to do so.

Remedies

This problem comes from aggregating small harms into a much larger sum.

There are two remedies for this:

  1. Axiological anti-aggregationism denies that small harms, like inconvenience, can ever aggregate to be worse than a large harm, such as dying (meaning letting the man die is not a higher-value consequence).
  2. Deontic anti-aggregationism agrees that for some N, letting the man die is a higher-value consequence, but denies that our obligations can aggregate many small complaints until they outweigh a large complaint, such as the man’s complaint against dying.

Curran focuses on deontic anti-aggregationism but believes much of the paper also applies to axiological anti-aggregationism.

The argument and terms

Other reasons for rejecting aggregation include:

  • The separateness of persons: Aggregating small complaints doesn’t make sense because there is no single individual who feels the sum of the small harms—people’s experiences are separate.
  • Disrespect: Aggregation is disrespectful to the individual who stands to incur morally significant harm because it fails to take seriously the significance of what is at stake for them.

Terms

  • Complaint/Claim: An individual has a complaint/claim in a decision if their wellbeing would be higher or lower under one of the options.
  • A complaint’s strength is a function of how much the individual’s wellbeing would be affected by the decision, discounted by the probability of that effect.
  • A complaint is relevant if it is sufficiently strong relative to competing (non-mutually-satisfiable) complaints.

Curran assesses two types of anti-aggregationism:

  1. Non-aggregationism disallows all aggregation. It chooses the option preferred by the individual with the strongest claim.
  2. Partial-aggregationism allows aggregation in some cases. It ultimately tells us to choose the option that satisfies the greatest sum of strength-weighted, relevant claims.

How to evaluate claims under risk

The ex-ante and ex-post perspectives

Long-term interventions are risky. We can evaluate risk from an ex-ante or ex-post perspective:

  • Ex-ante focuses on interventions’ impact on prospects.
    • Ex-ante evaluations forecast prospects by calculating the expected impact on each individual and choosing the option favored by those with the strongest prospective claims.
  • Ex-post focuses on interventions’ impact on outcomes.
    • In ex-post evaluations, we act as if we are looking retrospectively by ranking individuals by their claims’ strength, assuming we chose one option, then the other. For example, ‘the worst off assuming we chose option X,’ ‘the next worst off assuming we chose option X,’ and so on. Then, we choose the option favored by those with the strongest retrospective claims.

Dose Distribution

To better understand the difference, consider:

  • Dose Distribution: We have five doses of a medicine for a disease. Bernard has the disease. If we give him all of the medicine, he will live; if we don’t, he will die. Five others are each at risk of developing the disease. We are certain that exactly one will develop it. We can vaccinate each of the five using one dose, meaning none of them will develop the disease. Do we give all of the medicine to Bernard, or do we vaccinate all of the others?

Ex-ante, Bernard has a complaint of death in favor of giving him all five doses: otherwise, he will die. However, each of the five has a complaint in favor of vaccination equal to how much the intervention would improve their prospects: the removal of a 1/5 chance of death. Because Bernard’s claim of certain death is stronger than each of the others’ claims against a 1/5 chance of death, we should give the medicine to him.

Ex-post, we first assume we vaccinated the others and look retrospectively: Bernard died, giving him a complaint of death. We then assume we gave the medicine to Bernard instead and look retrospectively: Someone else died, giving them a complaint of death. Thus, the ex-post perspective says the claims of Bernard and whoever would die if we save Bernard have equal strength, meaning we could choose either option.

Hence, in Dose Distribution, ex-ante evaluation finds saving Bernard better than vaccination, whereas ex-post evaluation finds it equal to vaccination.
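
To make the two decision procedures concrete, here is a minimal Python sketch (my own illustration, not from the paper), assuming the harm of death is normalized to 1 and ignoring all lesser harms:

```python
# Minimal sketch (my own, not from the paper): Dose Distribution under both perspectives.
# Assumption: the harm of death is normalized to 1.

DEATH = 1.0

# --- Ex-ante: discount each person's potential harm by its probability. ---
bernard_claim = DEATH * 1.0     # Bernard dies for certain without the medicine
other_claim = DEATH * (1 / 5)   # each of the five faces a 1/5 chance of the disease

print("Strongest ex-ante claim for saving Bernard:", bernard_claim)  # 1.0
print("Strongest ex-ante claim for vaccinating:", other_claim)       # 0.2
# Ex-ante non-aggregationism follows the strongest claim: give Bernard the medicine.

# --- Ex-post: rank retrospective complaints under each option. ---
# If we vaccinate, Bernard dies: a full complaint of death.
# If we save Bernard, exactly one of the five dies: also a full complaint of death.
worst_if_vaccinate = DEATH
worst_if_save_bernard = DEATH
print("Ex-post strongest complaints tie:", worst_if_vaccinate == worst_if_save_bernard)  # True
# With a tie, the ex-post view permits either option.
```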

If you’d like further illustration of the difference between these perspectives, Curran offers two modifications of Dose Distribution that clarify the distinction (page 12 of the paper).

The ex-ante and ex-post perspectives leave us with four types of anti-aggregationism:

  1. ex-ante non-aggregationism
  2. ex-ante partial-aggregationism
  3. ex-post non-aggregationism
  4. ex-post partial-aggregationism

Curran argues all four conflict with longtermism: ex-ante anti-aggregationism finds the claims generated by long-term interventions too weak compared to those generated by short-term interventions, while ex-post anti-aggregationism prohibits investment in a large, important class of long-term interventions.

Risk, aggregation, and catastrophes

Consider this analogue for long-term interventions:[2]

  • Catastrophic Risk: You could either fund ten patients’ treatment, almost certainly saving their lives, or you could fund AI safety research in a country of 100 million people currently at a 1-in-a-million risk of dying from AI in their lifetimes. Your donation to AI safety research will reduce this risk to 5 in 10 million[3].

The ex-ante perspective

Ex-ante non-aggregationism

Ex-ante non-aggregationism finds each of the ten’s complaints to be significantly stronger than any of the 100 million’s. Each of the ten can claim near-certain death if not treated. Each of the 100 million, on the other hand, can claim only exposure to a 1-in-a-million risk of death if it isn’t mitigated. Thus, under ex-ante non-aggregationism, you are obligated to fund the short-term intervention and save the ten.
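
As a rough numerical sketch (my own, not Curran’s; harm of death normalized to 1), the ex-ante claim strengths, and the expected-value arithmetic from footnote 3, look like this:

```python
# Minimal sketch (my own illustration): ex-ante claim strengths in Catastrophic Risk.
DEATH = 1.0

patient_claim = DEATH * 1.0    # each of the ten: near-certain death without treatment
citizen_claim = DEATH * 1e-6   # each of the 100 million: at most a 1-in-a-million exposure

print(patient_claim / citizen_claim)   # each patient's claim is ~1,000,000x stronger

# The aggregate (axiological) comparison runs the other way:
risk_reduction = 1e-6 - 5e-7                              # research lowers each risk by 5 in 10 million
expected_lives_research = 100_000_000 * risk_reduction    # 50.0
expected_lives_treatment = 10 * 1.0                       # ~10 (ten patients, near-certainly saved)
print(expected_lives_research, expected_lives_treatment)
```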

Ex-ante partial-aggregationism

Ex-ante partial-aggregationism reaches the same conclusion, because the 100 million’s claims wouldn’t meet the partial-aggregationist criteria for aggregation:

  1. Looking at the separateness of persons, there is no single individual who will experience the sum of the harm.
  2. Choosing the AI research, which is overwhelmingly unlikely to benefit any given individual, seems disrespectful to those with the largest claims.
  3. Their claims fail the following relevance test[4]:
  • The Sacrifice Test: Say you can prevent another individual from incurring a larger harm, y, but only by incurring a smaller harm, x, yourself. A claim against x is relevant to a competing claim against y only if you would be morally permitted[5] to let the other individual incur y in order to avoid x, because x is sufficiently bad relative to y.
  • Under The Sacrifice Test, no individual among the 100 million, if they had the option, would be morally permitted to forgo saving ten people in order to avoid a 1-in-a-million chance of death.[6]

Thus, ex-ante partial-aggregationism would argue the 100 million’s claims aren’t relevant and therefore shouldn’t be aggregated, meaning you are obligated to fund the short-term intervention of saving the ten.

The ex-post perspective

Curran argues the ex-post perspective prohibits a large, important class of long-term interventions, and we should be skeptical of even those it permits, as ex-post nonconsequentialism faces significant criticism.[7]

Ex-post non-aggregationism

Assuming we chose to save the ten, each of the 100 million would have a complaint of death discounted by its improbability: a discount of 99.99995%, meaning each complaint carries only 0.00005% (5 in 10 million) of the weight of a full complaint of death. There are 100 million discounted complaints rather than 50 full complaints of death because it is not the case that 50 people will actually die if we don’t choose the AI research; rather, all 100 million face a tiny chance of death. If we aggregated the 100 million discounted complaints, they would amount to roughly 50 complaints of death. But we aren’t aggregating (that would leave Late Train unresolved), so each of the 100 million simply holds a complaint of death discounted by 99.99995%.

Assuming we chose the AI research, each of the ten gets a strong complaint of near-certain death.

Thus, you are obligated to save the ten, as each of their complaints is much stronger than any of the 100 million’s.
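
A small arithmetic sketch (my own, with the harm of death normalized to 1) of the discounting just described:

```python
# Minimal sketch (my own): ex-post complaint strengths in Catastrophic Risk.
DEATH = 1.0

# Assuming we save the ten, each of the 100 million holds a complaint of death
# discounted by 99.99995%, i.e., weighted at 0.00005% (5 in 10 million).
discount_weight = 5e-7
citizen_complaint = DEATH * discount_weight
print(citizen_complaint)                      # 5e-07 per person

# Aggregating would recover the equivalent of ~50 undiscounted complaints of death...
print(100_000_000 * citizen_complaint)        # ~50.0
# ...but the non-aggregationist refuses the sum and compares individuals one at a time.

# Assuming we fund the AI research, each of the ten holds a barely discounted complaint
# (treatment "almost certainly" saves them; 0.99 is an assumed stand-in for that).
patient_complaint = DEATH * 0.99
print(patient_complaint > citizen_complaint)  # True -> the ten's complaints dominate
```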

Ex-post partial-aggregationism

The ex-post partial-aggregationist wouldn’t aggregate the complaints either, because:

  1. As before, looking at the separateness of persons, there is no single individual who will experience the sum of the harm.
  2. Because choosing the AI research will, in reality, almost certainly save no one else’s life, it is disrespectful to the ten who will die as a result.
  3. Under The Sacrifice Test, you wouldn’t be permitted to forgo saving ten people from death merely to avoid a 5-in-10-million increase in your own chance of death.

Hence, ex-post partial-aggregationism finds the massively discounted complaints of death irrelevant when set against the ten’s only slightly discounted complaints of death, preventing you from aggregating the 100 million complaints and obligating you to save the ten.

Catastrophic Risk (Independent)

Curran notes that the ex-post partial-aggregationist argument depends on the fact that each of the 100 million’s risk of death is not independent of the others’—that is, either all of them live or all of them die from AI, meaning the AI research would most likely, in reality, save no one.

However, if each of their risks were independent (meaning the AI research would reduce each of the 100 million’s individual chance of death rather than the whole group’s collective chance of dying together), choosing the AI research would, in reality, most likely save ~50 lives. In this case, aggregation would be permitted by ex-post partial-aggregationism, and you’d be obligated to choose the long-term intervention of AI-safety research.
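
To see why independence matters here, a small sketch (my own illustration, not from the paper) contrasting the two risk structures:

```python
# Minimal sketch (my own): correlated vs. independent risk structures.
import random

N = 100_000_000
BASE_RISK, REDUCED_RISK = 1e-6, 5e-7   # per-person lifetime risk without / with the research

def realized_deaths_correlated(risk):
    """Catastrophic Risk: one shared event, so either all N die or none do."""
    return N if random.random() < risk else 0

def expected_deaths_independent(risk):
    """Catastrophic Risk (Independent): risks are separate, so the realized death
    toll concentrates tightly around N * risk (law of large numbers)."""
    return N * risk

# Correlated: in almost every realization the research saves no one at all.
print(realized_deaths_correlated(BASE_RISK))   # almost always 0

# Independent: the research prevents ~50 deaths in (nearly every) realization.
saved = expected_deaths_independent(BASE_RISK) - expected_deaths_independent(REDUCED_RISK)
print(saved)                                    # 50.0
```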

More broadly, this means there is a subset of long-term interventions that partial-aggregationism would justify (i.e., those that you can reasonably expect[8] to save lives in reality), but not catastrophic risk mitigation or similarly risky interventions.

Conclusion

Recap

Curran argues…

  1. Aggregation leads to a problematic conclusion in Late Train, which can be remedied with anti-aggregative moral theories.
  2. Ex-ante anti-aggregationism favors short-term interventions over long-term interventions because short-term interventions change individuals’ prospects far more significantly.
  3. Ex-post anti-aggregationism can prefer some long-term interventions, but not a key class of them, including catastrophic risk mitigation; it only justifies those you can reasonably expect[8] to save lives in reality.
    1. Hence, for longtermists skeptical of aggregation, “the important question becomes an empirical one: are there any very far-future interventions [that] we can reasonably expect[8] to save a life?”

Implications

  • Skeptics of aggregation should be similarly skeptical of longtermism and long-term interventions.
    • Ex-ante anti-aggregationism systematically prefers short-term interventions.
    • Ex-post anti-aggregationism only prefers long-term interventions that we reasonably expect[8] to save lives in reality, which excludes catastrophic risk mitigation.
  • Alternatively, you might view the paper as reason to doubt anti-aggregative moral theories because…
    • Anti-aggregationism fails to value the good that long-term interventions can bestow, making it insufficiently sensitive to high-value stakes.
    • It rules out many present-day practices, including catastrophic risk mitigation, making it deeply impracticable.
  • We may face a decision about which of our philosophical commitments to give up:
    • Is it easier to throw out our intuitions in Late Train, along with the separateness of persons?
    • Or is it easier to give up our obligation to those in the far future?
    • Or, perhaps the paper suggests a third, less obvious conclusion:
      • It seems a plausible moral theory ought not to tell us to let the man die in Late Train.
      • It also seems a plausible moral theory ought to permit us to attend to catastrophic risks.
      • However, Curran suggests, there may not be a moral theory that can consistently do both.
  1. ^

    "One might wonder about the larger societal consequences of making a train late if said train is sufficiently large; for example, if enough people are late, the world economy might crash, or many people who needed urgent care might die. For the sake of the argument, let’s presuppose that such catastrophes will not occur, and simply consider the cost to the individuals who are made slightly late."

  2. ^

    Curran considers Catastrophic Risk analogous to long-term interventions because it features a low probability of improving a large number of people's welfare.

  3. ^

    In expectation, this would save 50 lives—40 more than funding the medical treatment.

  4. ^

    Curran created this test based on a common explanation of relevance. She states the literature often uses explanations inspired by Alex Voorhoeve (Voorhoeve, 2014, 2017; Lazar, 2018; Steuwer, 2021b; Mann, 2021, 2022).

  5. ^

    "It is plausible to think that individuals are, to some extent, permitted to have stronger concern for their own lives and well-being; it is for this reason that we are permitted not to sacrifice our lives even if doing so would bring about the better outcome. However, such self-concern has limits; it would be, for example, impermissible if an agent failed to a make a trivial sacrifice, such as incurring a slight sore throat or headache, if doing so would save a life."

  6. ^

    If you'd like more clarity about how this test is used in cases of risk, Curran provides an example on pages 17-18.

  7. ^

    "Many [point] to the fact that it can be counter-intuitively risk sensitive and constraining (Ashford, 2003; Fried, 2012; John, 2014; Verweij, 2015; Frick, 2015), and that it faces problems with decomposition (Hare, 2016)."

  8. ^

    If you need some clarity as to what is meant by "reasonably expect," this further discussion of Catastrophic Risk should clarify things: "As the distribution of chances across the outcomes in Catastrophic Risk changes what [you] can expect to result from [your] actions, it also changes the sort of justifications [you] can offer... In Catastrophic Risk (Independent), [you] can offer a justification grounded in the fact that [you] expect to save fifty lives, [but] in Catastrophic Risk, [you] cannot do so. It is not the case that [you] can... justify [your] decision not to save [the ten] from death on the basis that, if [you] were to help them, then [you] would expect to bring about a state of affairs in which fifty other people die who could have been saved. Quite the opposite, [you are] almost certain of the fact that if [you] were to save the ten, [you] would not be bringing about a state of affairs in which anyone died when they otherwise would not have."

Comments

Thanks for the excellent summary, Nicholas! 

"50 people wouldn’t actually die if we don’t choose the AI research, instead, 100 million people would face a 0.00005% chance of death."

I'm a bit puzzled by talk of probabilities ex post.  Either 100 million people die or zero do. Shouldn't the ex post verdict instead just depend on which outcome actually results?

(I guess the "ex post" view here is really about antecedently predictable ex post outcomes, or something along those lines, but there seems something a bit unstable about this intermediate perspective.)

"50 people wouldn’t actually die if we don’t choose the AI research, instead, 100 million people would face a 0.00005% chance of death." I think, perhaps, this line is infelicitous. 

The point is that all 100 million people have an ex-post complaint, as there is a possible outcome in which all 100 million people die (if we don't intervene). However, these complaints need to be discounted by the improbability of their occurrence. 

To see why we discount, imagine we could save someone from a horrid migraine, but doing so creates a 1/100 billion chance some random bystander would die. If we don't discount ex-post, then ex-post we are comparing a migraine to death - and we'd be counterintuitively advised not to alleviate the migraine.

Once you discount the 100 million complaints, you end up with 100 million complaints of death, each discounted by  99.99995%. 

I hope this clears up the confusion, and maybe helps with your concerns about instability? 

Thanks! But to clarify, what I'm wondering is: why take unrealized probabilities to create ex post complaints at all? On an alternative conception, you have an ex post complaint if something bad actually happens to you, and not otherwise.

(I'm guessing it's because it would mean that we cannot know what ex post complaints people have until literally after the fact, whereas you're wanting a form of "ex post" contractualism that is still capable of being action-guiding -- is that right?)

Your guess is precisely right. Ex-post evaluations have really developed as an alternative to ex-ante approaches to decision-making  under risk. Waiting until the outcome realises does not help us make decisions. Thinking about how we can justify ourselves depending on the various outcomes we know could realise does help us. 

The name can definitely be misleading, I see how it can pull people into debates about retrospective claims and objective/subjective permissibility. 

 

Sorry I edited this as I had another thought.

I apologize for this confusion. I've updated the section that contained the inaccurate statement @Richard Y Chappell quoted.

Executive summary: Curran argues that longtermism conflicts with plausible deontic skepticism about aggregation from both ex-ante and ex-post perspectives, as long-term interventions like catastrophic risk mitigation generate weak complaints from future individuals compared to short-term interventions.

Key points:

  1. The Late Train thought experiment illustrates the problematic conclusion that aggregating small harms can outweigh a large harm, which can be remedied by anti-aggregative moral theories.
  2. Ex-ante anti-aggregationism finds long-term interventions generate weaker complaints than short-term interventions as they change individuals' prospects less significantly.
  3. Ex-post anti-aggregationism only justifies long-term interventions reasonably expected to save lives in reality, excluding catastrophic risk mitigation.
  4. Skeptics of aggregation should be similarly skeptical of longtermism, while the paper may cast doubt on anti-aggregative theories for insufficiently valuing long-term interventions.
  5. The conflict between intuitions in Late Train and the importance of long-term interventions suggests there may not be a moral theory that can consistently accommodate both.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
