This post is a part of Rethink Priorities’ Worldview Investigations Team’s CURVE Sequence: “Causes and Uncertainty: Rethinking Value in Expectation.” The aim of this sequence is twofold: first, to consider alternatives to expected value maximization for cause prioritization; second, to evaluate the claim that a commitment to expected value maximization robustly supports the conclusion that we ought to prioritize existential risk mitigation over all else. This post considers the implications of contractualism for cause prioritization. 

Executive Summary

  • Contractualism says that morality is about what we can justify to those affected by our actions. It explains why we care about morality by way of our interest in justifying ourselves to reasonable others. Insofar as we care about the relevant kind of justification, we should be able to feel the pull of contractualism. And insofar as contractualism captures moral judgments that consequentialist moral theories don't, we may be inclined to give some credence to the view. 
  • Contractualism says: When your actions could benefit both an individual and a group, don't compare the individual's claim to aid to the group's claim to aid, which assumes that you can aggregate claims across individuals. Instead, compare an individual's claim to aid to the claim of every other relevant individual in the situation by pairwise comparison. If one individual's claim to aid is a lot stronger than any other's, then you should help them. (That being said, contractualism is also compatible with saying: When the group is large enough and each group member’s claim is nearly as strong as the individual’s, you should help the group.) Contractualism, therefore, offers an alternative way of thinking about what it means to help others as much as possible with the resources available.
  • Accordingly, contractualism recommends a different approach to cost-effectiveness analysis than the one that’s dominant in EA. The standard EA view is that we should maximize expected value. The contractualist view is that insofar as we should maximize, we should be maximizing something like “the relevant strength-weighted moral claims that are addressed per dollar spent,” where the strength of an individual’s claim is largely determined by the gain in expected value for that individual if the act is performed.
  • So, it may be true that some x-risk-oriented interventions can help us all avoid a premature death due to a global catastrophe; maybe they can help ensure that many future people come into existence. But how strong is any individual's claim to your help to avoid an x-risk or to come into existence? Even if future people matter as much as present people (i.e., even if we assume that totalism is true), the answer is: Not strong at all, since the strength of a claim is discounted by the probability that the individual actually receives the benefit, and benefits aren't aggregated across persons. Since any given future person only has an infinitesimally small chance of coming into existence, they have an infinitesimally weak claim to aid. By contrast, you can help the world's poorest people a lot and with high confidence. So, given contractualism, the claims of the world's poorest people win.
  • Contractualism probably won't justify prioritizing all the things that EAs do related to global poverty. Essentially, it looks like it most clearly supports GiveWell-Top-Charity-like interventions—e.g., the kind of work that the Against Malaria Foundation (AMF) does—because of the high probability of significant impact.

1. Introduction

What, if anything, could justify shifting resources away from the Against Malaria Foundation (AMF) and toward efforts to mitigate existential risk (x-risk)?

One way to feel the force of this question is to imagine having to justify such a shift to a desperately poor person—someone who may well die, or whose child may well die, because of your decision to allocate resources elsewhere. What, if anything, could you say to such a person?

There might be an adequate answer. Even if so, it would be surprising if it were easy to make the case to someone in such dire circumstances. And insofar as you feel pressure to be able to give a satisfying case to such a person, you should feel the pull of contractualism—a moral theory that’s become increasingly popular over the last 40 years. The central contractualist thesis (at least as T. M. Scanlon (1998) understands it, which is the version of the position on which we’ll focus here) is that moral wrongness is a matter of unjustifiability to others (and moral rightness a matter of justifiability to others). This theory promises to explain why morality is motivating: we care about morality because we care about being able to justify ourselves to others.

The purpose of this post is to consider the question of cause prioritization from the perspective of a non-consequentialist moral theory. We argue that contractualism generally looks more favorably upon interventions targeting very poor presently-existing individuals than upon interventions directed at protecting future people, even when the expected value (EV) of the latter would be higher. Finally, we briefly consider the kinds of interventions that, you might think, would be especially important given contractualism: namely, interventions aimed at redressing injustices and those aimed at responding to claims grounded in special relations between beneficiaries and the benefactor. In those cases, we argue that contractualism does not require prioritizing the redress of injustices, or the claims of those to whom we stand in special relations, over the needs of badly-off distant strangers.

To be clear: this document is not a detailed vindication of any particular class of philanthropic interventions. For example, although we think that contractualism supports a sunnier view of helping the global poor than funding x-risk projects, contractualism does not, for all our argument implies, entail that many EA-funded global poverty interventions are morally preferable to all other options (some of which are probably high-risk, high-reward longshots). In addition, contractualism doesn’t vindicate (say) global-poor–targeting interventions on the same grounds that many other theories would.[1] For example, while some would favor antimalarial bednet distribution because of the number of DALYs that such an intervention averts, if contractualism favors antimalarial bednet distribution, then this is for very different reasons (as will become clearer in the next couple of sections). Still, we might have thought that contractualism couldn't provide any guidance at all when it comes to cause prioritization. If the arguments of this post are correct, that isn't true.

2. Contractualism

The canonical statement of contemporary contractualism is Thomas Scanlon’s What We Owe to Each Other (1998).[2] We’ll mostly (though, as will emerge, not entirely) treat Scanlon’s version of contractualism as representative of the view—or, at any rate, the aspects of the view relevant to our discussion.

Scanlon’s view has been the subject of an enormous amount of discussion[3]; philosophers disagree about how best to interpret it as well as about its implications for practical ethics. However, it’s safe to say that the heart of Scanlon’s view is the following thesis:

  • (C1) An act is wrong if and only if, and because, it is unjustifiable to others.[4]

Many moral theorists would agree that (as (C1) implies) the wrong acts are precisely those that are unjustifiable to others. But (C1) says that when an act is wrong, it's wrong because it's unjustifiable to others. (C1) is therefore incompatible with the (more commonplace) claim that when an act is wrong, it’s also unjustifiable to others as a consequence of whatever wrong-making property it has.[5]

Utilitarians, for instance, can accept that the wrong acts are precisely those that are unjustifiable to others, but they'll say that wrong acts are wrong not because they are unjustifiable to others but because they fail to maximize expected value (EV). And there is nothing special about utilitarianism in this respect. Virtually all non-contractualist moral theorists will hold that wrongness is grounded not in unjustifiability to others but in some other property, which might itself also ground unjustifiability to others. Hence (C1)’s distinctiveness.

What makes an act unjustifiable to others? Scanlon formulates his answer to this question in a few ways, but here’s a simple formulation for present purposes:

  • (C2) An act is unjustifiable to others (under the circumstances) if and only if, and because, any principle that would permit its performance (under the circumstances) could be reasonably rejected by someone other than the agent.[6]

We’ll clarify the post-“because” clause of (C2) below. But first, note that as (C1) and (C2) suggest, Scanlon holds that wrongness (of your act), unjustifiability (of your act) to others, and reasonable rejectability (of any principle that would permit your act) are a package deal: they’re all present in a given situation or all absent from it. Scanlon takes the co-presence of these properties to be a source of contractualism’s attractiveness, for their co-presence allows contractualism to vindicate the importance of morality and the possibility of moral motivation. Scanlon regularly repeats that we have powerful reasons to want to stand in relations of justifiability to others, or in what he calls relations of “mutual recognition.”[7] If morally right conduct is the conduct that allows us to stand in such relations with others, then morality turns out to be important and our ability to be moved by moral considerations turns out to be unmysterious.

Let’s clarify the post-“because” clause of (C2) with one of Scanlon’s examples:

Transmitter Room. The World Cup final is currently being played. Jones, a technician in the room containing the equipment transmitting the game’s worldwide television broadcast, has inadvertently come into contact with some exposed wires and is suffering very painful electric shocks. He is unable to extricate himself from his situation, but you can help him by turning off the machine with the exposed wires. Unfortunately, if you do this, then the World Cup broadcast will be shut down, and it won’t be able to be restarted for 10 minutes.[8]

All contractualists of whom we’re aware (and many other non-consequentialists) share Scanlon’s view that you ought to help Jones, even though doing so would (we can assume[9]) yield a much worse state of affairs overall than allowing him to continue being shocked. Suppose this is correct. How does the contractualist secure this verdict?

The contractualist first imagines some principles that could be thought to bear on the present situation. Consider four examples:

  • (P1) Do whatever would make things go best.
  • (P2) Do whatever would help the most people.
  • (P3) Prevent people from suffering serious harms unless doing so would result in misfortunes to others.
  • (P4) Prevent serious harms to some even if in doing so you would expose many others to minor inconveniences.

The contractualist now asks whether someone other than you could reasonably reject any of these principles. Scanlon discusses at great length what it takes to be able reasonably to reject a principle, with two constraints on reasonable rejection being especially important here. First, you can reasonably reject a principle only for what Scanlon calls “personal reasons,” i.e., reasons “tied to the well-being, claims, or status of individuals in [some] particular situation” (219). As Scanlon emphasizes, this requirement rules out various forms of interpersonal aggregation.[10] It isn’t possible for, say, five people to “combine” their personal reasons for objecting to a given principle into a stronger super-reason for objecting to this principle.

Second, and relatedly, whether you can reasonably reject a principle is determined by (among other things) the sizes of the burdens that would befall different parties as a result of your acting on this principle vs. an alternative. In particular, A can’t reasonably reject some principle, P, merely on the grounds that your acting on P would impose some burden on A: after all, it could work out that your acting on any principle other than P would impose much greater burdens on someone else. By contrast and all else equal, B can reasonably reject some principle, P*, on the grounds that your acting on P* would impose burdens on B far greater than those that would be imposed on anyone other than B by your acting on some alternative principle.[11]

Now consider how the contractualist will assess (P1)–(P4). On the one hand, your acting on (P4) would impose a burden on each of the millions of viewers, namely the inconvenience of missing out on 10 minutes of the World Cup. On the other hand, your acting on any of (P1)–(P3) would impose on Jones a vastly greater burden, namely continued seriously painful electric shocks. It thus seems clear that Jones’s personal reason for objecting to each of (P1)–(P3) is far stronger than anyone’s personal reason for objecting to (P4). More generally, each of the millions of viewers seems to be such that whatever personal reason she might have for objecting to a principle that would license your saving Jones is far less strong than Jones’s personal reason for objecting to any principle that would license your not saving him. So, no viewer can reasonably reject (P4); nor, it seems, can anyone else. Furthermore, it seems that Jones can reasonably reject each of (P1)–(P3) and, indeed, it seems that Jones can reasonably reject any relevantly similar principle. Given contractualism, then, saving Jones is the uniquely permissible act available to you.

We just explained what a contractualist would say about Transmitter Room in line with Scanlon’s own presentation of his ideas. But in what follows, we put talk of “principles” and “reasonable rejection” to the side. It will simplify matters, and do no damage to the relevant features of contractualism, to formulate our discussion primarily in terms of claims to assistance. For example, in Transmitter Room, each viewer, V, has (as we’ll put it) a claim to your allowing the broadcast to continue in virtue of the burden that would befall V as a result of your saving Jones instead; Jones has a far stronger claim to your saving him in virtue of the far more serious burden that would (continue to) befall him as a result of your allowing the broadcast to continue instead; and, given contractualism, you should satisfy a strong claim to your assistance over any number of individually far weaker claims.[12]

We’ve said enough to turn to contractualism’s implications for philanthropic interventions. Additional details about the theory, and about specific versions of it, will emerge along the way.

3. Welfare-Oriented Interventions

In Transmitter Room, it’s natural to say that the (sizes of the) claims to assistance of the various parties are grounded in welfare considerations. This section considers some philanthropic interventions naturally characterized as welfare-oriented: interventions aimed at improving the welfare of the present global poor (hereafter, “the global poor”) and various x-risk interventions. What we said above about Transmitter Room will illuminate what the contractualist should say about these interventions.

3.1. Global-Poor–Oriented and X-risk Interventions

Suppose you’re in a position to do either but not both of these things:

  • Make a substantial donation to some charity helping the global poor (by, say, funding the manufacture and distribution of a large number of anti-malarial bednets);


  • Make a substantial donation to some organization that will yield a tiny reduction of the probability of some extinction event (e.g., the destruction of humanity via a large asteroid’s collision with the earth).

We’ll present a three-step argument that, given contractualism, if you ought to do anything, you ought to donate to the global poor rather than try to mitigate x-risk, other relevant things equal.[13] The central plank of our argument is that, given contractualism, several people have moral claims to your donating to the global poor that are much stronger than even the strongest moral claims had by any individual to your trying to mitigate x-risk. After that, we’ll argue that our argument also supports the conclusion that, given contractualism, you ought to pursue interventions like AMF over suffering-risk-mitigating (s-risk-mitigating) interventions. We’ll conclude that contractualism supports interventions like AMF over x-risk interventions quite generally.

Step 1

First, we argue that contractualism recommends preventing:

  • small numbers of grave harms that are certain to occur without your intervention and certain not to occur with your intervention


  • over even far larger numbers of comparatively tiny such harms.

In defense of this step, let's return to Transmitter Room and note two things about the case.

First, Transmitter Room involves no uncertainty about what will happen following each of your available actions. If you turn off the power, then Jones will stop being shocked and the viewers will be deprived of 10 minutes of the game; if you don’t, then Jones will continue to be shocked and the viewers won’t be deprived of any of the game. 

Second, Transmitter Room plausibly counts as a case in which, in preventing one person from suffering serious harm, you would cause several others to suffer comparatively tiny harms. It isn’t a straightforward case of choosing whether to prevent one person from suffering a serious harm or to prevent many others from suffering comparatively tiny ones. But, if anything, it seems harder to justify your conduct to others when, in preventing one from suffering serious harm, you cause several others to suffer comparatively tiny harms than when you merely choose to prevent serious harm from befalling one rather than preventing comparatively tiny harms from befalling many others. 

So, if (as we argued in Section 2) contractualism yields the verdict that you ought to aid Jones in Transmitter Room, then, if anything, it more strongly supports the verdict that you ought to prevent small numbers of grave harms over even far larger numbers of comparatively tiny harms. This completes Step 1 of our argument.

One point before we proceed. Call a harm that is certain to occur without your intervention and certain not to occur with your intervention an “otherwise certain harm.” We don’t assume that contractualism recommends preventing a serious otherwise certain harm over any number of arbitrarily slightly individually smaller otherwise certain harms. For example, a contractualist can hold that you ought to prevent 1 million people from undergoing electric shocks slightly less painful than the ones Jones is suffering rather than preventing just one person from suffering electric shocks just like the ones Jones is suffering. Our argument is compatible with such a “limited aggregationist” form of contractualism (and with its denial).

Admittedly, some contractualists (and other non-consequentialists) reject all forms of aggregation, limited and unlimited. And, as indicated already, Scanlon himself insists that his theory is non-aggregationist, at least insofar as it forbids combining different individuals’ claims into super-claims of groups. However, Scanlon also rather famously argues that, sometimes, contractualism implies that the numbers count. For example, consider a case where you can prevent one person from dying or two others from dying but cannot aid all three. Suppose each person has a claim to your assistance of equal strength. Scanlon argues that you ought to save the two, by arguing that if you were to “just save” the one, or were to give the two groups equal chances to be saved, then the additional party on the side of the two would have a justified complaint against you, on the grounds that you would thereby act as though the case were a one-versus-one case, and so act in a way that is inappropriately responsive to his presence. The idea, then, is that contractualists ought to count numbers in at least some cases without aggregating interests or claims.[14]

Suppose that this argument, or something relevantly like it, succeeds. Now consider the following case:

Death/Paraplegia. You can prevent Nora from dying or n other people from getting permanent paraplegia, but you can't save everyone.

We find it plausible that if Scanlon’s argument for saving the greater number succeeds, then, for some n, you ought to save the n. Here’s our thinking: First, imagine a version of Death/Paraplegia in which n = 1. In this case, you ought to save Nora outright. Now imagine a version in which n = 2. In this case, if you were to save Nora, then, plausibly, the additional person on the side of the many has a complaint, for you would thereby treat the case as though the additional person weren't even there. (Recall that we’re supposing that Scanlon’s argument described above succeeds.) So, you ought to do something appropriately responsive to the additional person’s presence. Perhaps this will take the form of flipping a coin to determine whether to save Nora or the many; perhaps it will take the form of running a lottery heavily weighted on Nora’s side—the details won’t matter here. What matters is that whatever such an act would be, it would presumably be “closer” to saving the n than what it was permissible to do when n = 1 (namely, saving Nora outright).

But now imagine iterating this process over and over, increasing the size of n by 1 each time. Eventually, we think, you’ll get to a point where outright saving the n is the only acceptable thing to do. This suggests that Scanlonian contractualism can accommodate some aggregation of lesser bads, at least if Scanlon’s argument for saving the greater number is successful. We note, though, that this “iteration” argument doesn’t commit you to full aggregation of the sort that’s incompatible with (say) the intuitive verdict in Transmitter Room. If you can prevent Nora’s death or n papercuts (each had by a different person), then no matter how big n is, each papercut plausibly counts as an “irrelevant utility” (cf. Kamm 1998) in virtue of how much less significant it is than Nora’s death, and so ought to be ignored for purposes of determining which group to save.

Step 2

We now argue that, given contractualism, what goes morally for grave otherwise certain harms versus comparatively tiny otherwise certain harms, on the one hand, also goes morally for substantial probabilities of harms versus comparatively tiny probabilities of harms of roughly equal magnitude, on the other.

We first make an assumption about contractualism: The most plausible version of contractualism is (some version of) ex ante contractualism, hereafter “EAC.”[15] According to EAC, the strength of a person’s claim to your doing some act is grounded at least partly in the difference between the EV for this person of your doing this act and the EV for this person of your not doing this act.[16] EAC helps to explain why, in Transmitter Room, Jones has a much stronger claim to your assisting him than does any other person to your assisting her instead, and thus also why aiding Jones is the only way of acting available to you that would be justifiable to others: For all X, where X ranges over all people other than Jones, (EV_Jones(assisting Jones) - EV_Jones(not assisting Jones)) >> (EV_X(assisting X) - EV_X(not assisting X)).

EAC also helps to justify, from a contractualist perspective, certain attractive interventions in which very small risks of serious harms are imposed on large groups of people, even when doing so foreseeably causes some such harms. For example, as Frick (2015) points out, EAC seems well-placed to vindicate mass vaccination programs in which very small independent risks of very serious harms are imposed on many people who receive the vaccines, with the all-but-certain consequence that some people will suffer these very serious harms: It’s often in the ex ante interest of each would-be vaccinated person to get the vaccine in question, even though it’s obviously not in the ex post interest of anyone who ends up getting seriously harmed by such a vaccine to get one. Such vaccination programs can thus plausibly be taken to be justifiable to all affected individuals ex ante even if they end up causing serious harms to some. (As Frick also notes, ex post contractualists have a comparatively difficult time accommodating such interventions; given ex post contractualism, it seems that those who end up harmed by such interventions have legitimate complaints.)
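Frick’s point can be made concrete with toy numbers. The sketch below is purely illustrative; all the figures are hypothetical, chosen only to make the ex ante/ex post contrast vivid:

```python
# Toy illustration of the ex ante vs. ex post contrast in a mass vaccination
# program. All numbers are hypothetical.
n_vaccinated = 1_000_000
p_disease_death = 1e-3  # each person's risk of serious harm from the disease, unvaccinated
p_vaccine_harm = 1e-5   # each person's independent risk of serious harm from the vaccine

# Ex ante, every individual's expected risk of serious harm falls, so each
# person's EAC claim favors the program:
per_person_risk_reduction = p_disease_death - p_vaccine_harm
assert per_person_risk_reduction > 0

# Ex post, with a million vaccinees, some serious vaccine harms are all but
# certain to occur:
expected_harms = n_vaccinated * p_vaccine_harm  # 10 serious harms in expectation
p_at_least_one_harm = 1 - (1 - p_vaccine_harm) ** n_vaccinated
print(p_at_least_one_harm)  # ≈ 0.99995
```

On these numbers, EAC approves the program (each person’s risk falls ex ante), while an ex post view must reckon with the near-certainty that someone will be seriously harmed by it.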

Given EAC, it seems that just as some very serious otherwise certain harms morally outweigh any number of comparatively tiny otherwise certain harms, individual probabilities of harms morally outweigh any number of individual comparatively tiny probabilities of harms (faced by different individuals) of roughly equal magnitude. Consider this case:

Different Probabilities. Amy has probability .5 of dying within the next hour. 200 other people each have probability .01 of dying within the next hour. Death would be no worse or less bad for anyone among these 201 people than for anyone else among these 201 people. You can reduce Amy’s probability of dying within the next hour to 0 or do the same for the 200, but you can’t do this for all 201 people.

Given EAC, Amy’s claim to your eliminating her probability of dying within the next hour probably morally outweighs the competing claims. This is because, given EAC, Amy’s .5 probability of death probably relates morally to each .01 probability of death possessed by the 200 as Jones’s electric shocks relate morally to the 10 minutes of frustration that the World Cup final viewers would experience if you were to aid Jones. Amy’s claim trumps each claim with which it competes, just as Jones’s claim trumps each claim with which it competes. These considerations suggest a general lesson: Given EAC, you ought to make reductions in some people’s probabilities of suffering harms rather than making comparatively tiny reductions in even far more people’s probabilities of suffering harms of roughly equal magnitude. This completes Step 2 of our argument.
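The structure of the comparison can be sketched quickly, with the badness of death normalized to 1 for everyone (a simplifying assumption; the probabilities come from the case itself):

```python
# EAC claim strengths in Different Probabilities, with the badness of death
# normalized to 1 for everyone (a simplifying assumption).
HARM = 1.0

# Under EAC, a claim's strength tracks the gain in EV for that individual
# if you act on their behalf (here, eliminating their risk of death):
amy_claim = 0.5 * HARM     # Amy: probability of death drops from .5 to 0
rival_claim = 0.01 * HARM  # each of the 200: probability drops from .01 to 0

# Contractualist pairwise comparison: Amy against any *single* rival claim.
assert amy_claim > rival_claim  # 50x stronger; each rival claim is trumped

# Contrast: an aggregative EV comparison sums across persons instead.
total_rival_ev = 200 * rival_claim  # expected deaths averted by helping the 200
print(total_rival_ev > amy_claim)   # True: aggregation reverses the verdict
```

The point of the sketch is the comparison’s shape: the contractualist compares Amy’s claim with each rival claim one at a time, whereas the EV-maximizer sums the 200 claims and reaches the opposite verdict.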

Just as the contractualist doesn’t need to hold that one otherwise certain harm lexically morally outweighs any number of even arbitrarily slightly less serious otherwise certain harms, the contractualist doesn’t need to hold (and our argument doesn’t require) that, for any 0 < n < n+ ≤ 1, one instance of probability n+ of a harm of a given magnitude lexically morally outweighs any number of instances of probability n of harm of roughly equal magnitude. The contractualist can allow (and our argument is compatible with its being the case) that you ought to eliminate the probabilities of death possessed by 1,000 people who each have .49 probabilities of death rather than eliminate the .5 probability of death possessed by one other person.

Step 3

In this final step of our argument, we’ll defend these two claims:

  1. If you donate to the global poor, then you’ll thereby cause several people to undergo reductions in their probabilities of suffering a serious harm (death from malaria, say), whereas
  2. If you try to mitigate x-risk, then you’ll thereby cause a much larger number of people to undergo comparatively tiny reductions in their probabilities of suffering a harm of roughly equal magnitude (death from asteroid collision with the earth, say).[17]

Claim (1) would be obvious, or close to it, if trying to help the global poor were a matter of handing 1,000 bednets to 1,000 needy persons standing in front of you who will certainly die of malaria if not given the bednets and who will certainly live for many more happy years if given them. And some philanthropic acts may indeed be nearly this direct in character (e.g., cash transfers). But donating to support AMF-like interventions ordinarily doesn’t take such a form. For most philanthropists, it's more like randomly selecting n people in some region of the world each to receive a bednet.[18] This means that the pool of potential beneficiaries of such an intervention is very large, even if the number of actual beneficiaries is comparatively small and the individual reduction in the probability of harm that’s secured for each of the potential beneficiaries isn’t very large. Nevertheless, the diffuse and uncertain character of your act shouldn’t prevent us from acknowledging that donating to help the global poor decreases the probability of suffering a serious harm for each person in the pool of potential beneficiaries (even if these probability reductions are fairly small). This conclusion is enough to establish (1).

How about (2)? Restricting our attention to people, there are two groups of potential beneficiaries here: currently existing people and non-existent people. The currently existing people are the simple case. Here, it’s straightforwardly true that you’ll benefit a much larger number of people via trying to mitigate x-risk, as every currently existing person enjoys a reduction in their probability of suffering a significant harm (death). However, the beneficiaries of AMF-like interventions face a harm of the same magnitude, and the reduction in the probability of that harm that x-risk work secures for any currently existing person is very small by comparison. So, (2) is true for currently existing people.

What about non-existent people? Here, the case for (2) seems even stronger. Suppose we think of non-existent people as non-existent objects—things that, in some sense, haven’t acquired the property of existence. (This might be an inaccurate way to think of them, but it might nevertheless be an innocent way for present purposes.) Once we think of non-existent people this way, we should allow that there are very, very many non-existent, merely possible people, and that many (perhaps, strictly speaking, all) of them will undergo an expected decrease in risk of personal disaster as a result of your trying to mitigate x-risk. However, it seems inescapable that the biggest expected risk reductions that you’ll secure for any of these individuals will be much smaller than the biggest risk reductions that you’ll secure for at least some individuals by donating to help the global poor.[19]

There are two main considerations that recommend this conclusion.

First, the probability, relative to your current evidence, that any given non-existent individual will exist is extremely small. The simple argument for this: If men produce 2 trillion sperm in their lifetimes and women have some 350,000 eggs, the odds of any particular individual coming into existence even fixing the parents are minuscule.[20]
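Using the figures in the text, the back-of-envelope arithmetic looks like this:

```python
# Rough version of the "simple argument": even fixing the parents, the chance
# that one particular sperm-egg pairing (and so one particular possible
# person) is the one realized is vanishingly small. Figures are from the text.
sperm_per_lifetime = 2e12  # ~2 trillion sperm
eggs_per_lifetime = 3.5e5  # ~350,000 eggs

possible_pairings = sperm_per_lifetime * eggs_per_lifetime  # 7e17 combinations
p_particular_person = 1 / possible_pairings
print(p_particular_person)  # ~1.4e-18, before even conditioning on the parents meeting
```

And this is an overestimate of any given non-existent person’s chance of existing, since it assumes the parents exist, meet, and reproduce.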

Second, by hypothesis, your intervention yields a very small reduction of the probability of a disaster of the sort targeted by your intervention (asteroid collision with the earth, say). These considerations support the conclusion that, in trying to mitigate x-risk, you don’t confer on any individual a risk reduction of serious harm that is more than tiny compared to at least some of the risk reductions of roughly equally serious harms that you confer on individuals in donating to support AMF-like interventions. Hence, whether we focus on currently existing or non-existent people, (2) looks plausible.

Someone might object as follows:

All this depends on the risk of extinction and the amount of that risk that we can reduce. Sure, the probability of an extinction-causing asteroid strike may be very low; sure, we may not be able to do much to reduce the probability of at least some such asteroids hitting Earth. However, many people think that other risks are much higher—e.g., risks due to misaligned AI—and our ability to mitigate those risks much greater. So, while global poverty work may beat x-risk work for non-existent individuals, given the low probability that any one of them comes into existence, global poverty work doesn’t obviously beat x-risk work for currently existing individuals.

This objection is fair: the argument indeed turns on the relative magnitudes of both the risks of serious harm (death) and the expected risk reductions. However, it’s important to recognize two points. First, apart from AI-related threats, the standard view is that the risk of extinction is quite low in absolute terms (Ord famously estimates the threat of an extinction-causing asteroid strike at 1/1,000,000 and the highest non-AI anthropogenic threat at 1/30.) Moreover, it’s common to assume that efforts to reduce the risk of extinction might reduce it by one basis point—i.e., 1/10,000. So, multiplying through, we are talking about quite low probabilities. Of course, the probability that any particular poor child will die due to malaria may be very low as well, but the probability of making a difference is quite high. So, on a per-individual basis, which is what matters given contractualism, donating to AMF-like interventions looks good.
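To put rough numbers on the per-individual comparison (the one-basis-point figure is the post's stylized assumption, read here as an absolute 1e-4 cut in extinction risk; the per-child malaria figure is our hypothetical placeholder, not a GiveWell estimate):

```python
# Stylized per-person comparison; all figures are illustrative.
# An absolute one-basis-point cut in extinction risk raises each currently
# existing person's survival probability by at most 1e-4.
xrisk_gain_per_person = 1e-4

# Hypothetical per-child reduction in probability of death from malaria
# due to an AMF-style intervention (placeholder figure).
malaria_gain_per_child = 3e-3

# Contractualism compares claims pairwise, individual by individual:
ratio = malaria_gain_per_child / xrisk_gain_per_person  # ~30
```

On these stylized numbers, the child's claim is roughly thirty times stronger than any single person's x-risk claim, which is the per-individual contrast the argument relies on.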

Second, and turning our attention to AI-related extinction events, we should just concede: if the probability of extinction is high enough and the difference you can make large enough, then yes, that sort of x-risk work clearly beats global poverty work even given contractualism. However, there are some key caveats here.

To start, the probability of extinction needs to be high enough to affect currently existing humans. Otherwise, the point we made about the low probability of any particular non-existent individual’s coming into existence will be decisive.

Next, and again, reducing risk by one basis point will result in a significant discount of the ostensible benefit (preventing death).

Finally, there is massive disagreement about the odds of there being an AI-related extinction event in the next several decades, with many estimates being quite low indeed (as demonstrated by Open Philanthropy’s recent AI Worldview Contest). So, some people may have views about the level of risk and their prospects for mitigating it such that, even given contractualism, x-risk work beats global poverty work. However, we doubt that most do. So, we submit that most people should think that (A) if you donate to AMF-like interventions, then you’ll thereby cause several people to undergo reductions in their probabilities of suffering a serious harm, whereas (B) if you try to mitigate x-risk, then you’ll thereby cause a much larger number of people to undergo comparatively tiny reductions in their probabilities of suffering a harm of roughly equal magnitude.

This completes Step 3 of our argument.[21] Together, steps 1–3 yield the verdict that, given contractualism, you ought to do (Poor) over (Extinction), other relevant things equal. And we can generalize beyond this conclusion: A contractualist perspective will generally encourage philanthropic interventions reducing present people’s probabilities of suffering great harms over interventions reducing even far more numerous not-yet-existent people’s much smaller probabilities of suffering comparably sized harms.

3.2. S-risk Interventions?

So far, we’ve made a contractualist case for helping the global poor over x-risk work. But some suffering risk (s-risk) interventions might try to reduce the threat of harms that would be far worse than even the worst harms faced by the global poor. For example, death due to malaria is very bad, but it would be far worse to spend decades or centuries being tortured by sadistic misaligned AI. Presumably, some interventions targeting s-risks are at least partly aimed at reducing risks of relevantly similar outcomes.[22] Moreover, it’s arguable that, given contractualism, A can have a stronger claim to your assistance than B does in virtue of the fact that you’re in a position to make a small reduction in A’s probability of suffering a horrendous harm and you’re in a position to make a much larger reduction in B’s probability of suffering a great but not horrendous harm. For example, it’s plausible that, given contractualism, if you can reduce A’s probability of being kept alive and tortured for 50 years and then killed from .0002 to .0001 or you can reduce B’s probability of dying now from .2 to .1, then A has a stronger claim to your assistance than B.[23] If this is correct, then it might be that some s-risk interventions that make tiny differences to people’s probabilities of suffering the harms that they target are preferable, given contractualism, to some interventions targeting the global poor that make much larger differences to people’s probabilities of suffering the harms that they target.

This complication doesn’t undermine our argument that EAC supports trying to help the global poor over mitigating x-risk. Rather, the present complication brings out that some future-oriented interventions might be relevantly dissimilar from attempts to mitigate x-risk. In so doing, it raises the question of whether it might be appropriate, given contractualism, to finance interventions targeting some extremely remote s-risks, in particular risks of horrendous individual sufferings, over interventions aiding the present global poor.

Again, everything will come down to the probabilities. We grant that, given contractualism, a person facing a small risk of a fate far worse than death could have a stronger claim to your assistance than another person facing a far greater risk of (“mere”) death. However, the probability of these kinds of s-risks may be very low. After all, the relevant probability here isn't just the probability of misaligned AI, but the probability of a very specific kind of misalignment, a considerable amount of power, the AI being able to create suffering that's orders of magnitude worse than death (to offset the low probability of the situation in the first place), etc.

On top of all that, it's important to consider that when the probability of negative utility gets sufficiently low, contractualists may well have an independent reason to dismiss it as irrelevant utility, as mentioned earlier. After all, it would be very surprising if contractualists were willing to accept expected value theory, where any decrease in probability can be offset by a corresponding increase in the (dis)value of the option in question. In brief, this is because the point of discounting possible harms by their probabilities is not to satisfy the axioms of a particular decision theory; instead, it's to capture what counts as reasonable within a (perhaps somewhat idealized) community, as the ultimate goal here is some kind of justifiability to others. Just imagine someone trying to justify his decision not to spare a child from contracting malaria to that very child by saying that he thought it was more important to make a marginal difference to the already-extraordinarily-low probability of some future people’s suffering intensely due to malicious AI. Even if the per-individual EV of the s-risk effort is higher, the justification seems quite strained.

We conclude that contractualism generally favors interventions targeting the global poor over x-risk and s-risk interventions.

In the rest of this post, we extend our discussion to justice-oriented interventions.

4. Injustice-targeting and Special-relations-based Interventions

We’ve argued that contractualism recommends interventions like AMF over x-risk interventions. But someone might worry that, in fact, contractualism recommends neither type of intervention, instead preferring interventions targeting injustices and interventions based on special relations. The thought goes: because contractualism prizes fairness and justifiability to others, it will strongly prefer interventions that address unfairness and are sensitive to special duties to the near and dear. What should the contractualist say here?

Consider, to take just one prominent example of an injustice-focused intervention, the much-discussed possibility of providing reparations for slavery to present-day African-Americans with ancestors who were slaves in the antebellum U.S. South. The present question, applied to reparations, is whether EAC implies that (American) philanthropists ought to fund efforts to lobby the US Government to pay reparations over, say, funding efforts to help the global poor. If EAC is committed to the priority of justice-based claims over welfare-based claims, then, perhaps, philanthropists would be wrong to prioritize the latter over the former.

There are several assumptions baked into this suggestion, but let's focus on the basic question of whether EAC invariably requires the priority of justice-based claims over welfare-based claims, as that will suffice for our purposes. With that in mind, consider this case:

Window/Agony. Bill recently threw a rock through Ann’s window out of pure malice. Bill now has the ability either to fix Ann’s window or to prevent Carl, a total stranger, from suffering an extremely slow and agonizing death, but he cannot do both of these things.

Obviously, Bill has wronged Ann (in a particularly direct and objectionable manner), and Ann has a claim of corrective justice against Bill that he fix her window (among other things). Nevertheless, we find it plausible that Bill ought to save Carl. Our core thought is straightforward: Carl’s plight is so much more serious than Ann’s that Bill ought to help him rather than satisfy Ann’s (entirely valid) claim of corrective justice against him. (Given the circumstances, we also find it plausible that it would be immoral for Ann to press her claim against Bill.) This suggests that claims of corrective justice do not necessarily trump purely welfare-grounded claims. We need to consider the strengths of the claims on both sides. The case of American slavery is interesting precisely because it seems to ground such strong justice-based claims. However, even in this case, it isn’t clear that the claim of any current would-be beneficiary of reparations is stronger than, say, the claim of a child who would die from malaria without aid. And if that’s right, then the mere existence of justice-based claims doesn’t immediately show that philanthropists ought to act in any particular way.

Welfare and injustice aren't the only possible contributors to claim strength. For example, some find it plausible that our fellow citizens have special claims to our assistance grounded in our co-citizenship. More generally, special relations between a needy party and the potential benefactor are often taken to amplify the strength of the needy party’s claim to assistance. What should the contractualist say about interventions sensitive to such relations?

Considering all possible special relations that give rise to special claim-strengths would obviously be impossible in the present context, but we think that we can defend some general claims about this topic without getting too far into the weeds. In short, you don’t need to dismiss the validity of such contributors to claim strength to be skeptical that they yield plausible contractualism-friendly justifications for philanthropic interventions targeting much-better-off individuals who satisfy the relevant conditions over much-worse-off individuals who don’t, or ones securing much smaller EV increases for individuals who satisfy the relevant conditions over ones securing much larger EV increases for individuals who don’t.[24] 

For example, even if co-citizenship is a claim-strength enhancer, we doubt that a plausible contractualist case can be made for aiding homeless people in your own affluent nation over far worse-off people on the other side of the planet.[25] Welfare considerations, taken on their own, are not everything, given most versions of contractualism, but they are extremely important. And so we suspect that, given contractualism, interventions like AMF will generally fare better than interventions sensitive to special relations like co-citizenship targeting much better-off people and yielding much smaller EV increases. By contrast, interventions targeting (say) homeless people in your own nation might well secure significantly larger EV increases for worse-off people than x-risk or injustice-targeting interventions, and so might well fare better than such interventions, by the lights of contractualism.

5. Conclusion

Many EAs may be more sympathetic to some form of consequentialism than they are to Scanlon’s contractualism. Still, they might put some credence in a view of this kind. And if they do, then it’s worth considering the implications of contractualism for cause prioritization. As we’ve argued, we get quite a different picture than the one that stems from a general commitment to maximizing EV. Indeed, if contractualism is true, then there are many cases in which it would be wrong to maximize EV. Instead, insofar as contractualism supports maximization, it tells us to maximize something like “the relevant strength-weighted moral claims that are addressed per dollar spent”—where, as we’ve seen, the strength adjustment involves discounting the claim by the probability of being able to help the individual in question. Since those probabilities are very low for any particular future individual, and since contractualism rejects the view that a sufficient number of extremely weak claims can sum to outweigh a very strong claim, the view generally favors reliably helping the global poor (whose claims are often very strong) over many other options.



The post was written by Bob Fischer and an anonymous collaborator. Thanks to the members of WIT and Emma Curran for helpful feedback. The post is a project of Rethink Priorities, a global priority think-and-do tank, aiming to do good at scale. We research and implement pressing opportunities to make the world better. We act upon these opportunities by developing and implementing strategies, projects, and solutions to key issues. We do this work in close partnership with foundations and impact-focused non-profits or other entities. If you're interested in Rethink Priorities' work, please consider subscribing to our newsletter. You can explore our completed public work here.


  1. ^

     That being said, it’s perfectly compatible with contractualism to prioritize interventions that will help many people per dollar over those that don't. This is because there are so many people who are roughly on a par in terms of being badly off. And all else equal, if you can help more very badly off people, then contractualism says you should, as there probably isn’t any principle that someone couldn’t reasonably reject that would justify assisting n very-badly-off strangers when you could just as easily help n+1 very-badly-off strangers. What, after all, would you say to the +1?

  2. ^

     Contractualism is distinct from contractarianism, a moral and political theory whose central historical advocate is Thomas Hobbes (1651) and whose chief contemporary proponent is David Gauthier (1986).

  3. ^

     Including on episode 6 of season 1 of the TV series The Good Place, entitled “What We Owe to Each Other.”

  4. ^

     This isn’t a direct quote, but it’s close enough. See, e.g., Scanlon (1998: 189). A qualification: Scanlon says that contractualism isn’t a comprehensive theory of moral wrongness. Rather, according to Scanlon, contractualism is “meant to characterize…only that part of the moral sphere that is marked out by certain specific ideas of right and wrong, or ‘what we owe to others’” (178). (Parenthetical page number citations are to Scanlon (1998).) So, it might be clearer to formulate (C1) as the claim that an act is wrong by the lights of the dimension of morality concerned with what we owe to others if and only if, and because, it is unjustifiable to others. However, we’ll omit such qualifications as they aren’t relevant to philanthropic decision-making. If we owe it to others to favor GHW-like interventions over x-risk interventions, then that’s enough for practical purposes.

  5. ^

     In fact Scanlon can be interpreted as endorsing an even tighter connection between unjustifiability to others and wrongness than the one that we have attributed to him, for he can be read as holding that wrongness is identical to unjustifiability to others. Many people in fact read him this way; indeed, the Stanford Encyclopedia of Philosophy article about his view attributes this view to him without defense or citation; see Ashford and Mulgan (2018: Section 1).

  6. ^

     Like (C1), (C2) is not a direct quotation, but see Scanlon (1998: 153, 189, 195).

  7. ^

     See for example Scanlon (1998: Chapter 4, Sections 3–5).

  8. ^

     This case is adapted from Scanlon (1998: 235).

  9. ^

     “We can assume” because many contractualists will deny that it is worse overall for the millions to suffer their inconveniences than for Jones to suffer his electric shocks, either on the grounds that nothing is just plain worse than anything else or on other grounds. But this isn’t part of contractualism: it’s a separate claim that some non-consequentialists find plausible on independent grounds. In any case, we can safely ignore this debate here.

  10. ^

     See Scanlon (1998: 229).

  11. ^

     See Scanlon (1982: 111; 1998: 229–230).

  12. ^

     This talk of “claims” is a commonplace in contractualist and contractualism-adjacent moral theory; see for example Voorhoeve (2014) for a representative discussion. Corresponding talk of “complaints” (against a person grounded in his failing to assist one or against a principle) is also common. See Scanlon (1998: 229ff) for relevant discussion.

  13. ^

     We’ll bracket the “if you ought to do anything” clause in what follows, but we include it here to flag that there’s some room for debate about whether contractualism implies that you have any obligations at all regarding your philanthropic activities. However, on the assumption that you do have some obligations, we think they’re structured as we describe here. 

  14. ^

     This “tiebreaker” argument (as it has come to be called) has been the object of a great deal of discussion, much of it critical. See, e.g., Otsuka (2000).

  15. ^

     This is a point of departure on our part from Scanlon, for Scanlon endorses not EAC but ex post contractualism. See Frick (2015) for discussion. It has been argued that ex post contractualism also recommends aiding present needy individuals over x-risk mitigation; see Curran (forthcoming). So, apart from any principled reasons to prefer EAC to ex post contractualism, it’s valuable to show that we get the same practical result either way.

  16. ^

     There are almost certainly additional factors that contribute to the strength of a person’s claim, given EAC, but the EV difference factor is the one most important to our present purposes.

  17. ^

     We don’t mean that it would be roughly equally bad overall for a death due to asteroid collision with the earth to occur as for a death due to malaria to occur. The former would be overall vastly worse than the latter, given that it would also involve the deaths of an enormous number of other morally significant individuals, human and non-human alike, the extinction of humanity, and other such horrendous outcomes. We mean only that a person’s death due to asteroid collision with the earth taken on its own and a person’s death due to malaria taken on its own are roughly equally serious.

  18. ^

     Our thinking here is that when you support AMF-like interventions, there are many potential beneficiaries, but it’s also virtually certain that there will be several actual beneficiaries. This seems to make the act relevantly like randomly selecting some people from a large group each to receive a bednet. We could avoid this complexity by focusing on different poverty-focused interventions, like direct cash transfers, where the impact on an individual doesn't depend on that individual counterfactually suffering some specific harm and the likelihood of some benefit or other is higher. But the argument is stronger if it works even for AMF, so we focus on it here.

  19. ^

     We emphasize that contractualism, as we understand it, doesn’t utterly ignore the claims of future people. It merely holds that the strengths of their claims are appropriately sensitive to the sorts of probability considerations just mentioned.

  20. ^

Someone might object that we don't owe our justifications de re to each non-existent person; instead, we owe them de dicto to "future people," whoever they happen to be (see, e.g., Hare 2007). Given as much, the probability of benefitting future people is much higher. However, this isn't a plausible move for contractualists to make. The appeal of contractualism is partially based on its ability to explain why we care about morality at all: namely, in terms of the value we place on mutual recognition and acting in ways that are justifiable to others. However, there's a psychological constraint here: most people don't care about justifying themselves to people who might live in the far future. So, if we go for the de dicto interpretation of contractualism, we end up with a view that creates the very problem that contractualism was supposed to solve: that is, we end up with a moral view where the rightness and wrongness of actions is based on a property ((un)justifiability to others) that is too far from our actual concerns. So, contractualists should stick with the de re interpretation of their view.

  21. ^

     Our Meinongian treatment of not-yet-existent individuals is actually helpful to x-risk interventions, as many philosophers (including many contractualists) deny that future people have claims to our assistance at all, whereas treating these individuals as somehow real makes it more plausible that they have claims to our assistance.

  22. ^

     Not all risks that might be classified as “s-risks” are risks even partly of large individual sufferings. For example, a risk of creating several googolplexes of insects throughout the cosmos, each of whom would have an on-balance hedonically slightly negative life, will qualify as an s-risk on some classifications, but this is not a risk even partly of large individual sufferings. Presumably, though, misaligned AI could generate the relevant kinds of situations.

  23. ^

     Consider a version of EAC on which the sole determinant of the strength of a person’s claim is the difference between (a) the claimant’s EV of the claim’s being satisfied and (b) the claimant’s EV of the claim’s not being satisfied. That view will clearly yield this verdict, given suitable plausible assumptions about the personal badness of 50 years of torture. But there are other plausible versions of EAC that will yield this verdict too. For example, versions of EAC that include prioritarian assumptions—e.g., that the lowness of the EV for a given person of your not assisting him makes a difference in its own right to the strength of his claim to your intervention—will have the same upshot. (Many proponents of EAC accept views like this; accordingly, many proponents of EAC don’t treat positive and negative utility symmetrically, morally speaking.)

  24. ^

     Note that our present point is not the (widely accepted) claim that a given amount of money donated to an aid agency targeting the global poor will tend to do more overall good than the same amount of money donated to an agency targeting the worst off people in your own affluent country. (This study has some relevant figures.)

  25. ^

     Though if you are co-citizens with homeless people who are not much better off than the worst-off aidable global poor, then this will complicate matters.


(My only understanding of contractualism comes from this post, The Good Place, and the SEP article. Apologies for any misunderstandings)

tl;dr: I think contractualism will lead to pretty radically different answers than AMF. So I dispute the "if contractualism, then AMF" conditional. Further, I think the results it gives are so implausible that we should be willing to reject contractualism as it applies to the impartial allocation of limited resources. I'm interested in responses to both claims, but I'm happy to see replies that just address one or the other.

Suppose there's a rare disease that would kill Mary rather painfully with probability ~=1. Suppose further that we estimate that it takes ~1 billion dollars to cure her. It seems that under contractualism, every American (population ~330 million) is obligated to chip in 3 dollars to save Mary's life. It is after all implausible that a tax increase of 3 dollars per year has nearly as much moral claim to wrongness as someone dying painfully, even under the much more relaxed versions that you propose. [1]

Without opining on whether contractualism makes sense in its own lane[2], I personally think the above is a reductio ad absurdum of contractualism as applied to the rational impartial allocation of limited resources, namely that it elevates a cognitive bias (the identifiable victim effect) to a core moral principle. But perhaps other people think privileging Mary's life over every American having 3 dollars (the equivalent on the margin to 330 million used books or 330 million ice cream cones) is defensible or even morally obligatory. Well it so happens that 3 dollars is close to the price of an antimalarial bednet. My guess is that contractualism, even under the more relaxed versions, will have trouble coming up with why preventing some number of people (remember, contractualism doesn't do aggregations!) from having a ~50% chance of getting malaria and a ~0.3% chance of dying is morally preferable to preventing someone from dying with probability ~1. This despite the insecticidal bednets potentially saving tens or even hundreds of thousands of lives in expectation! 
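The contrast between the two decision rules in this example can be sketched as follows (the figures are the comment's illustrative ones, not real cost-effectiveness estimates; the variable names are ours):

```python
# Mary: an identified person who dies with probability ~1 without the $1B cure.
p_mary = 1.0
# Bednet recipients: each faces an assumed ~0.3% chance of death averted.
p_bednet = 0.003
n_recipients = 330_000_000

# Aggregative (EV-maximizing) rule: sum expected lives saved across people.
ev_mary = p_mary * 1
ev_bednets = p_bednet * n_recipients   # ~990,000 lives in expectation

# Pairwise (contractualist) rule: compare the strongest individual claims.
# No single recipient's 0.3% claim outweighs Mary's near-certain death.
contractualist_saves_mary = p_mary > p_bednet
ev_favors_bednets = ev_bednets > ev_mary
```

The two rules come apart exactly as the comment describes: aggregation favors the nets by five orders of magnitude in expected lives, while pairwise comparison favors Mary.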

But I guess one person's modus tollens is another's modus ponens. What I consider to be a rejection of contractualism can also logically be interpreted by others as a rejection of AMF, in favor of much more expensive interventions that can save someone's life with probability closer to 1. (And in practice, I wouldn't be surprised if the actual price to save a life for someone operating under this theory is more like a million dollars than a billion). So I would guess that people who believe in contractualism as applied to charities will end up making pretty radically different choices than the current EA set of options.

EDIT: 2023/10/15: I see that Jakub has already made the same point I had before I commented, just more abstract and philosophical. 

  1. ^

    A related issue with this form of contractualism is its demandingness, which taken literally seems more demanding than even naive act utilitarianism. Act utilitarianism is often criticized for its demandingness, but utilitarianism at least permits people to have simple pleasures while others suffer (as long as the simple pleasures are cheap to come by).

  2. ^

    As you and SEP both note, contractualism is only supposed to describe a subset of morality, not all of it. 

The problem (often called the "statistical lives problem") is even more severe: ex ante contractualism does not only prioritize identified people when the alternative is to potentially save very many people, or many people in expectation; the same goes when the alternative is to save many people for sure as long as it is unclear who of a sufficiently large population will be saved. For each individual, it is then still unlikely that they will be saved, resulting in diminished ex ante claims that are outweighed by the undiminished ex ante claim of the identified person. And that, I agree, is absurd indeed. 

Here is a thought experiment for illustration: There are two missiles circulating earth. If not stopped, one missile is certain to kill Bob (who is alone on a large field) and nobody else. The other missile is going to kill 1000 people; but it could be any 1000 of the X people living in large cities. We can only shoot down one of the two missiles. Which one should we shoot down?

Ex ante contractualism implies that we should shoot down the missile that would kill Bob since he has an undiscounted claim while the X people in large cities all have strongly diminished claims due to the small probability that they would be killed by the missile. But obviously (I'd say) we should shoot down the missile that would kill 1000 people. (Note that we could change the case so that not 1000 but e.g. 1 billion people would be killed by the one missile.)

Or many people in expectation; the same goes when the alternative is to save many people for sure as long as it is unclear who of a sufficiently large population will be saved

Yep you're right. And importantly, this isn't a far-off hypothetical: as Jaime alludes to, under most reasonable statistical assumptions AMF will almost certainly save a great number of lives with probability close to 1, not just save many lives in expectation. The only problem is that you don't know for sure who those people are, ex ante.

Yes indeed! When it comes to assessing the plausibility of moral theories, I generally prefer to make "all else equal" to avoid potentially distorting factors, but the AMF example comes close to being a perfect real-world example of (what I consider to be) the more severe version of the problem. 

Note that the AMF example does not quite work, because if each net has a 0.3% chance of preventing death, and all are independent, then with 330M nets you are >99% sure of saving at least ~988k people.
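A quick check of that figure under a normal approximation to the binomial (the 0.3% per-net probability and independence are the comment's assumptions):

```python
from statistics import NormalDist
import math

n = 330_000_000   # nets
p = 0.003         # assumed independent 0.3% chance each net averts a death

mean = n * p                      # ~990,000 expected deaths averted
sd = math.sqrt(n * p * (1 - p))   # ~993.5 (binomial standard deviation)

# Number of deaths averted that is exceeded with 99% probability,
# under the normal approximation to the binomial:
threshold = mean + sd * NormalDist().inv_cdf(0.01)  # ~987,700
```

The 1st-percentile outcome lands just below 988,000, matching the parent comment's ">99% sure of saving at least ~988k people."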

Contractualism doesn't allow aggregation across individuals. If each person has a 0.3% chance of averting death with a net, then any one of those individuals' claims is still less strong than the claim of the person who will die with probability ~=1. Scanlon's theory then says save the one person.

Yeah, Scanlon's theory doesn't allow for differentiation even between a strong claim and many only slightly worse claims. The authors of this post try to rescue the theory with the small relaxation that you can treat high probabilities and numbers of morally almost-as-bad things as worse than 1 very bad and certain thing.

We find it plausible that if Scanlon’s argument for saving the greater number succeeds, then, for some n, you ought to save the n. Here’s our thinking: First, imagine a version of Death/Paraplegia in which n = 1. In this case, you ought to save Nora outright. Now imagine a version in which n = 2. In this case, if you were to save Nora, then, plausibly, the additional person on the side of the many has a complaint, for you would thereby treat the case as though the additional person weren't even there. (Recall that we’re supposing that Scanlon’s argument described above succeeds.) So, you ought to do something appropriately responsive to the additional person’s presence. Perhaps this will take the form of flipping a coin to determine whether to save Nora or the many; perhaps it will take the form of running a lottery heavily weighted on Nora’s side—the details won’t matter here. What matters is that whatever such an act would be, it would presumably be “closer” to saving the n than what it was permissible to do when n = 1 (namely, saving Nora outright).

But now imagine iterating this process over and over, increasing the size of n by 1 each time. Eventually, we think, you’ll get to a point where outright saving the n is the only acceptable thing to do. This suggests that Scanlonian contractualism can accommodate some aggregation of lesser bads, at least if Scanlon’s argument for saving the greater number is successful.[emphasis mine]

But while I could imagine it going through for preventing 2 people from dying with 80% probability vs 1 person with 100%, I don't think it goes through for ice cream, or AMF. A system that doesn't natively do aggregation has a lot of trouble explaining why many people each with a 0.3% chance of counterfactually dying have as much or more moral claim to your resources as a single identified person with ~100% chance of counterfactually dying.

(As a side note, I try to ground my hypotheticals in questions that readers are likely to have first-hand familiarity with, or can easily visualize themselves in that position. Either very few or literally no one in this forum has experience with obscenely high numbers of dust specks, or missile high command. Many people in this conversation have experience with donating to AMF, and/or eating ice cream). 

One way you could do this is by defining what kinds of claims would be "relevant" to one another and aggregatable. If X is relevant to Y, then enough instances of X (or any other relevant claims) can outweigh Y. Deaths are relevant to other deaths, and we could (although need not) say that should hold no matter the probability. So multiple 0.3 percentage point differences in the probability of death can be aggregated and outweigh a 100 percentage point difference.

Some serious debilitating conditions could be relevant to death too, even if less severe.

On the other hand, ice cream is never relevant to death, so there's no trade off between them. Headaches (a common example) wouldn't be relevant to death, either.

I think this is the idea behind one approach to limited aggregation, specifically Voorhoeve, 2014 (https://doi.org/10.1086/677022).

But this seems kind of wrong as stated, or at least it needs more nuance.

There's a kind of sequence argument to worry about here, of increasingly strong claims. Is ice cream relevant to 1 extra second of life lost for an individual? Yes. If ice cream is relevant to n extra seconds of life lost for an individual, it seems unlikely 1 more second on top for the individual will make a difference to its relevance. So by induction, ice cream should be relevant to any number of extra seconds of life lost to an individual.

However, the inductive step could fail (with high probability). Where it could fail seems kind of arbitrary, but we could just have moral uncertainty about that.

Also, there are nonarbitrary (but uncertain) places it could fail for this specific sequence. Some people have important life goals that are basically binary, e.g. getting married. Losing enough years of life will prevent those goals from being fulfilled. So, rather than some cutoff on seconds of life lost or death itself, it could be such preferences that give us cutoffs.

Still, preference strength plausibly comes in many different degrees, and many preferences are themselves satisfiable to many different degrees, so we could make another sequence argument over preference strengths or differences in degree of satisfaction.

Yeah I feel that sometimes theories get really convoluted and ad hoc in an attempt to avoid unpalatable conclusions. This seems to be one of those times.

I can give Scanlon a free pass when he says under his theory we should save two people from certain death rather than one person from certain death because the 'additional' person would have some sort of complaint. However when the authors of this post say, for a similar reason, that the theory implies it's better to do an intervention that will save two people with probability 90% rather than one person with probability 100%, I just think they're undermining the theory. 

The logic is that the 'additional' person in the pair has a complaint because you're acting as if they aren't there. But you aren't acting as if they aren't there - you're noticing they have a lesser claim than the single individual and so are (perhaps quite reluctantly) accommodating the single individual's larger claim. Which is kind of the whole point of the theory!

As a fairly unimportant side note, I was imagining that some nets have a 0.3% chance of saving some (unusually vulnerable) people, but the average probability (and certainly the marginal probability) is a lot lower. Otherwise $1B to AMF could save ~1M lives, which is significantly more optimistic than the best GiveWell estimates.
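The rough arithmetic behind that side note can be sketched as follows. The $3-per-net cost is my assumption for illustration only, not a figure from the thread or from AMF:

```python
# Back-of-the-envelope check: if the AVERAGE net had a 0.3% chance of
# saving a life, $1B to AMF would imply roughly a million lives saved.
# The cost per net is an assumed round number for illustration.
budget = 1e9          # $1B donated
cost_per_net = 3      # assumed dollars per net (illustrative)
p_save = 0.003        # hypothetical 0.3% chance each net saves a life
nets = budget / cost_per_net
lives = nets * p_save
print(round(lives))   # ~1,000,000, far above GiveWell-style estimates
```

Since realistic estimates are far below a million lives per $1B, the average (and marginal) per-net probability must be much lower than 0.3%.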

Thanks for all the productive discussion, everyone. A few thoughts.

First, the point of this post is to make a case for the conditional, not for contractualism. So, I'm more worried about "contractualism won't get you AMF" than I am about "contractualism is false." I assumed that most readers would be skeptical of this particular moral theory. The goal here isn't to say, "If contractualism, then AMF---so 100% of resources should go to AMF." Instead, it's to say, "If contractualism, then AMF---so if you put any credence behind views of this kind at all, then it probably isn't the case that 100% of resources should go to x-risk."

Second, on "contractualism won't get you AMF," thanks to Michael for making the move I'd have suggested re: relevance. Another option is to think in terms of either nonideal theory or moral uncertainty, depending on your preferences. Instead of asking, "Of all possible actions, which does contractualism favor?" we can ask: "Of the actual options that a philanthropist takes seriously, which does contractualism favor?" It may turn out that, for whatever reason, only high-EV options are in the set of actual options that the philanthropist takes seriously, in which case it doesn't matter whether a given version of contractualism would have selected all those options to begin with. Then, the question is whether they're uncertain enough to allow other moral considerations to affect their choice from among the pre-set alternatives.

Finally, on the statistical lives problem for contractualism, I'm mostly inclined to shrug off this issue as bad but not a dealbreaker. This is basically for a meta-theoretic reason. I think of moral theories as attempts to systematize our considered judgments in ways that make them seem principled. Unfortunately, our considered judgments conflict quite deeply. Some people's response to this is to lean into the process of reflective equilibrium, giving up either principles or judgments in the quest for perfect consistency. My own experience of doing this is that the push for *more* consistency is usually good, whereas the push for *perfect* consistency almost always means that people endorse theories with implications that I find horrifying *that they come to believe are not horrifying,* as they follow from a beautifully consistent theory. I just can't get myself to believe moral theories that are that revisionary. (I'm reporting here, not arguing.) So, I prefer relying on a range of moral theories, acknowledging the problems with each one, and doing my best to find courses of action that are robustly supported across them. In my view, EAC is based on the compelling thought that we ought to protect the known-to-be-most vulnerable, even at the cost of harm to the group. In light of this, what makes identified lives special is just that we can tell who the vulnerable are. So sure, I feel the force of the thought experiments that people offer to motivate the statistical lives problem; sure, I'm strongly inclined to want to save more lives in those cases. But I'm not confident enough to rule out EAC entirely. So, EAC stays in the toolbox as one more resource for moral deliberation.

I'm far from an expert on contractualism, but iirc it's standardly presented as a theory of just one part of morality, which Scanlon characterizes as "what we owe to each other". Do many regard it as a serious contender for what we all things considered ought to do? (The exclusion of animal interests, for example, would seem to make this implausible. But the implicit disregard for overall value also strikes me as entirely disqualifying. If I became convinced that contractualism were the true account of "morality", I would probably also become an amoralist of a sort, because other things just strike me as vastly more objectively important than "what we owe to each other".)

Edit: just saw footnote 4 (initially hidden) relates to this point. You say, "If we owe it to others to favor GHW-like interventions over x-risk interventions, then that’s enough for practical purposes." I guess I'm questioning that. Surely what's practically relevant is what we all things considered ought to do.

Fair enough re: the view that contractualism is just one part of morality. I suppose that the contractualist has two obvious maneuvers here. One of them is to reject this assumption and take what we owe one another to be all of morality. Another is to say that what we owe one another is sensitive to the rest of morality and, for that reason, it's appropriate to have what we owe one another trump other moral considerations in our practical deliberations. Either way, if we owe it to the global poor to prioritize their interests, it's what we ought to do all things considered.

FWIW, given my own uncertainties about normative theory, I care more about the titular conditional (If contractualism, then AMF) than anything else here.

Hey Bob, I'm currently working on a paper about a similar issue, so this has been quite interesting to read! (I'm discussing the implications of limited aggregation more generally, but as you note, contractualism's distinctive implications stem primarily from its (partially) non-aggregative nature.) While I mostly agree with your claims about the implications of the ex ante view, I disagree with your claim that this is the most plausible version of contractualism. In fact, I think that the ex ante view is clearly wrong and we should not be much concerned with what it implies.

First, briefly to the application part. I think you are right that, given the ex ante view, we should not focus on mitigating x-risks, and that we should rather perform global health interventions. However, as you note, there is usually a very large group of potential beneficiaries when it comes to global health interventions, so that the probability for each individual to be benefited is quite small, resulting in heavily diminished ex ante claims. I wonder, therefore, if we shouldn't, on the ex ante view, rather spend our resources on (relatively needy) people we know or people in small communities. Even if these people would benefit from our resources 100+ times less than the global poor, this could well be overcompensated by the much higher probabilities for each of these individuals to actually be benefited. 

But again, I think the ex ante view is clearly false anyway. The easiest way to see this is that the view implies that we should prioritize one identified person over any number of "statistical" people. That is: on the ex ante view, we should save a given person for sure rather than (definitely!) save one million people if these are randomly chosen from a sufficiently large population. In fact, there are even worse implications (the identified person could merely lose a finger and not her life if we don't help), but I think this implication is already bad enough to confidently reject the view. I don't know of anybody who is willing to accept that implication. The typical (if not universal?) reaction of advocates of the ex ante view is to go pluralist and claim that the verdicts of the ex ante view only correspond to one of several pro tanto reasons. As far as I know, no such view has actually been developed and I think any such view would be highly implausible as well; but even if it succeeded, its implications would be much more moderate: all we'd learn is that there is one of several pro tanto reasons that favour acting in (presumably) some short-term way. This could be well compatible with classic long-term interventions being overall most choiceworthy / obligatory.

I'm sure that I'm not telling you much, if anything, new here, so I wonder what you think of these arguments? 

I wonder if the Greater Burden Principle over ex ante interests tells you not to do broad exploratory research into interventions and causes or even much or any research at all, because any such research is very unlikely to benefit any particular individual. Instead, you should just pick from one of the interventions you already know rather than spread the ex ante benefits more thinly by investigating more options. Any time you expand the set of interventions under consideration, those who'd benefit ex ante in the original set lose substantially ex ante in the expanded set because they're now less likely to be targeted at all, while those added only stand to gain a little ex ante, because whatever intervention is chosen is unlikely to help them.

To make it even more concrete, consider helping A with 100% probability. Now, you consider the possibilities of helping B or C, and you're very unsure now about which of A, B or C you'll help after you investigate further, so now assign each a 1/3 chance of being helped. A loses a ~67% chance of being helped, which is larger than the ~33% chance each of B and C gain. So, you shouldn't even start to consider helping B and C instead of A.

However, if you did it one at a time, i.e. first consider B, going from 100% A to 50% A and 50% B, this would be permissible. And then going from 50% A and 50% B to 33% to each of A, B and C is also permissible (and required, because C gains 33% compared to the loss of 17% to each of A and B).
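The arithmetic in the example above can be sketched as a toy "greatest pairwise complaint" check. The permissibility rule and the names are my illustrative assumptions, not a canonical formulation of contractualism:

```python
# Toy ex ante comparison: a change in who you might help is modeled as a
# move between probability assignments, and (on the assumed rule) it is
# permissible iff no one's ex ante loss exceeds the largest ex ante gain
# to some other individual.

def greatest_loss(before, after):
    """Largest drop in any individual's chance of being helped."""
    people = set(before) | set(after)
    return max(before.get(p, 0) - after.get(p, 0) for p in people)

def greatest_gain(before, after):
    """Largest rise in any individual's chance of being helped."""
    people = set(before) | set(after)
    return max(after.get(p, 0) - before.get(p, 0) for p in people)

def permissible(before, after):
    return greatest_loss(before, after) <= greatest_gain(before, after)

only_a = {"A": 1.0}
a_b    = {"A": 0.5, "B": 0.5}
a_b_c  = {"A": 1/3, "B": 1/3, "C": 1/3}

print(permissible(only_a, a_b_c))  # False: A loses ~0.67 vs gains of ~0.33
print(permissible(only_a, a_b))    # True: A's 0.5 loss matches B's 0.5 gain
print(permissible(a_b, a_b_c))     # True: C gains ~0.33 vs losses of ~0.17
```

On this toy rule, the one-step move to considering B and C together is blocked, while the same endpoint is reachable in two permissible steps, which is exactly the path-dependence worry.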

Is this a strawman? Or maybe contractualists make more space for deliberation by recognizing other reasons, like you suggest.

Thanks so much for this, Jakob. Really great questions. On the application part, let me first quote something I wrote to MSJ below:

I was holding the standard EA interventions fixed, but I agree that, given contractualism, there's a case to be made for other priorities. Minimally, we'd need to evaluate our opportunities in these and similar areas. It would be a bit surprising if EA had landed on the ideal portfolio for an aim it hasn't had in mind: namely, minimizing relevant strength-weighted complaints. 

That being said, a lot depends here on the factors that influence claim strength. Averting even a relatively low probability of death can trump lots of other possible benefits. And cost matters for claim strength too: all else equal, people have weaker claims to large amounts of our resources than they do to small amounts. So, yes, it could definitely work out that, given contractualism, EA has the wrong priorities even within the global health space, but insofar as some popular interventions are focused on inexpensive ways of saving lives, we've got at least a few considerations that strongly support those interventions. Still, we can't really know unless we run the numbers.

Re: the statistical lives problem for the ex ante view, I have a few things to say---which, to be clear, don't amount to a direct reply of the form, "Here's why the view doesn't face the problem." First, every view has horrible problems. When it comes to moral theory, we're in a "pick your poison" situation. There are certainly some views I'm willing to write off as "clearly false," but I wouldn't say that of most versions of contractualism. In general, my approach to applied ethics is to say, "Moral theory is brutally hard and often the best we can do is try to assess whether we end up in roughly the same spot practically regardless of where we start theoretically." Second, and in the same spirit, my main goal here is to complement Emma Curran's work: she's already defended the same conclusion for the ex post version of the view. So, it's progress enough to show that, whichever way you go, you get something other than prioritizing x-risk. Third, the ex ante view doesn't imply that we should prioritize one identified person over any number of "statistical" people unless all else is equal---and all else often isn't equal. I grant that there are going to be lots of cases where identified lives trump statistical lives, but for the kinds of reasons I mentioned when thinking about your great application question, we still need to sort out the details re: claim strength.

Really appreciate the very helpful engagement!

Thanks for your helpful reply! I'm very sympathetic to your view on moral theory and applied ethics: most (if not all) moral theories face severe problems, and that is not generally sufficient reason to not consider them when doing applied ethics. However, I think the ex ante view is one of those views that don't deserve more than negligible weight - which is where we seem to have different judgments. Even when taking into consideration that alternative views have their own problems, the statistical lives problem seems to be as close to a "knock-down argument" as it gets. You are right that there are possible circumstances in which the ex ante view would not prioritize identified people over any number of "statistical" people, and these circumstances might even be common. But the fact remains that there are also possible circumstances in which the ex ante view does prioritize one identified person over any number of "statistical" people - and at least to me this is just "clearly wrong". I would be less confident if I knew of advocates of the ex ante view who remain steadfast in light of this problem; but no one seems to be willing to bite this bullet.

After pushing so much that we should reject the ex ante view, I feel like I should stress that I really appreciate this type of research. I think we should consider the implications of a wide range of possible moral theories, and excluding certain moral theories from this is a risky move. In fact, I think that an ideal analysis under moral uncertainty should include ex ante contractualism, only I'm afraid that people tend to give too much weight to its implications and that this is worse than (for now) not considering it at all.

I should also at least mention that I think that the more plausible versions of limiting aggregation under risk are well compatible with classic long-term interventions such as x-risk mitigation. (I agree that the "ex post" view that Emma Curran discusses is not well compatible with x-risk mitigation either, but I think that this view is not much better than the ex ante view and that there are other views that are more plausible than both.) Tomi Francis from GPI has an unpublished paper that reaches similar results. I guess this is not the right place to go into any detail about this, but I think it is even initially plausible that small probabilities of much better future lives ground claims that are more significant than claims that are usually considered irrelevant, such as claims based on the enjoyment of watching part of a football match or on the suffering of mild headaches.

Very interesting, Jakob! I'll have to contact Tomi to get his draft. Thanks for the heads up about this work. And, of course, I'll be curious to see what you're working on when you're able to share!

Thanks for your interest! I will let you know when my paper is ready/readable. Maybe I'm also going to write a forum post about it.

(None of this may be news to you, either, but potentially of interest to other readers.)

Furthermore, ex ante views will tend to be dynamically inconsistent. For example, you have a lottery where you pick one person to be sacrificed for the benefit of the many, and this looks permissible to everyone ex ante, but once we find out who will be sacrificed, it's no longer permissible. And it wouldn't be permissible no matter who we found out would be sacrificed. This violates the Sure-Thing Principle. That being said, I’m not sure I'd call violating the STP enough to rule out a view or principle, but it should count against the view.

To satisfy the STP, you're also pretty close to maximizing expected utility due to Savage's Theorem and generalizations. But maximizing expected utility with a specifically unbounded utility function, like total welfare, also violates a more general version of the Sure-Thing Principle, because of St Petersburg prospects (infinite expected utility, but finite actual utility in each outcome), e.g. Russell and Isaacs, 2021 https://philarchive.org/rec/RUSINP-2 . It gets worse, because Anteriority (weaker than ex ante Pareto, but generalized to individuals whose existence is uncertain) + Impartiality + Stochastic Dominance are jointly inconsistent due to St Petersburg-like prospects over population sizes, with a few additional modest assumptions (Goodsell, 2021, https://philpapers.org/rec/GOOASP-2 ).

We could idealize and decide as if we had full information, looking for agreement, or just take an ex post view. See Fleurbaey and Voorhoeve, 2013 https://philarchive.org/rec/VOODAY

Yes, that's another problem indeed - thanks for the addition! Johann Frick ("Contractualism and Social Risk") offers a "decomposition test" as a solution on which (roughly) every action of a procedure needs to be justifiable at the time of its performance for the procedure to be justified. But this "stage-wise ex ante contractualism" has its own additional problems.

Thanks for sharing! I think Frick's approach looks pretty promising, although either with limited/partial aggregation or, as he does, recognizing that this isn't the full picture, and we can have other reasons to balance, to appropriately handle cases with many statistical lives at stake but low individual risks. What additional problems did you have in mind?

Hmm I can't recall all its problems right now, but for one I think that the view is then not compatible anymore with ex ante Pareto - which I find the most attractive feature of the ex ante view compared to other views that limit aggregation. If it's necessary for justifiability that all the subsequent actions of a procedure are ex ante justifiable, then the initiating action could be in everyone's ex ante interest and still not justified, right?

Moreover, it’s common to assume that efforts to reduce the risk of extinction might reduce it by one basis point—i.e., 1/10,000. So, multiplying through, we are talking about quite low probabilities. Of course, the probability that any particular poor child will die due to malaria may be very low as well, but the probability of making a difference is quite high. So, on a per-individual basis, which is what matters given contractualism, donating to AMF-like interventions looks good.
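The "multiplying through" above can be made concrete with a per-individual claim comparison. The one-basis-point figure comes from the quoted passage; the malaria-side numbers and the claim formula are invented for illustration, not GiveWell estimates:

```python
# Per-individual ex ante comparison sketch. On (this reading of) the
# contractualist view, what matters is each individual's claim, roughly:
# their probability of benefiting times the size of the benefit to them.

def claim_strength(p_benefit, benefit_size):
    return p_benefit * benefit_size

# X-risk: a one-basis-point (1/10,000) reduction in any given person's
# chance of dying in a catastrophe, valued at (say) 50 life-years.
xrisk_claim = claim_strength(p_benefit=1e-4, benefit_size=50)

# AMF-like: a child at an (assumed) 0.5% risk of malaria death, with an
# (assumed) 30% chance that the intervention is what protects them.
amf_claim = claim_strength(p_benefit=0.005 * 0.3, benefit_size=50)

print(amf_claim > xrisk_claim)  # True: the identified individual's
                                # per-person claim is stronger
```

Even with these rough numbers, the AMF-style claim comes out an order of magnitude stronger per individual, which is the contractualist point: no aggregation across the vastly larger x-risk population is allowed to rescue the comparison.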


It seems like a society where everyone took contractualism to heart might have a hard time coordinating on any large moral issues where the difference any one individual makes is small, including non-x-risk ones like climate change or preventing great power war. What does the contractualist position recommend on these issues?

(In climate change, it's plausibly the case that "every little bit helps," while in preventing war between great powers outcomes seem much more discontinuous — not sure if this matters.)

Good question, Eli. I think a lot here depends on keeping the relevant alternatives in view. The question is not whether it's permissible to coordinate climate change mitigation efforts (or what have you). Instead, the question is whether we owe it to anyone to address climate change relative to the alternatives. And when you compare the needs of starving children or those suffering from serious preventable diseases, etc., to those who might be negatively affected by climate change, it becomes a lot more plausible that we don't owe it to anyone to address those things over more pressing needs (assuming we have a good chance of doing something about those needs / moving the needle significantly / etc.). 

Thanks for the reply.

I think "don't work on climate change[1] if it would trade off against helping one currently identifiable person with a strong need" is a really bizarre/undesirable conclusion for a moral theory to come to, since if widely adopted it seems like this would lead to no one being left to work on climate change. The prospective climate change scientists would instead earn-to-give for AMF.

  1. ^

    Or bettering relations between countries to prevent war, or preventing the rise of a totalitarian regime, etc.

I think this argument doesn't quite go through as stated, because AMF doesn't have an infinite funding gap. If everybody on Earth (or even, say, 10% of the richest 10% of people) acted on the version of contractualism that mandated donating significantly to AMF as a way to discharge their moral obligations, we'll be well-past the point where anybody who wants and needs a bednet can have one. 

That said, I think a slightly revised version of your argument can still work. In a contractualist world, people should be willing to give almost unlimited resources to a single identifiable victim rather than work on large-scale moral issues, or have fun. 

So, it may be true that some x-risk-oriented interventions can help us all avoid a premature death due to a global catastrophe; maybe they can help ensure that many future people come into existence. But how strong is any individual's claim to your help to avoid an x-risk or to come into existence? Even if future people matter as much as present people (i.e., even if we assume that totalism is true), the answer is: Not strong at all, as you should discount it by the expected size of the benefit and you don’t aggregate benefits across persons. Since any given future person only has an infinitesimally small chance of coming into existence, they have an infinitesimally weak claim to aid.


There's a Parfit thought experiment:

I go camping and leave a bunch of broken glass bottles in the woods. I realize that someone may step on this glass and hurt themselves, so perhaps I should bury it. I do not bury it. As it turns out, 20 years pass before someone is hurt. In 20 years, a young child steps on the glass and cuts their foot badly.

It seems like the contractualist principle above would say that there's no moral value to burying the glass shard, because for any given individual, the probability that they'll be the one to step on the glass shard is very low[1]. Is that right?

  1. ^

    I think you can sidestep issues with population ethics here by just restricting this to people already alive today (so replace "young child" in the Parfit example with "adult" I guess). Though maybe the pop ethics issues are the crux?

Thanks for your question, Eli. The contractualist can say that it would be callous, uncaring, indecent, or invoke any number of other virtue theoretic notions to explain why you shouldn't leave broken glass bottles in the woods. What they can't say is that, in some situation where (a) there's a tradeoff between some present person's weighty interests and the 20-years-from-now young child's interests and (b) addressing the present person's weighty interests requires leaving the broken glass bottles, the 20-years-from-now young child could reasonably reject a principle that exposed them to risk rather than the present person. Upshot: they can condemn the action in any realistic scenario.   

[On mobile; sorry for the formatting]

Given my quick read and especially the bit below, it seems like the title is at least a bit misleading.

Quote: “To be clear: this document is not a detailed vindication of any particular class of philanthropic interventions. For example, although we think that contractualism supports a sunnier view of helping the global poor than funding x-risk projects, contractualism does not, for all our argument implies, entail that many EA-funded global poverty interventions are morally preferable to all other options (some of which are probably high-risk, high-reward longshots).”

I think a reasonable person would conclude from the title “If Contractualism, Then AMF” essentially the opposite of this more nuanced clarification.

Perhaps it’s reasonable to infer that “Then AMF” really means “then the cluster of beliefs that leads GiveWell to strongly recommend AMF are indeed true (even if ex post it turns out that deworming or something was better)” but even this doesn’t seem to be what you are arguing (given the quote above).

Thanks for this, Aaron. Fair point. A more accurate title would be something like: "If Scanlonian contractualism is true, then between Emma Curran's work on the ex post version of the view and this post's focus on the ex ante version, it's probably true that when we have duties to aid distant strangers, we ought to discharge them by investing in high impact, high confidence interventions like AMF." 

In case we're prioritizing fates worse than death (section 3.2), some other potentially promising interventions off the top of my head could be:

  1. Work against slavery and human trafficking
  2. Work reducing violence that leads to long-term trauma/PTSD, including sexual violence
  3. Rescuing factory farmed animals
    1. Most interventions to improve conditions on factory farms will be too late to help animals alive today, because they usually only live <2 years. If they're rescued, they can live a decade or two. It's still not clear the difference between a very short horrible life and 10 years of decent life as a nonhuman animal beats 50 extra life years for a human (or saving a parent's child) through AMF, though. EDIT: Inspired by Jakob's comment below, the ex ante risk of death by malaria is low, and we'd need to discount by that. On the other side, if we have a rule for picking animals to save, e.g. the ones in the worst health (but still savable) on a farm, then we can keep the ex ante chance of being saved relatively high for some animals. There might be surer ways to save a specific human life, though.
  4. Increasing access to medically assisted suicide
  5. Humane slaughter for animals (but would need quick implementation to help animals alive today)
  6. Increasing access to or better treatments for severe pain and mental health issues, e.g. cluster headaches, PTSD, severe depression
  7. Closing Guantanamo Bay or getting people released from it (people there now are already being tortured or otherwise subject to horrible conditions)
  8. Work to take in more refugees.

On standard contractualist views, nonhuman animals don't count in themselves, so the nonhuman animal interventions plausibly wouldn't be very valuable. But then how old do humans have to be to count, or to have large stakes? Children under 5 don't typically have a full understanding of death. Still, maybe we can explain it to them well enough for them to understand, and we should consider such hypotheticals in deciding their stakes. And losing a child is (typically) a large burden for a parent.

This is helpful, Michael. I was holding the standard EA interventions fixed, but I agree that, given contractualism, there's a case to be made for other priorities. Minimally, we'd need to evaluate our opportunities in these and similar areas. It would be a bit surprising if EA had landed on the ideal portfolio for an aim it hasn't had in mind: namely, minimizing relevant strength-weighted complaints. 

Can you explain how, in practice, one would choose between similar interventions within global health under this lens?

Thanks for the question, John. I'm not sure how much weight to put on "similar" in your question. In general, you'd be looking to minimize the greatest strength-weighted complaint that someone might have. Imagine a simple case where all the individuals in two equally-sized populations you might help are at risk of dying, which means that the core content of the complaint would be the same. Then, we just have the strength-weighting to worry about. The three key parts of that (at least for present purposes) would be the probability of harm, your probability of impact, and the magnitude of the impact you can have. So, we multiply through to figure out who has the strongest claim. In a case like this, intervention prioritization looks very similar to what we already do in EA. However, in cases where the core contents of the complaints are different (death vs. quality of life improvements, say), the probabilities might not end up mattering. Or in cases where your action would have high EV but only because you're aggregating over a very large population where each individual has a very low chance of harm, it could easily work out that, according to EAC, you should accept less EV by benefiting individuals who are exposed to much greater risk of harm. So the core process can sometimes be similar, but with these anti-aggregative (or partially-aggregative) side constraints.
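The "multiply through" procedure described above can be sketched as a toy comparison between ranking interventions by strongest individual claim versus by total expected value. All numbers, names, and the claim formula are invented for illustration:

```python
# Toy comparison: "minimize the greatest strength-weighted complaint"
# vs. ordinary EV maximization. Claim strength is assumed to be
# probability of harm x probability of impact x magnitude of impact.

def claim(p_harm, p_impact, magnitude):
    return p_harm * p_impact * magnitude

# Each intervention: list of (number of people, per-person claim strength).
interventions = {
    # Fewer people, each at high risk with a high chance of being helped.
    "identified": [(1_000, claim(0.9, 0.9, 50))],
    # Many people, each with a tiny chance of harm and of being helped.
    "statistical": [(10_000_000, claim(0.01, 0.01, 50))],
}

def strongest_claim(groups):
    return max(strength for _, strength in groups)

def total_ev(groups):
    return sum(n * strength for n, strength in groups)

best_by_claim = max(interventions, key=lambda k: strongest_claim(interventions[k]))
best_by_ev = max(interventions, key=lambda k: total_ev(interventions[k]))
print(best_by_claim)  # 'identified'
print(best_by_ev)     # 'statistical'
```

With these made-up numbers the two rankings come apart: the per-person claim of the identified group dominates, while aggregation across ten million low-risk individuals hands the EV ranking to the statistical intervention, which is the anti-aggregative constraint at work.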

A relevant GPI paper is Longtermism, aggregation, and catastrophic risk by Emma J. Curran.

I briefly summarised it here, also pasted below:

The bottom line: If one is sceptical about aggregative views, on which sufficiently many small harms can outweigh a smaller number of large harms, one should also be sceptical about longtermism.

My brief summary:

  • Longtermists generally prefer reducing catastrophic risk to saving the lives of people today. This is because, even though focusing on catastrophic risk only reduces the probability of harm by a small amount, the expected vastness of the future means more good is done in expectation.
  • This argument relies on an aggregative view on which we should be driven by sufficiently many small harms outweighing a smaller number of large harms. However, there are cases where we might say such decision-making is impermissible, e.g. letting a man be run over by a train rather than pulling a lever that would save him but make lots of people late for work. One argument for why it’s better to save the man from death is the separateness of persons: there is no actual person who experiences the sum of the individual harms of being late, so there can be no aggregate complaint.
  • The author shows that a range of non-aggregative views (where we are not driven by sufficiently many small harms outweighing fewer large ones), under different treatments of risk, undermine the case for longtermism. These views typically generate extremely weak claims of assistance from future people.

Great to see this out there - a very useful piece of work! 

I actually have another manuscript on an ex-ante/ex-post fairness argument against longterm interventions. Could I send it to you sometime? Would love to hear your thoughts. 

Since any given future person only has an infinitesimally small chance of coming into existence, they have an infinitesimally weak claim to aid.


I think this is confused. Imagine we consider each person as different over time, à la personites, and consider the distribution of possible people I will be next year. There is an incredibly large number of possible changes that could occur which would change my mental state and, depending on what I eat, the physical composition of my body. Does each of these future mes have only an infinitesimal claim, and therefore, according to contractualism, almost no importance compared to any claim that exists before that time? If so, you can only care about the immediate future, and can never prioritize what will affect me in a year over what will affect some other person in 10 minutes.

Hi David. It's probably true that if you accept that picture of persons, then the implications of contractualism are quite counterintuitive. Of course, I suspect that most contractualists reject that picture.

I don't see a coherent view of people that doesn't have some version of this. My firstborn child was not a specific person until he was conceived, even when I was planning with my wife to have a child. As a child, who he is and who he will be are still very much being developed over time. But who I will be in 20 years is also still very much being determined, and I hope people reason about their contractualist obligations in ways that are consistent with considering that people change over time in ways that aren't fully predictable in advance.

More to the point, the number of possible mes in 20 years, however many there are, should collapse to having a value exactly equal to me - possibly discounted into the future. Why is the same not true of future people, where each of the many different possible people has almost zero claim, and it doesn't get aggregated at all?

Hi David. There are two ways of talking about personal identity over time. There's the ordinary way, where we're talking about something like sameness of personality traits, beliefs, preferences, etc. over time. Then, there's the "numerical identity" way, where we're talking about just being the same thing over time (i.e., one and the same object). It sounds to me like either (a) you're running these two things together or (b) you have a view where the relevant kinds of changes in personality traits, beliefs, preferences, etc. result in a different thing existing (one of many possible future Davids). If the former, then I'll just say that I meant only to be talking about the "numerical identity" sense of sameness over time, so we don't get the problem you're describing in the intra-individual case. If the latter, then that's a pretty big philosophical dispute that we're unlikely to resolve in a comment thread!

I don't necessarily care about the concept of personal identity over time, but I think there's a very strong decision-making foundation for considering uncertainty about future states. In one framing, I buy insurance because in some future states it is very valuable, and in others it is not. I am effectively transferring money from one future version of myself to another. That's sticking with a numerical identity view of my self, but it's critical to consider different futures despite not having a complex view of what makes me "the same person".

But I think that if you embrace the view you present as obvious for contractualists, where we view future people fundamentally differently than present people and do not allow consideration of different potential futures, you end up with some very confused notions about how to plan under uncertainty, and you can never prioritize investments that pay off primarily in even the intermediate-term future. For example, on this view we should ignore mitigating emissions for climate change, because we can do more good for current people by remedying harms rather than preventing them; we should emit more and ignore the fact that this will, with certainty, make the future worse, because those future people don't have much of a moral claim. And from a consequentialist viewpoint, which I think is relevant even if we're not accepting it as a guiding moral principle, we'd all be much, much worse off if this sort of reasoning had been embraced in the past.

It seems like even the AMF vs. global catastrophic risk comparison on an ex ante greater burden principle will depend on how much we're funding them, how we individuate acts, and the specifics of the risks involved. To summarize: if you invest enough in global catastrophic risk mitigation, you might be able to reduce the maximum risk of very early death for at least one individual by more than if you gave the same amount to AMF, because malaria mortality rates are <1% per year where AMF works (GiveWell's sheet), but extinction risk within the next few decades could be higher than that (mostly due to AI) and reducible in absolute terms by more than 1 percentage point with enough funding. On the other hand, some people may be identified as immunocompromised and so stand to gain much more than a 1 percentage point reduction in mortality risk from GiveWell recommendations.

I illustrate in more detail with the rest of this comment, but feel free to skip if this is already clear enough.


Where AMF works, the annual mortality rate from malaria is typically under 0.3% (see GiveWell's sheet) and the nets last about two years (again GiveWell), so we get a maximum of around 0.6% average risk reduction per distribution of bednets (and malaria medicine, from Malaria Consortium, say). Now, maybe there are people who are particularly prone to death if they catch malaria and are identifiable as such, e.g. the identified immunocompromised. How high can the maximum ex ante risk be across individuals? I don't know, but this could matter. Let's say it's 1%. I think it could be much higher, but let's go with that to illustrate here first.
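Spelling out the arithmetic above (the mortality and net-lifespan figures are as quoted from GiveWell; the 1% maximum individual risk is the stipulation just made, not a measured value):

```python
# Back-of-the-envelope from the paragraph above; figures as quoted there.
annual_malaria_mortality = 0.003  # under 0.3%/year where AMF works (GiveWell)
net_lifespan_years = 2            # bednets last about two years (GiveWell)

# Upper bound on average risk reduction per distribution, assuming the net
# eliminates the entire malaria mortality risk while it lasts.
max_avg_risk_reduction = annual_malaria_mortality * net_lifespan_years
print(max_avg_risk_reduction)  # 0.006, i.e. ~0.6%

# Stipulated maximum ex ante risk for an identified immunocompromised person.
max_individual_risk = 0.01  # 1%, to illustrate
```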

With up to $100,000 donated to AMF and Malaria Consortium, suppose we can then practically eliminate one such person's risk of death, dropping it from 1% to around 0% (if we know where AMF and MC will work with that extra funding). On the other hand, it seems hard to see how only $100,000 targeted at catastrophic risks could reduce anyone's risk of death by 1 percentage point. That would fund at most something like two people working full-time for a year, and probably fewer than one at most organizations working on x-risk, given current salaries. That will be true separately for the next $100,000, and the next, and the next, and so on, probably up to at least the endowment of Open Phil.

However, what about all of what Open Phil is granting to GiveWell, $100 million/year (source), all together, rather than $100K at a time? That still, by assumption, only gives a 1 percentage point reduction in mortality across the beneficiaries of GiveWell recommendations, if malaria mortality rates are representative (it seems it could be somewhat higher for Helen Keller International and for New Incentives, if we account for individuals at increased personal risk for those, too, and that covers the rest of GiveWell's top charities). Can we reduce global catastrophic risks by more than 1 percentage point with $100 million? What about $100 million/year over multiple years? I think many concerned with AI risk would say yes. And it might even be better for those who would otherwise receive bednets to protect them from malaria.

Now, malaria incidence can be as high as around 300 cases per 1,000 people in a given year in some places where AMF works (Our World in Data). If the identified immunocompromised have a 50% chance of dying from malaria if they catch it, then a naive[1] risk reduction estimate could be something like 15 percentage points. It seems hard to reduce extinction risk, or anyone's risk of death from a global catastrophe, by that much in absolute terms (percentage points). For one, you'd need to believe the risk is at least 15%. And those with high risk estimates (>85%) from AI tend to be pessimistic about our ability to reduce it much. I'd guess only a minority of those working on x-risk believe we can reduce it by 15 percentage points with all of Open Phil's endowment. You have to be in a sweet spot of "there's a good chance this won't go well by default, but ~most of that is avertable".
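The naive estimate multiplied out (the incidence figure is the Our World in Data upper end quoted above; the 50% case-fatality figure is the assumption made in the paragraph, and footnote [1] gives the caveats):

```python
# Naive estimate from the paragraph above; see footnote [1] for caveats.
incidence = 0.30           # up to ~300 cases per 1,000 people per year (OWID)
p_death_if_infected = 0.5  # assumed for an identified immunocompromised person

naive_risk_reduction = incidence * p_death_if_infected
print(naive_risk_reduction)  # 0.15, i.e. ~15 percentage points
```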

And, on the other hand, AI pause work in particular could mean some people will definitely die who would otherwise have had a chance of survival and a very long life through AI-aided R&D on diseases, aging and mind uploading.

  1. ^

    One might expect the immunocompromised to be extra careful and buy bednets themselves or have bednets bought for them by their family. Also, some of those 300 cases per 1000 could be multiple cases in the same person in a year.

This is the right place to press, Michael. These are exactly the probabilities that matter. Because I tend to be pretty pessimistic about our ability to reduce AI risk, I tend to think the numbers are going to break in favor of AMF. And on top of that, if you're worried that x-risk mitigation work might sometimes increase x-risk, even a mild level of risk aversion will probably skew things toward AMF more strongly. But it's important to bring these things out. Thanks for flagging.

(EDIT: It looks like the section 3.2. S-risk Interventions? is somewhat relevant, but I think the probabilities here for people living very long voluntarily aren't as small as those for people alive today being subject to extended torture, except for those already being tortured.)

I wonder if the possibility of people living extremely long lives, e.g. thousands of years via anti-aging tech or mind uploading, would change the conclusions here, by dramatically increasing the ex ante person-affecting stakes, assuming each person's welfare aggregates over time. Now, it's possible that AMF beneficiaries will end up benefitting from this tech and saving them increases their chances of this happening, so in fact the number of person-affecting years of life saved in expectation by AMF could be much larger. However, it's not obvious this beats x-risk work, especially aligning AI, which could help with R&D. Also, instead of either, there's direct work on longevity or mind uploading, or even accelerating AI (which would increase x-risk) to use AI for R&D to save more people alive now from death by aging.

See also:

  1. Gustafsson, J. E., & Kosonen, P. (20??). Prudential Longtermism.
  2. Carl Shulman. (2019). Person-affecting views may be dominated by possibilities of large future populations of necessary people.
  3. Matthew Barnett. (2023). The possibility of an indefinite AI pause, section The opportunity cost of delayed technological progress.
  4. Chad I. Jones. (2023). The A.I. Dilemma: Growth versus Existential Risk. (talk, slides).

I think most people discount their future welfare substantially, though (perhaps other than for meeting some important life goals, like getting married and raising children), so living so much longer may not be that valuable according to their current preferences. To dramatically increase the ex ante stakes, one of the following should hold:

  1. We need to not use their own preferences and say their stakes are higher than they would recognize them to be, which may seem paternalistic and will fail to respect their current preferences in other ways.
  2. The vast majority of the benefit comes from the (possibly small and/or atypical) subset of people who don't discount their future welfare much, which gets into objections on the basis of utility monsters, inequity and elitism (maybe only the relatively wealthy/educated have very low discount rates). Or, maybe these interpersonal utility comparisons aren't valid in the first place. It's not clear what would ground them.

Also, following up on 2, depending on how we make interpersonal utility comparisons, rather than focusing on those with low personal time discount rates, those with the largest preference-based stakes could be utilitarians, especially those with the widest moral circles, or people with fanatical views or absolutist deontological views.
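To make the discounting point concrete, here is a toy calculation. The 5% annual rate and the unit welfare-per-year are assumptions for illustration only, not figures from the thread.

```python
# Toy illustration: with a constant personal discount rate, even an
# astronomically long life adds little to present discounted value.

def discounted_life_value(years, rate, welfare_per_year=1.0):
    """Sum of discounted welfare over a life of the given length in years."""
    return sum(welfare_per_year * (1 - rate) ** t for t in range(years))

rate = 0.05  # assumed 5%/year discount on one's own future welfare
print(discounted_life_value(50, rate))      # ~18.5 discounted welfare-years
print(discounted_life_value(10_000, rate))  # ~20.0, barely more
```

On these assumptions, a 10,000-year life is worth only about 8% more to the person now than a 50-year one, which is why the ex ante stakes don't balloon unless personal discount rates are very low.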

Thanks for this, Michael. You're right that if people could be kept alive a lot longer (and, perhaps, made to suffer more intensely than they once could as well), this could change the stakes. It will then come down to the probability you assign to a malicious AI's inflicting this situation on people. If you thought it was likely enough (and I'm unsure what that threshold is), it could just straightforwardly follow that s-risk work beats all else. And perhaps there are folks in the community who think the likelihood is sufficiently high. If so, then what we've drafted here certainly shouldn't sway them away from focusing on s-risk.

Oh, sorry if I was unclear. I didn't have in mind torture scenarios here (although that's a possibility), just people living very long voluntarily and to their own benefit. So rather than AMF saving like 50 years of valuable life in expectation per life saved, it could save thousands or millions or more. And other work may increase some individual's life expectancy even more.

I think it’s not too unlikely that we'll cure aging or solve mind uploading in our lifetimes, especially if we get superintelligence.

I just read the summary but I want to disagree with:

Contractualism says: When your actions could benefit both an individual and a group, don't compare the individual's claim to aid to the group's claim to aid, which assumes that you can aggregate claims across individuals. Instead, compare an individual's claim to aid to the claim of every other relevant individual in the situation by pairwise comparison. If one individual's claim to aid is a lot stronger than any other's, then you should help them.

"Contractualism" is a broad family of theories, many of which don't entail this. (Indeed, some are equivalent to classical utilitarianism.) (And in particular, Scanlonian contractualism or (C1) don't entail this.)

Fair point about it being a broad family of theories, Zach. What's the claim that you take Scanlonian contractualism not to entail? The bit about not comparing the individual's claim to aid to the group's? Or the bit about who you should help?

Both. As you note, Scanlonian contractualism is about reasonable-rejection.

(Personally, I think it's kinda appealing to consider contractualism for deriving principles, e.g. via rational-rejection or more concretely via veil-of-ignorance. I'm much less compelled by thinking in terms of claims-to-aid. I kinda assert that deriving-principles is much more central to contractualism; I notice that https://plato.stanford.edu/entries/contractualism/ doesn't use "claim," "aid," or "assistance" in the relevant sense, but does use "principle.")

(Probably not going to engage more on this.)

Ah, I see. Yeah, we discuss this explicitly in Section 2. The language in the executive summary is a simplification.
