
Abstract

Consider longtermism: the view that, at least in some of the most important decisions facing agents today, which options are morally best is determined by which are best for the long-term future. Various critics have argued that longtermism is false—indeed, that it is obviously false, and that we can reject it on normative grounds without close consideration of certain descriptive facts. In effect, it is argued, longtermism would be false even if real-world agents had promising means of benefiting vast numbers of future people. In this paper, I develop a series of troubling impossibility results for those who wish to reject longtermism so robustly. It turns out that, to do so, we must incur severe theoretical costs. I suspect that these costs are greater than those of simply accepting longtermism. If so, the more promising route to denying longtermism would be by appeal to descriptive facts.

Introduction

Our future, in its entirety, may be far bigger than our present. As of the beginning of 2023, approximately 8 billion humans are going about their lives. Over the centuries and millennia ahead of us, by contrast, vastly more human lives may be lived.[1] As long as we do not wipe ourselves out too soon—and even relative pessimists suggest that it is more likely than not that we survive (see, e.g., Ord, 2020, p. 167)—our descendants may well outnumber us by more than 100,000 to 1.[2]
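To get a feel for how conservative that ratio is, here is a rough back-of-envelope calculation; the per-century birth figure is my own illustrative assumption, not a figure from the paper. Matching 100,000 descendants for each of today's roughly $8 \times 10^9$ people would require

\[
10^{5} \times \left(8 \times 10^{9}\right) = 8 \times 10^{14} \ \text{future lives},
\]

and at something like the current global birth rate, on the order of $10^{10}$ births per century, that total would be reached after only

\[
\frac{8 \times 10^{14}}{10^{10} \ \text{per century}} = 8 \times 10^{4} \ \text{centuries} \approx 8 \ \text{million years},
\]

which is well under 1% of the billion years of habitability cited in footnote 2.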

This suggests that the moral stakes of decisions concerning the future are high. This need not be because it is important to ensure that a long future for humanity comes about in the first place, nor need it be morally important to make more people exist. No, the mere fact that there are likely to be vast numbers of future people, regardless of what we do, is enough to raise the stakes astronomically. After all, no life counts for more or less than another merely due to superficial characteristics like the person’s race, sex, nationality, or the circumstances of their birth. Whether someone is born in the year 1960 or 2960 does not, by itself, change the moral value of their well-being, nor does it change the importance of aiding them if we can do so with equal ease and predictability.[3] So, if many more people are born in the period 2100-100,000 CE than in the period 2000-2100 CE, the stakes are potentially far higher when influencing the well-being of everyone in the first group than when influencing only the second.

This is one (partial) motivation for a view known as longtermism: roughly that, at least in some of the most important decisions facing agents today, which options are morally best is determined by which are best for the long-term future (from Greaves and MacAskill, 2021, p. 3). At least as I am interested in it, this is an axiological claim: one about which outcomes, or risky prospects, are better than others. It is not a deontic claim: it doesn’t state that we ought to do what is best for the long-term future, and certainly not that we ought to do so in every decision we face, nor to do so by whatever means are necessary.[4] And, crucially, it may well be a contingent claim: since it refers to the decisions we actually face and the options we actually have, it may well depend on what the world is actually like.

Is longtermism true? Various philosophers and other commentators have argued that it isn’t. Not only this; they have argued that its falsity does not depend on certain descriptive facts. They have argued that we need not look too closely at, say, descriptive features of the decisions that agents actually face, but can instead rule it out on (primarily) normative grounds. After all, many such critics suggest, longtermism is clearly false on any plausible moral view. Only if we endorse (the conjunction of) supposedly implausible views like consequentialism, utilitarianism, a total view of moral betterness, and expected value theory would longtermism be plausible. Simply deny any of these and longtermism is clearly false. Or so it is suggested (e.g., by Cremer and Kemp, 2021; Setiya, 2022; Stock, 2022; Lenman, 2022; Henning, 2022; Wolfendale, 2022; Walen, 2022; Crary, 2023; Adams et al., 2023; Plant, 2023).[5] But such suggestions understate the challenge faced by those who wish to deny longtermism. After all, it has already been demonstrated elsewhere that a variety of plausible normative views can just as easily lead us there.[6] So, to avoid longtermism, we must deny much more than total utilitarianism.

But just how much must we deny to avoid longtermism? That is the question I seek to answer in this paper. In particular, I am interested in how much we must deny to avoid longtermism in the manner desired by those critics—to reject it with confidence, without any need to examine certain descriptive facts such as just how many future people might exist, or just how much we could benefit them, or just how high the probabilities of doing so are. To put it differently, I am interested in what it takes to avoid longtermism robustly, even if the descriptive facts are somewhat different—if the practical circumstances of the decisions we face are allowed to vary somewhat. In effect, my goal here is to determine just how strong the case is for rejecting longtermism out of hand, without leaving the philosophical armchair to examine such descriptive facts.

Of course, if we let the descriptive facts vary enough, longtermism would be nigh inescapable. (Try denying longtermism if, for some bizarre reason, we simply had no practical means of benefiting anyone except those in the long-term future!) To robustly avoid longtermism in any interesting sense, we need not avoid it in circumstances quite so extreme. So, I will place some restrictions on the range of circumstances considered below. First, I will assume that in any given decision we have some option by which we can benefit present (or near-future) people with high probability. Indeed, for simplicity, I will assume that we always have an option to benefit all present people, by just as much as we can otherwise benefit future people, and with probability 1. Although far from accurate, this assumption is a generous one—being able to benefit more present people, and with higher probability, can only make longtermism easier to avoid.

Second, I will assume that any option by which we might improve the lives that are lived in the long-term future has only a low probability (of at most, say, 1 in 1,000) of doing so.[7] As before, this assumption is a generous one. And it seems accurate of most agents’ real-world attempts to improve the long-term future: consider, for instance, an individual who donates a modest sum to an organisation that advocates against the proliferation of nuclear weapons, due to the deleterious long-term effects of a potential nuclear war. Their donation is unlikely to prevent such long-term effects, both because a modest donation rarely makes much difference to the activities of advocacy organisations, and because nuclear wars may be unlikely to occur in the first place. It seems at least plausible that the probability of their individual donation preventing a nuclear war is 1 in 1,000 or less.
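To see concretely how this assumption interacts with the first, here is a minimal sketch of the underlying arithmetic; the variables are placeholders of my own, and, as noted above, longtermism need not presuppose expected value theory. Suppose the sure option benefits each of $n$ present people with probability 1, while the long-term option benefits each of $N$ future people, by the same amount, with probability $p \le 10^{-3}$. The expected numbers of beneficiaries are then

\[
\mathbb{E}[\text{sure option}] = n, \qquad \mathbb{E}[\text{long-term option}] = p\,N,
\]

so even at $p = 10^{-3}$ the long-term option has the greater expected impact whenever $N > 1{,}000\,n$; at the 100,000-to-1 ratio mentioned above, $p\,N$ would come to roughly $100\,n$.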

Third, I assume that, in outcomes where we do improve the long-term future, the identities of all of our future beneficiaries are altered. After all, in practice, any action with so great an impact on human history that it changes the well-being of vastly many future people must also change many of the circumstances of many people’s lives, both future and present. These include the circumstances under which humans conceive children; change these even slightly and different combinations of sperm and egg will meet, and so different children will be born. So, in practice, any actions with long-lasting, widespread effects are guaranteed to be identity-affecting in this sense, resulting in an (almost) entirely different population of people alive in several centuries (and beyond).[8] If longtermism is to hold in practice, it must hold even with this phenomenon present. And, to keep things simple, we can assume (again, generously) that no future people obtain higher well-being without also having their identities changed.[9]

Read the rest of the paper

  1. ^

    An analogous claim holds for the even more numerous non-human animals with whom we share a planet—very many of them exist right now, but many times more will exist over the course of the future. Where I talk of ‘people’ throughout this paper, this may be interpreted either as including only humans or as including those within a much larger class of moral patients.

  2. ^

Wolf and Toon (2015) give us one billion years until the Earth becomes uninhabitable—3,000 times longer than Homo sapiens has existed to date. For arguments that this does not tell us that the expected number of future people is very large, see Thorstad (2023a,b). For what I take to be a persuasive counter-argument, see Chappell (2023).

  3. ^

Such moral impartiality with respect to position in time is defended by Sidgwick (1907, p. 414), Ramsey (1928, p. 541), Parfit (1984, §121 & Appendix F), and Cowen and Parfit (1992), among others.

  4. ^

Indeed, even in the deontic version of longtermism offered by Greaves and MacAskill (2021, pp. 26-9), we are only required to do what is best for the long-term future in “...the most important decisions facing agents today, [in which] the axiological stakes are very high, there are no serious side-constraints, and the personal prerogatives are comparatively minor.” In such decisions, which might even be limited to just those in which we decide how to allocate our charitable donations and other altruistic efforts, there is no clash with common non-consequentialist deontic principles.

  5. ^

Representative quotes include: “Longtermists ... argue that it’s always better, other things equal, if another person exists, provided their life is good enough. That’s why human extinction looms so large.” (Setiya, 2022); “Longtermists ... [maintain] that any additional person who lives makes the world better, as long as the person enjoys adequate wellbeing.” (Crary, 2023); “...longtermists adopt substantial utilitarian commitments in arguing for maximizing the well-being of all.” (Adams et al., 2023); and “The non-utilitarian case for strong longtermism is, for now, weak.” (Cremer and Kemp, 2021).

  6. ^

For instance: Tarsney and Thomas (2020) make the case for longtermism based on averageism, on egalitarianism, and on other non-totalist views of moral betterness; Thomas (2022a) shows how various theories of betterness upholding the well-known procreation asymmetry lead to longtermism; Buchak (2022) and Pettigrew (2022) each show how risk-weighted expected utility theory can lead to longtermist conclusions; and Greaves and MacAskill (2021, §6) note the variety of normative theories that lead us to longtermism.

  7. ^

Perhaps this does not hold of all options by which we might improve the long-term future. One plausible exception is to engage in patient philanthropy: setting aside resources (most easily, money) to be used by future decision-makers when needed most, allowing them to grow in value in the meantime. For detailed discussion, see Trammell (2021).

  8. ^

This observation was made by Parfit (1984, §§119 & 123), and has featured heavily in recent discussions of cluelessness (Greaves, 2016; MacAskill and Mogensen, 2021). A similar claim was made much earlier, although with a theological rather than biological justification, by Leibniz (2005, pp. 101-7).

  9. ^

    A further realistic assumption that I won’t make just yet is that changing the well-being of many future people will inevitably also change the number of future people who exist (by much the same mechanism as it changes the identities of those future people). I’ll consider the case for In-Principle Longtermism under this assumption in Section 5.
