Let strong longtermism be the thesis that in a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best. If this thesis is correct, it suggests that for decision purposes, we can often simply ignore shorter-run effects: the primary determinant of how good an option is (ex ante) is how good its effects on the very long run are.

This paper sets out an argument for strong longtermism. We argue that the case for this thesis is quite robust to plausible variations in various normative assumptions, including those relating to population ethics, interpersonal aggregation and decision theory. We also suggest that while strong longtermism as defined above is a purely axiological thesis, a corresponding deontic thesis plausibly follows, even by non-consequentialist lights.


A striking fact about the history of civilisation is just how early we are in it. There are 5000 years of recorded history behind us, but how many years are still to come? If we merely last as long as the typical mammalian species, we still have 200,000 years to go; there are a further one billion years until the Earth is sterilised by the Sun; and trillions of years until the last conventional stars form. Even on the most conservative of these timelines, the history recorded so far is a tiny fraction of civilisation's potential lifespan. If humanity's saga were a novel, we would still be on the very first page.

Normally, we pay scant attention to this fact. Political discussions centre on the here and now, focused on the latest scandal or the next election. When a pundit takes a ‘long-term’ view, they talk about the next five or ten years. We essentially never think about how our actions today might influence civilisation hundreds of thousands of years hence.

We believe that this neglect of the very long-term future is a grave moral error.[1] An alternative perspective is given by a burgeoning view called longtermism,[2] on which we should be particularly concerned with ensuring that the long-run future goes well. In this article we accept this view but go further, arguing that impacts on the long run are the most important feature of our actions. More precisely, we argue for two claims.

Axiological strong longtermism (AL): In a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.

Deontic strong longtermism (DL): In a wide class of decision situations, the option one ought, ex ante, to choose is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.

By “the option whose effects on the very long-run future are best”, we mean “the option whose effects on the future from time t onwards are best”, where t is a surprisingly long time from now (say, 100 or even 1000 years). The idea, then, is that for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.

Note that both AL and DL are phrased in ex ante terms. AL concerns ex ante axiology. If expected value theory is the correct account of how to order uncertain prospects in terms of their betterness then, given AL, the ex ante best action would be one whose possible effects on the very long-run future do most (or nearly the most) to increase expected value.

However, the longtermist claim does not essentially presuppose expected value theory; we briefly consider some alternatives in section 4. Similarly, for DL, the ‘ought’ in question is the ‘subjective’ ought: the one that is most relevant for action-guidance, and is relative, in some sense, to the beliefs that the decision-maker ought to have.[3]

Which decision situations fall within the scope of our claims? In the first instance, we argue that the following is one such case:[4]

The cause-neutral philanthropist: Shivani has $10,000. Her aim is to spend this money in whatever way would most improve the world, and she is open to considering any project as a means to doing this.

The bulk of the paper is devoted to defending the claim that this situation is within the scope of axiological strong longtermism; in the final two sections, we generalise this to a wider range of decision situations.

The structure of the paper is as follows. In section 2 we outline a plausibility argument for axiological strong longtermism. In our view, the most important respect in which the plausibility argument falls short of a proof is that it does not show that, as a matter of empirical fact, attempting to influence the course of the very long-run future is at all tractable. Section 3 is devoted to defending the crucial tractability claim.

In sections 2 and 3, we will at times help ourselves to some popular but controversial axiological and decision-theoretic assumptions (specifically, total utilitarianism and expected utility theory). This, however, is mainly for elegance of exposition. Section 4 conducts the corresponding sensitivity analyses, and argues that plausible ways of deviating from these assumptions are unlikely to undermine the argument. Section 5 argues that, while for concreteness we have focussed on the case of the cause-neutral philanthropist, if axiological strong longtermism is true of that decision context then it is also likely to be true of a fairly wide variety of other decision contexts (where cause-neutrality is absent, and/or where the decision is not one of how to spend money).

Thus far, our discussion will have been exclusively focussed on axiological strong longtermism. Section 6 turns to the question of deontic strong longtermism. There, we argue that according to any plausible non-consequentialist moral theory, our discussion of axiological strong longtermism also suffices to establish deontic strong longtermism. Section 7 summarises.

The argument in this paper has some precedent in the literature. Nick Bostrom (2003) has argued, on the basis of the vast number of people who would live in the future if civilisation settles the stars, that increasing the probability that such settlement occurs should be the top priority for total utilitarians. Nick Beckstead (2013) argues from a somewhat broader set of assumptions to a similar conclusion.[5]

Our aim in this paper is to expand on this prior work in four ways. First, whereas earlier work has focussed primarily on the examples of extinction risk mitigation and (sometimes) promotion of space settlement, we discuss a range of other “longtermist” interventions, and we argue that strong longtermism is true even if one sets aside the possibility of those (population-increasing) interventions. Second, we show that the argument goes through on a wide range of axiologies and decision theories, not only on the combination of total utilitarianism and expected utility theory. Third, we argue that insofar as strong longtermism is true of a decision context that involves allocating resources across cause areas, it is likely also to be true of various other decision contexts, including ones that do not involve cross-cause comparisons and ones that do not involve allocating money. Fourth, in addition to axiological strong longtermism, we also discuss the deontic claim: we argue that deontic strong longtermism is true, given any of a wide variety of plausible non-consequentialist theories.

We believe that axiological and deontic strong longtermism are of the utmost importance. If society came to adopt these views, much of what we would prioritise in the world today would change.


  1. It is useful to consider a prudential analogue: if we were to live until the Earth were no longer habitable, how much attention would it be prudentially rational for us to pay to ensuring that the very long-run future goes well? Presumably, far more than we do now. ↩︎

  2. See MacAskill (2019) for a discussion of this idea. ↩︎

  3. It is widely agreed that either it is useful to distinguish between objective and subjective senses of ‘ought’ (Ewing 1948, pp. 118-22; Brandt 1959, pp. 360-7; Russell 1966; Parfit 1984, p. 25; Portmore 2011; Dorsey 2012; Olsen 2017; Gibbard 2005; Parfit 2011), or ‘ought’ is univocal and subjective (Prichard 1932; Ross 1939, p. 139; Howard-Snyder 2005; Zimmerman 2006; Zimmerman 2008; Mason 2013). Our discussion presupposes that one of these disjuncts is correct. A minority of authors holds that ‘ought’ is univocal and objective (Moore 1912, pp. 88-9; Moore 1903, pp. 199-200, 229-30; Ross 1930, p. 32; Thomson 1986, pp. 177-9; Graham 2010; Bykvist 2011); according to this latter view, there is no coherent question of deontic strong longtermism in the vicinity of the thesis we attempt to discuss. Similarly (but less discussed), one might be skeptical of the notion of ex ante axiology; again, our discussion presupposes that any such skepticism is misguided. ↩︎

  4. Note that Shivani need not be a private philanthropist. She could equally be in charge of some governmental or intergovernmental pot of resources, provided that the remit of that pot is cause-neutral, i.e. the remit is simply to maximise the good, rather than (say) to optimise the health or transport system. Given our stipulation about the content of Shivani’s aim, it is almost trivially the case that if axiological strong longtermism is true of Shivani’s decision situation, then so also is deontic strong longtermism. We discuss cases in which the connection between axiological and deontic strong longtermism is less direct in section 5. ↩︎

  5. Beckstead’s “Main Thesis” is: “From a global perspective, what matters most (in expectation) is that we do what is best (in expectation) for the general trajectory along which our descendants develop over the coming millions, billions, and trillions of years” (ibid., p.1). ↩︎
