Abstract
Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict—perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present options is mainly determined by short-term considerations. This paper aims to precisify and evaluate (a version of) this epistemic objection to longtermism. To that end, I develop two simple models for comparing “longtermist” and “short-termist” interventions, incorporating the idea that, as we look further into the future, the effects of any present intervention become progressively harder to predict. These models yield mixed conclusions: If we simply aim to maximize expected value, and don’t mind premising our choices on minuscule probabilities of astronomical payoffs, the case for longtermism looks robust. But on some prima facie plausible empirical worldviews, the expectational superiority of longtermist interventions depends heavily on these “Pascalian” probabilities. So the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism.
Introduction
If your aim is to do as much good as possible, where should you focus your time and resources? What problems should you try to solve, and what opportunities should you try to exploit? One partial answer to this question claims that you should focus mainly on improving the very long-run future. Following Greaves and MacAskill (2019) and Ord (2020), let’s call this view longtermism. The longtermist thesis represents a radical departure from conventional thinking about how to make the world a better place. But it is supported by prima facie compelling arguments, and has recently begun to receive serious attention from philosophers.[1]
The case for longtermism starts from the observation that the far future is very big. A bit more precisely, the far future of human-originating civilization holds vastly greater potential for value and disvalue than the near future. This is true for two reasons. The first is duration. On any natural way of drawing the boundary between the near and far futures (e.g., 1000 or 1 million years from the present), it is possible that our civilization will persist for a period orders of magnitude longer than the near future. For instance, even on the extremely conservative assumption that our civilization must die out when the increasing energy output of the Sun makes Earth too hot for complex life as we know it, we could still survive some 500 million years.[2] The second is spatial extent and resource utilization. If our descendants eventually undertake a program of interstellar settlement, even at a small fraction of the speed of light, they could in time settle a region of the Universe and utilize a pool of resources vastly greater than anything we can access today. Both these factors suggest that the far future has enormous potential for value or disvalue.
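To put rough numbers on the duration point alone (the figures below are just the conservative ones mentioned above, used for illustration, not outputs of any model in this paper):

```python
# Back-of-the-envelope scale comparison. The 1,000-year near/far boundary and
# the 500-million-year survival horizon are the illustrative, conservative
# figures from the text; nothing here depends on a more careful model.

near_future_years = 1_000          # one natural place to draw the near/far boundary
far_future_years = 500_000_000     # survival until the Sun makes Earth uninhabitable

ratio = far_future_years / near_future_years
print(f"The far future is ~{ratio:,.0f}x longer than the near future")
# ~500,000x: five to six orders of magnitude, before even counting the
# spatial-extent and resource factor.
```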
But longtermism faces a countervailing challenge: The far future, though very big, is also unpredictable. And just as the scale of the future increases the further ahead we look, so our ability to predict the future—and to predict the effects of our present choices—decreases. The case for longtermism depends not just on the intrinsic importance of the far future but also on our ability to influence it for the better. So we might ask (imprecisely for now): Does the importance of humanity’s future grow faster than our capacity for predictable influence shrinks?[3]
There is prima facie reason to be pessimistic about our ability to predict (and hence predictably influence) the far future. First, the existing empirical literature on political and economic forecasting finds that human predictors—even well-qualified experts—often perform very poorly, in some contexts doing little better than chance (Tetlock, 2005). Second, the limited empirical literature that directly compares the accuracy of social, economic, or technological forecasts on shorter and longer timescales consistently confirms the commonsense expectation that forecasting accuracy declines significantly as time horizons increase.[4] And if this is true on the modest timescales to which existing empirical research has access, we should suspect that it is all the more true on scales of centuries or millennia. Third, we know on theoretical grounds that complex systems can be extremely sensitive to initial conditions, such that very small changes produce very large differences in later conditions (Lorenz, 1963; Schuster and Just, 2006). If human societies exhibit this sort of “chaotic” behavior with respect to features that determine the long-term effects of our actions (to put it very roughly), then attempts to predictably influence the far future may be insuperably stymied by our inability to measure the present state of the world with arbitrary precision.[5] Fourth and finally, it is hard to find historical examples of anyone successfully predicting the future—let alone predicting the effects of their present choices—even on the scale of centuries, much less millennia or longer.[6]
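The third point can be made vivid with a standard toy example. The sketch below is my own illustration (not drawn from the sources cited, and not a model of any social system): it iterates the logistic map in its chaotic regime and shows two trajectories that begin about one part in a billion apart diverging to order-one differences within a few dozen steps.

```python
# Toy illustration of sensitive dependence on initial conditions, using the
# logistic map x_{t+1} = r * x_t * (1 - x_t) in its chaotic regime (r = 4).
# A textbook example only; social systems are not literally logistic maps.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)   # initial condition perturbed by ~1e-9

for t in (0, 10, 25, 50):
    print(f"step {t:2d}: |difference| = {abs(a[t] - b[t]):.9f}")
# The gap grows from 1e-9 at step 0 to order one within a few dozen steps:
# any imprecision in measuring the initial state eventually swamps the forecast.
```

The same qualitative behavior (exponential amplification of small initial errors) is what would undermine long-range prediction if social systems are relevantly chaotic.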
If our ability to predict the long-term effects of our present choices is poor enough, then even if the far future is overwhelmingly important, the main determinants of what we presently ought to do might nonetheless lie in the near future. The aim of this paper is to investigate this epistemic challenge to longtermism. Specifically, I will identify a version of the challenge that seems especially compelling, precisify it by means of two simple models, and assess its strength by examining the results of those models for various plausible combinations of parameter values.
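Before precisifying anything, it may help to see in miniature how poor predictability can swamp even enormous stakes. The following is a toy calculation only, far cruder than the models developed later in the paper; the exponential survival probability, the cubic value function, the particular decay rates, and the 100,000-year truncation are all assumptions made purely for illustration.

```python
# Toy expected-value split between the near and far future (illustration only;
# not one of the paper's models). An intervention's effect "survives" to year t
# with probability exp(-r * t); value_at(t) is the value at stake in year t.

import math

def ev_split(value_at, decay_rate, horizon, near_cutoff=1_000):
    near = sum(value_at(t) * math.exp(-decay_rate * t) for t in range(near_cutoff))
    far = sum(value_at(t) * math.exp(-decay_rate * t)
              for t in range(near_cutoff, horizon))
    return near, far

# Assume (again, purely for illustration) cubically growing value at stake and
# a horizon truncated to 100,000 years just to keep the loop small.
def cubic(t):
    return (1 + t) ** 3

for r in (1e-2, 1e-4, 1e-6):
    near, far = ev_split(cubic, r, horizon=100_000)
    print(f"decay rate {r:g}: near-term share of expected value = {near / (near + far):.2e}")
# With fast decay (r = 1e-2), nearly all expected value lies in the near future
# despite the cubic growth; with slow decay (r = 1e-6), the far future
# dominates. Everything turns on how quickly predictable influence decays.
```

The point of the toy example is only that the answer to the question posed above depends on the relative magnitudes of growth and decay, which is exactly what the models described below try to estimate.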
Since my goal is to assess whether the case for longtermism is robust to a particular kind of objection, I will make some assumptions meant to screen off other objections. In particular, I assume: (i) a total welfarist consequentialist normative framework (the prima facie most favorable setting for longtermism), setting aside axiological and ethical challenges to longtermism that are mostly orthogonal to the epistemic challenge;[7] (ii) a precise probabilist epistemic framework (i.e., that the rational response to uncertainty involves assigning precise probabilities to the possibilities over which one is uncertain), setting aside for instance the imprecise probabilist worries discussed in Mogensen (forthcoming); and (iii) the decision-theoretic framework of expected value maximization, setting aside worries arising from risk aversion or from “anti-fanaticism” considerations of the sort discussed in chapters 6–7 of Beckstead (2013a) (though we will take up the issue of fanaticism in §6.2).
On the other hand, when it comes to empirical questions (e.g., choosing values for model parameters), I will err toward assumptions unfavorable to longtermism, in order to test its robustness to the epistemic challenge.
The paper proceeds as follows: In §2, I attempt to state the longtermist thesis more precisely. In §3, I similarly attempt to precisify the epistemic challenge, and identify the version of that challenge on which I will focus. In §4, I describe the first model for comparing longtermist and short-termist interventions. The distinctive feature of this model is its assumption that humanity will eventually undertake an indefinite program of interstellar settlement, and hence that in the long run, growth in the potential value of human-originating civilization is a cubic function, reflecting our increasing access to resources as we settle more of the Universe. In §5, by contrast, I consider a simpler model which assumes that humanity remains Earth-bound and eventually reaches a “steady state” of zero growth. §6 considers the effect of higher-level uncertainties—both uncertainty about key parameter values and uncertainty between the two models. §7 takes stock, organizes the conclusions of the preceding sections, and surveys several other versions of the epistemic challenge that remain as questions for future research.
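One clarifying note on the cubic-growth assumption, since it may look arbitrary at first glance: it reflects a standard back-of-the-envelope argument (sketched here in my own words, with the usual idealizations). If settlement expands outward at a roughly constant speed v, then after time t the settled region is at most a sphere of radius vt, so the volume of accessible resources scales as

$$\frac{4}{3}\pi (vt)^3 \propto t^3,$$

ignoring complications such as the finite density of reachable stars and galaxies, cosmic expansion, and the time needed to exploit resources once they are reached.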
Read the rest of the paper
[1] Proponents of longtermism include Bostrom (2003, 2013) (who focuses on the long-term value of reducing existential risks to human civilization), Beckstead (2013a, 2019) (who gives a general defense of longtermism and explores a range of potential practical implications), Cowen (2018) (who focuses on the long-term value of economic growth), Greaves and MacAskill (2019) (who, like Beckstead, defend longtermism generally), and Ord (2020) (who, like Bostrom, focuses mainly on existential risks). ↩︎
[2] This is conservative as an answer to the question, “How long is it possible for human-originating civilization to survive?” It could of course be very optimistic as an answer to the question, “How long will human-originating civilization survive?” ↩︎
[3] Versions of this epistemic challenge have been noted in academic discussions of longtermism (e.g. by Greaves and MacAskill (2019)), and are frequently raised in conversation, but have not yet been extensively explored. For expressions of epistemically-motivated skepticism toward longtermism in non-academic sources, see for instance Matthews (2015) and Johnson (2019). Closely related concerns about the predictability of long-run effects are frequently raised in discussions of consequentialist ethics—see for instance the recent literature on “cluelessness” (e.g. Lenman (2000), Burch-Brown (2014), Greaves (2016)). Going back further, there is this passage from Moore’s Principia: “[I]t is quite certain that our causal knowledge is utterly insufficient to tell us what different effects will probably result from two different actions, except within a comparatively short space of time; we can certainly only pretend to calculate the effects of actions within what may be called an ‘immediate’ future. No one, when he proceeds upon what he considers a rational consideration of effects, would guide his choice by any forecast that went beyond a few centuries at most; and, in general, we consider that we have acted rationally, if we think we have secured a balance of good within a few years or months or days” (Moore, 1903, 93). This amounts to a concise statement of the epistemic challenge to longtermism, though of course that was not Moore’s purpose. ↩︎
[4] See for instance Makridakis and Hibon (1979) (in particular Table 10 and discussion on p. 115), Fye et al. (2013) (who even conclude that “there is statistical evidence that long-term forecasts have a worse success rate than a random guess” (p. 1227)), and Muehlhauser (2019) (in particular fn. 17, which reports unpublished data from Tetlock’s Good Judgment Project). Muehlhauser gives a useful survey of the extant empirical literature on “long-term” forecasting (drawing heavily on research by Mullins (2018)). For our purposes, though, the forecasts covered by this survey are better described as “medium-term”—the criterion of inclusion is a time horizon of at least 10 years. To my knowledge, there is nothing like a data set of truly long-term forecasts (e.g., with time horizons greater than a century) from which we could presently draw conclusions about forecasting accuracy on these timescales. And as Muehlhauser persuasively argues, the conclusions we can draw from the current literature even about medium-term forecasting accuracy are quite limited for various reasons—e.g., the forecasts are often imprecise, non-probabilistic, and hard to assess for difficulty. ↩︎
[5] For discussions of extreme sensitivity to initial conditions in social systems, see for instance Pierson (2000) and Martin et al. (2016). Tetlock also attributes the challenges of long-term forecasting to chaotic behavior in social systems when he writes: “[T]here is no evidence that geopolitical or economic forecasters can predict anything ten years out beyond the excruciatingly obvious— ‘there will be conflicts’—and the odd lucky hits that are inevitable whenever lots of forecasters make lots of forecasts. These limits on predictability are the predictable results of the butterfly dynamics of nonlinear systems. In my [Expert Political Judgment] research, the accuracy of expert predictions declined toward chance five years out” (Tetlock and Gardner, 2015). But Tetlock may be drawing too pessimistic a conclusion from his own data, which show that the accuracy of expert predictions declines toward chance while remaining significantly above it—for discussion, see §1.7 of Muehlhauser (2019). ↩︎
[6] There are some arguable counterexamples to this claim—e.g., the founders of family fortunes who may predict with significantly-better-than-chance accuracy the effects of their present investments on their heirs many generations in the future. (Thanks to Philip Trammell for this point.) But on the whole, the history of thinking about the distant future seems more notable for its failures than for its successes. ↩︎
[7] For discussion of these axiological and ethical challenges, see Beckstead (2013a) and Greaves and MacAskill (2019). ↩︎