
Work for the elimination of concrete evils rather than for the realization of abstract goods... Do not allow your dreams of a beautiful world to lure you away from the claims of men who suffer here and now... no generation must be sacrificed for the sake of future generations, for the sake of an ideal of happiness that may never be realized. - Karl Popper, "Utopia and Violence"

[It] seemed to him now that for forty years he had been running amuck, the running-amuck of pure reason... What had he once written in his diary? "We have thrown overboard all conventions, our sole guiding principle is that of consequent logic; we are sailing without ethical ballast." - Arthur Koestler, Darkness at Noon

"Instead of giving civilization more time, give time the gift of civilization" - epitaph on the "Great Ravine" memorial in The Dark Forrest  

 

Introduction

This is a critique of strong longtermism. I call it “A Case Against Strong Longtermism” because it is not “the” case against strong longtermism. In many ways, it is a quite conservative critique. Unlike some other critics or possible critics, I think addressing existential risk is important, I believe that future people matter, my ethics are broadly consequentialist, and I don’t challenge the fundamental legitimacy of Bayesian epistemology. This is intended as an “internal” critique of strong longtermism from a set of premises that are widely accepted within the Effective Altruist community. It can be thought of as an attempt to “steelman” a critique of strong longtermism for an EA audience. 

I make no claims to true originality and this critique is informed by other critiques, though I believe it is a helpful (if perhaps too long) synthesis with some original framing, emphasis, and, occasionally, arguments. Influences include but are not limited to Holden Karnofsky (perhaps first and foremost), Luke Kemp and Carla Zoe Cremer, Ben Chugg and Vaden Masrani, and Emile P Torres. Their arguments are not quite mine and I don't agree with them on all points, but there are family resemblances. No doubt there are other people making similar arguments that I haven’t had time to read, but I think it is still worthwhile to add my voice to the chorus.

Strong longtermism rests on one strong argument and this critique is a series of many weak arguments. No one claim I make is fatal for strong longtermism and much of what I say is speculative or suggestive rather than certain. No doubt each of the points I raise could benefit from much more extensive research than I have been able to do. That said, collectively, I believe they add up.  

I’m worried that, because my argument is multifaceted, it is hard to summarize easily. Worse, it is hard to turn into a meme. Strong longtermism is very easy to turn into a meme: “Future people matter, there could be a lot of them, we can make their lives go better.” That said, here’s a not-so-meme-able summary of my case:

Strong longtermism implies we should pursue even highly speculative interventions that have tiny probabilities of improving the far future over the most effective near termist interventions. This is a mistake. (1) Interventions backed by robust considerations with clear feedback mechanisms for determining success or failure are more likely to be impactful than speculative ones based on simple expected value calculations with little empirical grounding. (2) The expected value of the future is deeply uncertain and it is unclear how our visions of it should affect our actions. (3) Often the best way to improve the far future will actually be indirectly through pursuing near or medium term interventions. (4) Advocating for and basing deliberations on strong longtermism carry ethical costs that need to be weighed against any benefits doing so might bring. 

We should take advantage of robust, well supported interventions for reducing existential risk but avoid highly speculative ones; cultivate worldview diversification, continue to allocate resources to near termist causes, and perhaps explore more “medium termist” ones; and build an overlapping consensus in favor of combatting existential risk that draws on a multiplicity of different values and doesn’t rest its case on speculation about the size of the long term future.

The details of my case are below.

 

What is Strong Longtermism and Why Does it Matter?

Strong longtermism is a claim about what we should value and about what we should do. The clearest case for strong longtermism I’m aware of comes from “The Case for Strong Longtermism” by Hilary Greaves and William MacAskill, though earlier statements can be found in Nick Bostrom’s work. It maintains that the best option is whatever makes the long term future (e.g., greater than 100 years out) go well and that expected long term consequences swamp any short term considerations. The argument for the claim rests on a simple expected value calculation. Proponents outline various scenarios for what the human future could look like, point out that the number of humans who could live good lives in those futures is very large (Bostrom suggests there could be over 10^54 human lives in the future, and MacAskill and Greaves give a main expectation of 10^24, which they believe is conservative), and then note that even interventions that very slightly increase the probability of attaining those futures, or of those futures being good, have overwhelmingly high expected value, especially compared to ways we could try to improve the present. Given how much value the future might have, some longtermists suggest we have a strong moral obligation to do what is best for the long term future, setting aside more near term considerations. The best ways to impact the long term future are to reduce extinction risk, avert the possibility of bad value lock-in (e.g., due to misaligned AI or stable totalitarianism), or go meta by saving money that can be used to benefit the future at a later date or by working on improving institutions that can reduce existential risk (e.g., by fostering international cooperation).

Some, like Scott Alexander and Matthew Yglesias, argue that strong longtermism has no real practical applications. The case for averting existential risk is strong whatever you think about the value of the long term future, because we face a lot of existential risk in the near term and most people have ample reason to prevent the realization of risks that would affect them, their children, or their grandchildren.

Eli Lifland argues that if we ignore the impact on future generations, reducing existential risk isn’t much more effective than the most effective near termist interventions and so we might need strong longtermism to justify focusing on reducing existential risks. I think this misses the point, though. The thing that is interesting and different about strong longtermism is the “cost effectiveness bar” it sets for the value of interventions meant to improve the long term future. If strong longtermism is right, we should be willing to pay a lot for even very small expected reductions in existential risk. For example, Greaves and MacAskill estimate that donating to GiveWell’s top charities can save 0.025 lives per $100 spent. If we assume that there are 10^24 expected future lives, reducing existential risk by 2.5*10^-26 per $100 would be as cost effective as donating to GiveWell’s top charities. Put differently, if donations to GiveWell’s top charities were our bar, it would be cost effective to donate the entire GDP of the United States of America to reduce existential risk by 0.0000000000005%. 
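
To make the arithmetic behind this bar explicit, here is a minimal sketch in Python (the 0.025 lives per $100 and 10^24 figures come from the discussion above; the roughly $20 trillion US GDP figure is my own assumption, chosen only to illustrate the quoted percentage):

```python
# Back-of-the-envelope "cost effectiveness bar" implied by strong longtermism.
# Figures from the text: 0.025 lives saved per $100 (GiveWell top charities)
# and 10^24 expected future lives. The ~$20 trillion US GDP figure is an
# assumption of mine, used only for illustration.

LIVES_PER_100_DOLLARS = 0.025
EXPECTED_FUTURE_LIVES = 1e24
US_GDP_DOLLARS = 20e12

# Risk reduction per $100 that matches the GiveWell benchmark in expected lives saved:
break_even_reduction_per_100 = LIVES_PER_100_DOLLARS / EXPECTED_FUTURE_LIVES
print(break_even_reduction_per_100)        # 2.5e-26

# Risk reduction "purchasable" with the entire US GDP at that break-even rate:
reduction_for_us_gdp = break_even_reduction_per_100 * (US_GDP_DOLLARS / 100)
print(f"{reduction_for_us_gdp:.1e}")       # 5.0e-15, i.e. about 0.0000000000005%
```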

If we adopt strong longtermism as a guiding ideology, we essentially lower the bar for cost effectiveness such that even the most speculative, unlikely interventions aimed at reducing existential risk by some meager amount seem more cost effective than donating to GiveWell’s top charities. I believe this is a mistake, and I argue in depth below that if Effective Altruism embraces this implication of strong longtermism, it will fail to live up to its name.

(1) The Epistemic Critique of Strong Longtermism

While strong longtermism’s proponents acknowledge that there is a lot of uncertainty both about the goodness and size of the long term future and about how effective attempts to actually influence that future might be, they presume that (a) the expected value of trying to improve the long term future is quite large and that (b) maximizing expected value is the best way to make decisions under conditions of uncertainty. In this section I argue that (b) is not necessarily always the case. 

Inside vs. outside views of expected value calculations: There are certain conditions under which maximizing expected value could lead to worse outcomes in expectation than other approaches. One way to conceptualize this is to apply the distinction between inside and outside views to decision making itself. On the inside view, you simply ask what the expected value of a given intervention or policy is; on the outside view, you ask whether making decisions on the basis of this sort of expected value calculation tends to have high expected value. Even if a decision looks attractive on the inside view, you might be much more skeptical of its value once you consider the outside view. For example, it is plausible that decisions based on low-probability chances of achieving very good outcomes, or on interventions without any clear feedback mechanisms, tend not to lead to particularly good results. If you knew someone was deciding on that basis, your expected value for their decision would be very low or even negative. This is, in essence, the claim of the epistemic critique of strong longtermism. Notice that this way of framing the epistemic concern does not involve rejecting the Bayesian mindset or the usefulness of expected value theory. Instead, it involves recognizing that to maximize expected value, we might in some cases want to avoid relying on expected value calculations.

Two examples help illustrate how it can be better (in expectation) not to rely on expected value calculations. The first comes from common sense investment advice and the second from Nassim Taleb (so, sort of less common sense investing advice). First, while I have all sorts of opinions about the expected return of different assets, it would likely be a mistake for me to try to actively time markets and take advantage of mispricings. For example, I might think (I don’t necessarily, and this should not be misconstrued as financial advice) that over the next twelve months US bonds will sell off, or I might believe Indian equities will outperform all other countries’ stock markets. If I were trying to maximize expected wealth, I would try to short US bonds or concentrate my portfolio in Indian equities. But trying to time the market is incredibly difficult, and I would likely do much better in the long run just passively holding a diversified bundle of assets than trying to actively time markets based on my views on mispricings and opportunities. Similarly, the essayist, risk analyst, and aphorist (to borrow some of Wikipedia’s choice epithets) Nassim Taleb loves to rant about financial professionals who come up with finely tuned risk models and try to maximize expected return based on their parameters. He argues these models can fail spectacularly at the worst possible times due to Black Swan events and claims that risk managers would do better to identify robust or, better yet, “antifragile” strategies that are structured to benefit from risk.

A committed Bayesian could contend that the issue isn’t really with expected utility theory, but with a naive application of expected utility theory. Both in the case of trying to time markets and in the case of over-optimized risk models, one’s priors should account for the difficulty of beating the market or the likelihood of black swan events. For example, if I started with a very strong prior that I probably can’t beat the market, then I’d need to be really confident in my view about Indian stocks before I reallocated much money to them. Once you have a strong prior that you’re a bad stock picker, just passively holding a diversified bundle of assets actually does maximize expected value. I think this is basically correct, though I might quibble that it’s often quite hard to know exactly how strong your prior should be about the possibility of black swan events or about how hard it is to time markets, and that following a rough heuristic might actually be easier than incorporating these priors into your expected value calculation.

I think there’s a good case to be made that just as trying to beat the market is often a bad strategy for investors trying to maximize their wealth, making decisions based primarily on strong longtermism is likely a poor strategy for effective altruists trying to maximize their impact. As I alluded to above, there are two features of the strong longtermist argument that make it a questionable basis for decision making: (1) it depends on expected value calculations about low probability / high value outcomes and (2) the case for strong longtermism is “unfalsifiable” and there is no good feedback mechanism to judge the value of strong longtermist interventions. 

Sequence vs. cluster thinking: The basic case for (1) is discussed extensively in Holden Karnofsky’s classic essay “Sequence Thinking vs. Cluster Thinking.” The case for strong longtermism is a paradigmatic example of sequence thinking, as it “involves making a decision based on a single model of the world: breaking down the decision into a set of key questions, taking one’s best guess on each question, and accepting the conclusion that is implied by the set of best guesses.” For example, in Greaves and MacAskill’s paper, they evaluate a series of hypothetical longtermist interventions by coming up with an expected number of future people (10^24, 10^18, or 10^14), an assumption about the likelihood of extinction over the next century due to a particular risk, an assumption about how much a given intervention could reduce that risk, and an assumption about how much the intervention would cost. Then, they plug each assumption into an equation, simply multiply and divide where appropriate, and spit out an expected number of lives saved per dollar spent for the intervention. They then compare it to the expected effectiveness of the best near-termist interventions (proxied by GiveWell’s estimates for how much it costs to save a life by distributing insecticide treated bednets intended to prevent malaria) to justify their claim that longtermist interventions are much more effective. 
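
To make the structure of this kind of calculation concrete, here is a minimal sketch of the multiply-and-divide model just described (the parameter values are placeholders of my own, not Greaves and MacAskill’s actual estimates):

```python
# Skeleton of a sequence-thinking expected value calculation for a hypothetical
# longtermist intervention. All parameter values are illustrative placeholders,
# not figures from Greaves and MacAskill's paper.

def expected_lives_saved_per_dollar(expected_future_lives, risk_this_century,
                                    fraction_of_risk_removed, cost_dollars):
    """Take a best guess for each input, multiply, and divide by cost."""
    return (expected_future_lives * risk_this_century
            * fraction_of_risk_removed) / cost_dollars

# Hypothetical longtermist intervention (placeholder numbers):
longtermist = expected_lives_saved_per_dollar(
    expected_future_lives=1e24,     # assumed size of the future
    risk_this_century=0.01,         # assumed extinction risk from this source
    fraction_of_risk_removed=1e-6,  # assumed effect of the intervention
    cost_dollars=1e9,               # assumed cost of the program
)

# Near-termist benchmark: roughly 0.025 lives saved per $100 (GiveWell-style figure).
neartermist = 0.025 / 100

# The single 10^24 input dominates the comparison, however shaky the other guesses.
print(longtermist, neartermist)
```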

Karnofsky contrasts sequence thinking with an alternative approach he calls “cluster thinking,” which “involves approaching a decision from multiple perspectives… observing which decision would be implied by each perspective, and weighing the perspectives in order to arrive at a final decision… the different perspectives are combined by weighing their conclusions against each other, rather than by constructing a single unified model that tries to account for all available information.” The conclusions of models based on sequence thinking are often dominated by a single parameter or consideration, even if the value of that parameter is incredibly uncertain and only represents a “best guess.” This is true in the argument for longtermism, where the number of expected future people can be used to justify an intervention with only a small probability of reducing risk by some tiny amount, and I think MacAskill and Greaves would be the first to admit that number represents only a best guess (though a relatively conservative one, in their view; more on this below). Conversely, cluster thinking doesn’t allow any single argument to dominate its conclusions. It can also count the uncertainty of an argument as a reason to discount it and put more weight on other, more robust arguments.

For example, let’s compare two interventions (this example is slightly adapted from one Karnofsky gives in his post). One is a grant to a consortium of legal scholars working on a model international treaty that could reduce existential risks emerging from international conflict over space settlement (for the case for working on this sort of issue from a longtermist perspective see here). The other is a grant to a seasonal malaria chemoprevention intervention in Nigeria. A cluster thinking approach would consider a variety of factors, like the evidence behind the effectiveness of each intervention, the track record and capacities of the organization implementing the intervention, the expected cost per life saved, how certain, robust, or well-founded that estimate is, whether other funding sources are available (i.e., how neglected the interventions are), how influential the legal scholars are likely to be, and a number of other factors. Such an approach might decide against the grant to the legal scholars because there’s only highly speculative evidence that drafting a model treaty will have much of an impact, the legal scholars seem unlikely to have much influence with key governments, and the legal scholars have sparse or no track records of implementing model treaties. By contrast, a sequence thinking approach would assign some probability or value to each of the above considerations and simply adjust the expected value of the interventions by the aggregate probability of success. It might recommend giving the money to the legal scholars simply because, even when we adjust the expected value of the intervention to account for how unlikely it is to actually have an impact, the number of future people who could live if humanity isn’t wiped out due to international conflict over space governance is so high that it more than compensates for the uncertainty of the intervention.

Karnofsky points out that cluster thinking has several advantages over sequence thinking such that the expected value of pursuing a cluster thinking approach to evaluating interventions (and perhaps cause prioritization more generally) is higher than applying a sequence thinking approach. Below, I outline the key points he raises that I believe are relevant to strong longtermism: 

(1) Sequence thinking is likely to reach significantly wrong conclusions based on a single missing or poorly estimated consideration. For instance, in the example given above, if a few of the probabilities I assigned to key considerations turn out to be off (how likely a model treaty is to influence policy, how likely this particular treaty is to actually reduce international conflict if implemented, the chance of negative consequences from the treaty’s implementation, and so on), the conclusions I draw about the expected effectiveness of the program might change as well. More obviously, if I’m wrong about the expected size or goodness of the human future, my entire calculation will be badly off.

(2) Adjusting for missing parameters or overestimated probabilities will likely result in regression to the norm. If interventions look too good to be true because they are several orders of magnitude more effective than others, it’s likely because they are in fact too good to be true. I think this is connected to the optimizer’s curse.

(3) Regression to normality prevents some obvious mistakes people make when their judgment is impaired.  

(4) Similarly, beating “the market,” by which Karnofsky means the aggregate of people and institutions trying to have a positive impact on the world, is quite hard, making it likely that if you see an exceptional opportunity, you’re missing something. There are a lot of academics, politicians, NGOs, think tanks, foundations, etc. who are well funded and very knowledgeable and would likely seize on opportunities to reduce existential risks. If they aren’t doing so, such that important and tractable risks appear neglected, you should doubt your analysis. There are some areas where this is plausibly true of longtermist interventions, for example, risks relating to nuclear weapons or advocacy for policies to address climate change, where mainstream academics, politicians, journalists, and NGOs are quite aware of the risks and care deeply about solving them. Arguably, though, in cases where strong longtermism would justify an intervention that results in a very small probability of reducing existential risk by some small amount because the expected value of the future is quite high, it's entirely possible most people who do not share strong longtermist values wouldn’t focus on this type of intervention. As a result, it's more likely there’s more “market inefficiency” around highly speculative interventions or those only likely to help in the far future. That said, if trying to beat the market caused longtermists to focus only on the most speculative interventions justified by longtermist reasoning, that would likely compound the other issues discussed in this section that plague interventions justified by sequence thinking. 

(5) Empirically proven methods of forecasting rely on cluster thinking (or something very like it). Karnofsky cites the work of Philip Tetlock as proof. Again, this suggests that, taking an outside view of our decision making processes, we should have a stronger prior that interventions based on cluster thinking will be more effective than those based on sequence thinking. 

(6) Cluster thinking may be much better than sequence thinking at handling the impact of “unknown unknowns,” making it particularly valuable when uncertainty is high. This is because conclusions arrived at through cluster thinking tend to be more “robust,” by which Karnofsky means that they are comparatively unlikely to change even if we gained more information, perspectives, or intelligence. They don’t depend on any one assumption or argument, but on several weakly correlated ones (see here for a more general discussion/examples). When it comes to trying to positively influence the long term future, “unknown unknowns” seem likely to dominate the value of any actions we take. Many aspects of longtermist interventions are highly speculative, most obviously (a) the size and goodness of the human future, but also, to give just a few more possible examples, (b) the likelihood of extinction from different risks, (c) how effective any intervention might be at reducing risk, and (d) the risk that an intervention might actually exacerbate risk. For example, most of the existential risk Toby Ord attaches to climate change in The Precipice comes from model uncertainty: our best guess is climate change won’t cause human extinction, but our models could be wrong. To take another example, would developing asteroid deflection technology reduce existential risk from asteroids, or would it increase the risk that people use the technology to deflect asteroids toward their enemies, increasing existential risk? Should NATO avoid supporting Ukraine because that might increase the risk of nuclear war with Russia, or would backing down from supporting Ukraine just because Putin rattled his nuclear saber increase the incentive for dictators to do so in the future, making a nuclear war even more likely in the long run?

(7) Sequence thinking might “over-encourage” exploiting instead of exploring, causing us to miss out on important learning opportunities. Karnofsky gives several reasons for this, including the fact that sequence thinking makes options seem to differ more in value and that cluster thinking penalizes views more for uncertainty. Furthermore, in sequence thinking a few key assumptions dominate the model, so most of the value of exploring would come from learning more about those key assumptions, but learning more about them might be particularly intractable. This seems glaringly true of longtermism, where most of the value comes from the expected size and goodness of the human future, an assumption where very little progress in resolving uncertainties seems likely (at least in the near future).

There are also several reasons for worrying that strong longtermist interventions could be particularly “fragile” (by which I mean the opposite of “robust”). For one thing, it might be particularly hard to be well calibrated when reasoning about existential risks, such that one’s subjective probabilities are relatively poor guides to how likely certain risks are or how effective certain interventions actually would be. I think a large part of this has to do with the complexity of questions about existential risk, the vast amount of uncertainty, and the lack of feedback mechanisms or empirical foundations that could help hone calibration. Also, while this might be more a confession of my own limitations, I worry that people are quite bad at having well calibrated subjective probabilities about very small numbers, largely due to something like scope insensitivity. It is almost impossible for me to mentally distinguish between a 0.0001% chance of something happening and a 0.000001% chance of it happening, such that it becomes all too easy to misplace orders of magnitude. Losing an order of magnitude here or there when thinking about small probabilities of large values can lead to radically different conclusions. I think there also might be reasons to be biased in the direction of a slightly higher order of magnitude. The idea of a glorious intergalactic transhuman civilization is potent and inspiring, and the idea that I could help bring it about is electrifying. I could see that subconsciously leading me to some wishful thinking that I have a 0.01% chance of making a difference instead of a 0.0001% chance.

The case for strong longtermism is “unfalsifiable” and there is no good feedback mechanism to judge the value of strong longtermist interventions: The second leg of the epistemic critique is quite straightforward. Many aspects of the case for strong longtermism are untestable. They rely on assumptions with weak empirical foundations, and it is ultimately impossible to get strong confirmation of, or conclusively falsify, key claims. This is true of strong longtermist assumptions about the size and goodness of the human future, which will self-evidently be hard to verify until that future arrives, at which point longtermism will be a moot point. It is also true of assumptions about how high certain existential risks are and whether certain interventions decrease existential risk and, if so, by how much. These assumptions can’t be empirically verified because extinction is the sort of thing that by definition only happens to a species once, and if it does happen we won’t be around to interpret the data.

Nick Bostrom has suggested that we could look for “signposts” that proxy whether existential risk is increasing or decreasing. For example, a signpost might be the degree of international cooperation. While it is still pretty difficult to judge whether an intervention decreases or increases international cooperation, even with the benefit of a lot of hindsight (did the Kellogg-Briand Pact reduce great power war?), it seems at least possible to get some empirical confirmation. That said, there’s no definitive way to empirically confirm that a signpost is actually a reliable metric for existential risk, or to empirically test our estimates of how much of a decrease or increase in existential risk we should associate with a given change in one of our signposts.

There are a few main drawbacks to the “unfalsifiability” of strong longtermism. While this may seem a point too obvious to merit raising, feedback mechanisms are really important. Many interventions aren’t effective, despite there being plenty of reasons why someone might a priori think they would be. Being able to get empirical feedback on interventions allows for culling ineffective (or harmful) interventions while investing more in effective ones. It also makes it possible to tweak good interventions to make them better. This allows our understanding and effectiveness to compound over time. Being able to test things empirically also allows for serendipitous and unexpected discovery. The world can surprise you and contradict your assumptions. If it’s almost impossible to test interventions, the limit on our ability to improve them will be set by the confining dimensions of the human imagination. To sum up, interventions that don’t rest on a strong empirical foundation are just more likely to be wrong in some unexpected way, and those that don’t have feedback mechanisms might not allow for compounding improvements, making them less effective in the long run. As a result of these considerations, a strategy of pursuing testable interventions will, all else equal, yield much better returns over time than a strategy of pursuing interventions based on untestable speculation.

Because the case for strong longtermist interventions depends on fragile sequence thinking and untestable assumptions, we should have a strongly skeptical prior about their effectiveness compared to nearer term, robustly supported interventions whose effectiveness can be tested empirically. The arguments in this section could in theory be accommodated by simply incorporating a strong prior about the ineffectiveness of longtermist interventions into any cost benefit analysis (something MacAskill and Greaves don’t explicitly do). In practice, though, it might be hard to determine exactly how to set that prior, and relying on heuristics (e.g., more cluster thinking instead of sequence thinking) to supplement cause prioritization frameworks might reasonably be expected to lead to better results than relying on expected value calculations about the size and goodness of the long term future and how likely certain interventions are to help us achieve it.

 

(2) First Order Concerns About the Size and Goodness of the Far Future

So far, while I have mostly focused on the uncertainty and lack of empirical grounding of crucial considerations supporting strong longtermism, I haven’t directly challenged the idea that the human future will be very large and very good. In this section, though, I want to raise the question of whether we should at least regard it as plausible that the human future is small or bad. I will suggest that it is far from obvious that the future will be as good and as long as the proponents of strong longtermism claim. If the future might be relatively small or very bad, that would imperil the longtermist conclusion that the primary determinant of the value of our actions is their impact on the far future. 

Techno Utopian Whigs and the Philosophy of History: Strong longtermists seem to find most plausible what I call a “techno-utopian whig theory of history” (I’m hoping this term catches on, mostly for the t-shirts). The caricature of Whig historiography is the view that history inevitably progresses from a dark and unenlightened past to a glorious and flourishing future. While strong longtermists obviously see nothing inevitable about the progressive march toward a glorious human future, as their concern with existential risk and the possibility of e.g. global totalitarianism makes clear, they do tend to expect the human future to be incredibly long, densely populated, and maybe inconceivably good (an excellent example of this vision can be found in Chapter 8, “Our Potential”, of The Precipice by Toby Ord). They see it as likely that humanity will colonize at least our solar system and maybe even other galaxies, harnessing the vast energy of countless stars to power simulations of digital people experiencing blissful lives, possibly until the heat death of the universe. This vision implicitly assumes some cluster of the following beliefs: (1) that existential risk is relatively low (at least compared to the likely growth of the human or post-human population), (2) that technological progress is likely to continue, giving humanity the ability to become a technologically mature space faring civilization, (3) that these technological advances are unlikely to significantly increase existential risk in a way that would offset their expected value in increasing humanity’s ability to exploit the cosmic cornucopia, and (4) that the future wellbeing of sentient creatures is good on net in expectation and very possibly much better than at present.

Each one of these assumptions seems highly speculative and basically unfounded. At best, longtermists seem to be extrapolating forward the past ~400 years of technological and economic progress associated with the Scientific and Industrial Revolutions and the Enlightenment. This period is exceptional in human history, though, and has no true parallels, making it impossible to extrapolate it forward with any certainty. If the optimistic assumptions that undergird the strong longtermist worldview are wrong, the case for strong longtermism completely collapses because the expected value of the future shrinks or even starts to appear negative. And it seems at least plausible that these assumptions are wrong.

For one thing, existential risk might be high, such that the “expected value” of the human future is much lower than strong longtermists think: If existential risk is high, then the expected duration of the human future is likely quite low and much lower than what strong longtermism’s proponents consider a reasonable lower bound. In fact, even relatively low levels of existential risk per century result in much lower expectations for the duration of human civilization than strong longtermists seem to think plausible. 

MacAskill and Greaves set a lower bound of 10,000 centuries on the expected length of the human future by assuming humanity exists as long as the typical mammalian species, which they actually think is far too pessimistic an expectation as it attaches zero credence to more optimistic scenarios in which humanity invents digital life and colonizes the Milky Way or even just survives in its present flesh and blood state until the Sun’s increased heat leaves the Earth uninhabitable. Also, humanity is hardly a “typical mammalian species.” More optimistic scenarios set the human duration at between 10^8 centuries (if we colonize the Solar system) and 10^11 centuries (if we invent digital life and colonize the Milky Way). 

If you assume just a 1% chance of existential risk per century, though, the odds humanity is still around in 1,000 centuries are virtually 0.00%. For humanity to even have a 1% chance of still existing in 10^8 centuries, existential risk per century would need to be less than ~4.61 * 10^-8. By contrast, Toby Ord estimates that humanity’s risk of extinction in the 21st century is a whopping 1 in 6 (and that’s only because he thinks we’ll take steps to redress risks; otherwise, he thinks the risks could be as high as 1 in 3). If we assume an 8% chance of existential risk per century (half of what Ord assumes for the 21st century), the chances of humanity surviving even 100 centuries are just 0.02%. Given these considerations, far from representing a pessimistic lower bound, 10^4 centuries might represent an optimistic upper bound for the expected duration of humanity [Note: the odds of survival after x centuries equal (1-existential risk per century)^x].
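
A minimal sketch reproducing the survival figures above from the formula in the note (the risk levels and horizons are the ones used in this section):

```python
# Probability humanity survives x centuries at a constant per-century existential
# risk, per the note above: (1 - risk) ** x.

def survival_probability(risk_per_century, centuries):
    return (1.0 - risk_per_century) ** centuries

print(survival_probability(0.01, 1_000))  # ~4e-5: "virtually 0.00%" after 1,000 centuries
print(survival_probability(0.08, 100))    # ~0.00024: roughly 0.02% after 100 centuries

# Per-century risk needed for even a 1% chance of surviving 10^8 centuries:
print(1 - 0.01 ** (1 / 1e8))              # ~4.61e-8
```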

It’s worth noting that if longer human durations are unlikely, larger human population sizes per century are also vanishingly unlikely. For example, MacAskill and Greaves’s high end estimate for human population size per century comes from the scenario in which humanity settles the Milky Way at carrying capacity with digital lifeforms, enabling a population size of 10^34 per century. The farthest parts of the Milky Way are something like 75,000 lightyears away; even if colonization ships travel at 1% of the speed of light, fully settling the galaxy would take on the order of 7.5 million years, or 75,000 centuries. If we assume even a 0.1% chance of existential risk per century, the odds of humanity surviving those 75,000 centuries are a mere 2.58*10^-33. Assuming humanity will definitely invent digital life and settle the entire galaxy if we don’t suffer an existential catastrophe (which seems like a very strong assumption), the expected value from our first century as a galactic civilization would be only ~25 lives.

MacAskill and Greaves think a reasonable expected value for the size (centuries of human existence * population size per century) of the human future would be at least 10^24, in large part because even attaching small credences to astronomically large human futures nets out to a very large expected future. For example, even a 10^-10 chance of settling the Milky Way with digital life for a single century would mean 10^24 expected humans. This contention collapses, though, if achieving astronomical futures with massive durations is vanishingly improbable, as my calculations suggest. I will also note that this dovetails with the point I raised in my epistemic critique that people have a hard time thinking clearly about very small orders of magnitude. Saying “humanity has only a 0.1% chance of colonizing the Milky Way” sounds conservative because we just file “0.1% chance” under “very, very small number” in our heads, despite the fact that a 0.1% chance of humanity settling the Milky Way might seem absolutely, mind-numbingly large when viewed from a different perspective.
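
The same formula makes the galactic-settlement numbers easy to check (the 10^34 digital lives per century figure is the one cited above; the 75,000-century settlement timescale is the assumption behind the 2.58*10^-33 figure):

```python
# Checking the galactic-settlement arithmetic from the two paragraphs above.
# Assumptions: 0.1% existential risk per century, ~75,000 centuries to settle
# the Milky Way, and 10^34 digital lives per galactic century.

survival_to_settlement = (1 - 0.001) ** 75_000
print(survival_to_settlement)                                # ~2.58e-33

lives_per_galactic_century = 1e34
print(survival_to_settlement * lives_per_galactic_century)   # ~26 expected lives

# The Greaves and MacAskill-style point: even a tiny credence in the galactic
# scenario yields a huge expected future.
print(1e-10 * lives_per_galactic_century)                    # 1e24 expected lives
```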

The concept of “the precipice” developed by Toby Ord offers longtermists a counterargument to the back-of-the-envelope calculations I’ve sketched out here. On Ord’s view, existential risk is likely just concentrated in the next few centuries, perhaps because humanity is likely to design technologies or institutions that provide existential security (e.g., aligned superintelligence, rapid vaccine development technology that neutralizes bioweapons, effective nuclear missile defense systems, world government that eliminates the risk of great power war, a security state apparatus that minimizes the risk of omnicidal terrorists equipped with bio or nuclear weapons, etc.) or perhaps because leaving Earth and spreading out through space will make it less likely for any one risk to wipe out all human settlements.

This argument is plausible, but certainly not a body blow to my objection. There are several responses worth contemplating:

  1. Even if existential risk is just high for the next few centuries, if it is high enough, the expected value of humanity might remain quite low. For example, if existential risk per century over the next five centuries is 1/2 (the odds Martin Rees gives humanity of surviving the 21st century), then the odds of humanity surviving even those five centuries are only about 3%. It seems entirely plausible to think that risk over the next few centuries could be quite high. While 50% might be an overestimate (though some, like Eliezer Yudkowsky, would seemingly say it’s optimistic), other plausible numbers also seem frighteningly high. As I mentioned above, Ord estimates the risk this century at 1 in 6, and it is possible (likely?) that risks will continue to increase over the coming few centuries. Ord argues that existential risk before the middle of the 20th century was quite low and it was only with the advent of anthropogenic risks like nuclear war, pandemics in a globalized world, and climate change that risks increased. The vast majority of the risk that Ord identifies in the 21st century comes from the possibility of transformative artificial intelligence and engineered bioweapons, risks that were substantially lower in the 20th century. Extrapolating these trends forward, it seems likely that, at least over the next few centuries, existential risks will continue to rise due to the invention of new, more powerful technologies like nanobots or atomic manufacturing or other transformative technologies that are as unimaginable to us today as nuclear weapons would have been to Jeremy Bentham.
  2. It might be very hard to attain existential security because it is cheaper and easier to produce destructive capabilities than it is to invent defensive technology. Pessimistically, the cost of destructive capabilities could decline and ease of access increase precipitously in coming decades. Even if humanity spreads out into space, risks might remain elevated due to the invention of destructive capabilities that could threaten even transplanetary civilization. Do we have strong reasons to believe this is unlikely? Could a technologically mature humanity gain the ability to e.g. trigger false vacuum decay? Similarly, it might be quite hard to design institutions that can effectively mitigate risks due to collective action problems and entrenched oppositional interests (as has been the case with climate change or nuclear proliferation, for example).
  3. Even more pessimistically, the very things humanity would need to do to achieve existential security might drastically increase existential risk, at least in the short term, exacerbating (1). In other words, even if humanity achieves existential security, it could come with a very high “risk cost,” offsetting forward looking gains in the expected value of humanity. For example, Daniel Deudney argues that, at least in the near term, attempts to expand into space are unlikely to reduce existential risks and more likely to increase them. Carla Zoe Cremer and Luke Kemp suggest that pursuing differential technological development might be far more likely to increase existential risks than to actually result in differential technological development, and they raise concerns that the type of surveillance state apparatus that could be needed to effectively police risks would also be likely to collapse into global totalitarianism.

Because a higher risk of extinction implies a lower expected size of the human future, the higher existential risk is, the less impactful a given reduction in risk. To give a simple example, say absent existential risk, there will be 10^10 human lives per century. If existential risk per century is 10%, the expected number of future human lives will equal 100 billion. If we reduce risk this century by one percentage point (to 9%), that will in expectation save 1 billion lives. If existential risk per century is 50%, the number of expected human lives is only 20 billion and a one percentage point reduction in risk this century (to 49%) saves only 200 million expected lives. 
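
Here is a minimal sketch of the toy model behind these numbers (the geometric-series formulation is my reconstruction of the arithmetic in the example):

```python
# Toy model from the example above: N lives per century and a constant per-century
# existential risk r give expected future lives of N * (1 + (1-r) + (1-r)^2 + ...)
# = N / r. Cutting this century's risk by d (later centuries unchanged) adds
# d * N / r expected lives.

def expected_future_lives(lives_per_century, risk_per_century):
    return lives_per_century / risk_per_century

def lives_saved(lives_per_century, risk_per_century, reduction_this_century):
    return reduction_this_century * expected_future_lives(lives_per_century, risk_per_century)

N = 1e10
print(expected_future_lives(N, 0.10))   # 1e11: 100 billion expected future lives
print(lives_saved(N, 0.10, 0.01))       # 1e9: 1 billion lives saved
print(expected_future_lives(N, 0.50))   # 2e10: 20 billion expected future lives
print(lives_saved(N, 0.50, 0.01))       # 2e8: 200 million lives saved
```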

If existential catastrophe in the long run is almost inevitable, such that the expected size of the human future is several orders of magnitude smaller than Greaves and MacAskill, for example, believe, near termist interventions might actually be more cost effective than longtermist interventions. Note that in the examples MacAskill and Greaves give in their paper, if there are only 10^14 people in the future, distributing insecticide treated malaria nets would be more effective than the interventions they consider for reducing existential risk from asteroids and pandemics.

Longtermists might respond that if risk is high it will likely be more tractable, such that it is much easier to reduce existential risk by one percentage point if risk per century is 50% than if it is 10%. This might be true if risk comes from one source and that source can be easily addressed. If the risk from that source is quite high, addressing it will lead to a big reduction in risk. For example, if there’s a 50% chance of an asteroid hitting Earth and wiping out humanity, but the asteroid is easy to deflect and we deflect it, we’ve cut a big chunk of risk (though not necessarily 50% of the risk that century, as some risks might be weakly or negatively correlated with asteroid strike - e.g., if the asteroid kills us, AGI can’t). That said, existential risk might be high because it comes from many different sources, or it might be very hard to effectively reduce the risk in question, for example, if the risk required international coordination to solve, if there were vested interests that opposed risk reduction efforts, or if AI alignment is really complicated.

Also, a lot hinges on how you think about what it means for a risk to be high. Under some ways of thinking about risk, how tractable a risk is factors into how high it is, making it contradictory to talk about high and tractable risks. For example, if it is really easy and cheap to deflect asteroids, at least one government has an incentive to unilaterally deflect the asteroid, and no vested interests would oppose deflecting the asteroid, it probably isn’t quite right to say it had a 50% chance of killing us. If we looked at 1,000 civilizations of comparable technological advancement, all about to get hit with asteroids of similar sizes, almost all of them would deflect the asteroid, suggesting the risk is actually quite low. Another way to think about this is that high and tractable risks are unlikely to be neglected and so are probably not the best focus area for altruists trying to have a counterfactual impact.

The Future Might be Bad in Expectation: It is possible that the future is actually bad in expectation, perhaps because it contains more suffering than pleasure. For example, Brian Tomasik suggests that if humanity spreads across space or creates digital life, it will likely increase total suffering by spreading wild animals to other planets or by running vast simulations of sentient beings who live lives filled with pain. Or what if stable global totalitarianism of the kind imagined by George Orwell dominates the set of possible futures, such that “if you want a picture of the future, imagine a boot stamping on a human face—forever”? It is possible that, for reasons we don’t yet understand, technologically mature space faring civilizations tend toward dystopian horror, perhaps because such arrangements give them a Darwinian edge (one way to read Liu Cixin’s Death’s End is as asking what humanity would need to become to successfully expand into space and whether it is worth it). If these scenarios are even somewhat plausible, we should reduce the expected value of the future accordingly.

There is arguably also an asymmetry between how good a universe filled with pleasure would be and how bad a universe filled with pain would be, because it is possible for pain to be much worse than pleasure is good. As Schopenhauer put it, “A quick test of the assertion that enjoyment outweighs pain in this world, or that they are at any rate balanced, would be to compare the feelings of an animal engaged in eating another with those of the animal being eaten.” If you buy this argument, then even, say, a 25% chance of the future being dominated by astronomical suffering could offset a 75% chance of utopia. Similarly, if the future is likely to contain even relatively small pockets of astronomical suffering, those pockets could fully offset any value outside them.
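
To make the arithmetic behind that 25%/75% claim explicit: it holds exactly when astronomical suffering is at least three times as bad as utopia is good. The factor of three is the break-even point implied by those probabilities, not a figure from the text:

```python
# Break-even check for the 25% / 75% claim above. The factor of 3 is the implied
# break-even asymmetry, assumed here only for illustration.

p_suffering, p_utopia = 0.25, 0.75
utopia_value = 1.0
suffering_value = -3.0 * utopia_value   # assume suffering is 3x as bad as utopia is good

expected_value = p_suffering * suffering_value + p_utopia * utopia_value
print(expected_value)   # 0.0: at this asymmetry the chance of utopia is fully offset
```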

Do we have any compelling reason to say the expected value of the future is positive? MacAskill argues in chapter 9 of What We Owe the Future that the future will probably be good because humans are mostly not inherently sadistic and by and large only cause terrible suffering as a means to other non-inherently-terrible ends. So we should expect the human future to converge toward the satisfaction of these non-inherently-terrible (I guess the preferred term would be “good”) ends. While I think this is plausible, there are at least three counterarguments worth considering. (1) As MacAskill acknowledges, there are some sadists, and they might be disproportionately likely to rise to power (history is tragically replete with examples). (2) Humans are quite capable of tolerating deeply, horribly bad suffering as a means to rather trivial “good” ends. Factory farming is a horrifying contemporary example of what MacAskill calls “anti-utopia”; farmed animals live brief lives of unimaginable misery, boredom, and pain. It is industrial scale torture. And the New York Times cooking section keeps writing about more ways to cook chicken. (3) Anti-utopia could come about because collective prisoner’s dilemma-like incentive structures lead people to take courses of action that result in outcomes no one wants (e.g., an AI arms race could lead to the creation of misaligned AI, which could lead to astronomical suffering).

Lastly, and perhaps most crucially, I would add that even if MacAskill is right and the expected value of the future is positive, it is certainly far less positive than it would be if not for the possibility of astronomical suffering and MacAskill and Greaves’s calculations in “The Case for Strong Longtermism” do not take that possibility into account. 

Strong longtermists typically respond that if we’re worried about the risk of astronomical suffering, we should still be longtermists, but should focus our efforts on preventing astronomical suffering rather than preventing human extinction. While this is sensible, longtermists don’t seem to be about to stop funding efforts to reduce extinction risk. And if the future is bad in expectation, trying to prevent human extinction has negative expected value. Conversely, I haven’t offered any arguments that prove the future is bad in expectation, and if the future is on net positive in expectation, failing to fund existential risk prevention would be a mistake.

In summary, we face a lot more “complex cluelessness” about the effect of our actions to benefit the far future than strong longtermism’s proponents imply. It is entirely plausible that the expected value of the future is far lower than they suggest, both because the human future is quite possibly much smaller than they believe and because whether it would be good at all is ambiguous. If the future is large but bad in expectation, we should focus on reducing the risks of astronomical suffering and absolutely avoid trying to prevent extinction risks. If the future is small but the risk of astronomical suffering is negligible, we might do better to focus on near termist interventions. And if the future is large and good, we should focus on reducing extinction risk. We have reasons to think the future will be small and reasons to think it will be large, and we have reasons to think it will be good and reasons to think it will be bad. It is very hard to know how to weigh these reasons against each other or to develop a consistent set of credences we can apply to different possibilities. Given the complex cluelessness we face about the value of the far future, it is hazy which actions have the best expected consequences and, a fortiori, what moral obligations we have.

That said, even if we set aside the value of the long term future, we have ample reasons to worry about and take steps to mitigate existential risks solely on the basis of less speculative near termist reasons, as argued by Scott Alexander, Matthew Yglesias, and Carl Shulman. But interventions to mitigate those risks will need to meet a higher “cost effectiveness bar” than if we base our decisions on the astronomical expected value of the far future. Also, even if the considerations voiced above are completely accurate, we could still have good reason to grant representation to future people in our “moral parliaments”; they just wouldn’t be granted a supermajority.

A note about living in a world with a small human future: When I presented the arguments in this section to a friend, he said he found this all very demotivating. If humanity faces near certain extinction and will never achieve a glorious space future, why should we even try? Perhaps saying “certain extinction” is too pessimistic. Maybe humanity will choose to avoid the existential risk that comes from blindly reaching into the urn of technological possibility by limiting technological advancement. This would also constitute an existential risk of sorts in that it would prevent humanity from achieving our intergalactic potential, and the human future that resulted would be “small,” but it could be long and it could be very good.  

That said, I understand where my friend is coming from. I find the idea of an intergalactic utopian civilization thrilling and deeply hope that is what the future holds for us. But even if humanity will inevitably go extinct, I think that just poses a question to humanity as a whole that each of us faces in our own lives. We are mortal and will almost certainly die, though many of those most drawn to longtermism also seem fairly keen on contesting that unfortunate facet of the human condition. For those of us who believe we will die, what reason do we have to do anything? “Merest breath,” said the Preacher, “merest breath, all is mere breath.” Suppose I rent a cabin in the woods and spend a weekend with some friends. It is only one weekend and we will only stay in the cabin a short while, but does that give us any reason not to care about how we spend our time or about the quality of our accommodations? Shouldn’t we drink in the beauty of nature, enjoy each other’s company, and treat each other with kindness and affection? It matters how we spend what time we have even if it is short, maybe all the more so if it is short. If the human future is limited, we should still use the time we have to cultivate beauty, fight for justice, and achieve what value we can to make our brief visit better rather than worse.

 

(3) Why Near or Medium Term Interventions Might be the Best Way to Impact the Far Future

Another important reason why we might want to abjure highly speculative interventions aimed at reducing existential risk or preventing the possibility of astronomical suffering is that, at some margin, the best way to have a positive impact on the far future is to do what is best for the near or medium term future. At this margin, acting on longtermist considerations seems to add nothing to our deliberations about the best course of action. In fact, such considerations might even bias our thinking in ways that make it more likely we will overspend on trying to directly reduce existential risk, missing out on better opportunities to improve the expected value of the long term future by improving the present.

It seems likely that at some margin, direct spending on interventions intended to reduce existential risk will hit diminishing marginal returns. Some existential risks might be more important or more tractable than others, such that targeting them will result in greater reductions in existential risk. Some interventions might be more effective at reducing risks than others. Once we pick the low hanging fruit of the most promising interventions, successfully eliminate the most tractable sources of risk, or adequately address the largest sources of risk, the next best far future interventions will be far less impactful.

At some point, near term or medium term interventions are likely to have higher expected impact. This would be trivially true if the expected reduction in risk from direct longtermist interventions drops so much that the expected long term impact of the interventions falls below even the expected near term impact of e.g. donating to AMF (this does assume that donating to AMF has no expected long term negative consequences that offset the near term positive expected impact). But near and medium term interventions are likely to be more attractive than longtermist interventions long before that point is reached because the long term positive impacts of near and medium term interventions could be significant. Below, I briefly explore three ways the long term impact of near and medium termist interventions could be high:

  1. Pursuing certain near or medium term interventions might be the best way to reduce existential risk (at least at some margin). In Liu Cixin’s Remembrance of Earth’s Past trilogy (note: very mild spoilers to follow; feel free to just skip to the next paragraph if you want), humanity faces an impending existential catastrophe. The world starts pouring all its resources into preparing to meet the threat and reorganizing societies and governments to face it, such that people’s lives and goals and their sense of their future are dominated by the threat. The mass mobilization and single minded focus on preparing to meet the oncoming calamity lead to deep economic deprivation and ecological collapse as all resources are devoted to the cause, resulting in mass death. Societal unrest is widespread and technological progress stagnates. Only once humanity stops devoting all its resources toward averting the threat, and allows people to focus more on cultivating the arts and culture and on building a society people want to live in, does it start to make the technological progress necessary to save itself. Devoting all society’s resources toward directly averting the threat was counterproductive because it led to despair and hopelessness, mass deprivation, and social upheaval, which were not conducive to technological innovation. Similarly, I doubt mass mobilization would be the correct approach to dealing with the risks we face today. In fact, it seems plausible that we should start prioritizing near termist interventions long before we get to that point. For example, building an excellent education system might be crucial for allowing us to develop the technologies we need to avert existential risks and might also help us avert the risk of economic stagnation. Similarly, ensuring equality of opportunity, improving the educational systems of developing countries, or making it easier for people to immigrate to developed countries on the technological frontier could massively increase the pool of scientific talent working on humanity’s most pressing problems, further improving our chances. Generous and well-designed welfare states might be important for maintaining the legitimacy of democracies and preventing demagogues and populists from seizing power and pursuing irresponsible policies that increase existential risks. Maintaining well functioning democracies, defending the rule of law, and opposing autocracy could be crucial for preventing the lock-in of bad values and could also check the possible rise of leaders with dictatorial power who might take unilateral steps that increase existential risks (as Castro might have when he insisted that the Cuban people would be happy to sacrifice themselves in a nuclear war against imperialists). And there might be many more examples that I’m not thinking of.
  2. Locking in good values might be very important for ensuring humanity achieves its full potential, and some of the best ways to maximize the chances of positive value lock-in are by pursuing near term interventions. For example, in What We Owe the Future, MacAskill suggests that feminism and abolitionism might have been some of the most impactful "longtermist" movements of past centuries because he thinks that e.g. the abolition of slavery was highly contingent and advanced industrial civilizations might still have slaves if not for the institution's 18th and 19th century opponents. Similarly, if factory farming is not opposed and abolished, the astronomical suffering it entails might become a long lasting feature of the human future, and the lack of concern that humans show to animals might also be used to justify mistreating other sentient beings (like digital lives) if it is convenient for us. It is also possible that one of the best ways to ensure humanity has good values is to create good institutions that instantiate those values, because doing so makes those values seem natural, normal, and legitimate and helps refute conservative critics. At various times and in various places, people have found it all too easy to scoff at ideas that we now take for granted, like broad based democracy, progressive taxation, the idea that it is wrong to own human beings, or the idea that women should be able to own property. But the worst outcomes predicted by the opponents of these ideas failed to materialize, and now many of us can barely understand how anyone could think differently. Also, some people might have an incentive to support and advocate for noxious ideas and values because they benefit from the institutions those values support. For example, racism was certainly a convenient ideology for slave holders in the American South, and people who enjoy eating meat or who make money from factory farming are incentivized to hold speciesist beliefs.
  3. Institutions or interventions intended to serve near or medium termist ends can have persistent effects, allowing them to shape the long term future. For example, research by Acemoglu, Johnson, and Robinson suggests that extractive institutions built to support European colonialism hundreds of years ago explain a large part of underdevelopment in Africa today. Conversely, some historical moments might provide rare opportunities for particularly impactful near termist interventions that "lock in" long term value, and the need to capitalize on those moments might make the benefits of those interventions highly contingent. For example, the aftermath of WWII might have been a particularly good time to build welfare states in Europe, when people craved safety and stability in the wake of war, societies were ethnically homogeneous, economic growth was high and demographics favorable, and the threat of Communism provided additional incentive.

It is also worth noting that, if humanity's remaining duration is actually relatively short, as I suggested above was possible, near term and medium term interventions that have impacts on the scale of decades or even a couple of centuries might be much more effective than very small reductions in existential risk.
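
To make the comparison concrete, here is a minimal back-of-the-envelope sketch in Python. All of the numbers are hypothetical assumptions chosen purely for illustration (the lives per century, the per-century survival probabilities, the size of the risk reduction, and the near term payoff are not estimates I am defending); the point is only that the ranking can flip as the expected duration of the human future shrinks.

```python
# Toy expected-value comparison; every number below is an illustrative assumption.

def expected_future_lives(lives_per_century: float, p_survive_century: float,
                          max_centuries: int = 10_000) -> float:
    """Expected future lives under a crude model where humanity survives each
    century independently with probability p_survive_century."""
    return sum(lives_per_century * p_survive_century ** t
               for t in range(1, max_centuries + 1))

LIVES_PER_CENTURY = 10e9        # hypothetical: ~10 billion lives per century
RISK_REDUCTION = 1e-7           # hypothetical: intervention cuts extinction risk by 0.00001%
NEAR_TERM_LIVES_SAVED = 10_000  # hypothetical: payoff of a robust near term intervention

for p in (0.999, 0.99, 0.9):    # optimistic to pessimistic per-century survival odds
    longtermist_ev = RISK_REDUCTION * expected_future_lives(LIVES_PER_CENTURY, p)
    print(f"p(survive a century) = {p}: "
          f"longtermist EV ~ {longtermist_ev:,.0f} lives, "
          f"near term EV ~ {NEAR_TERM_LIVES_SAVED:,} lives")
```

With these made-up inputs, the longtermist intervention dominates when per-century survival odds are high, but the robust near term intervention comes out ahead once the expected human duration is short; the qualitative pattern, not the specific numbers, is what matters.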

If the best long run effects can be produced by near termist or medium termist interventions, then longtermist reasons are superfluous. In fact, in some such cases, longtermism might even be indirectly self-defeating in the sense used by Derek Parfit in Reasons and Persons; if we try to directly achieve longtermist aims, those aims will be worse achieved. Longtermist reasoning might be biased in favor of interventions that directly reduce risk because the direct causal chain makes it easier and more natural to visualize possible long term impacts. That bias might lead longtermists (particularly strong longtermists) to continue to prioritize longtermist interventions past the margin where it is optimal to do so. So in many cases, we might actually do better if we are guided by near or medium term considerations. To paraphrase Bernard Williams, I leave it for discussion whether that shows that the idea that what is best is what is best for the far future, regardless of its effects on the present, is unacceptable or merely that no one ought to accept it. 

 

(4) A Note on Ethical Concerns about Strong Longtermism

Other critics of strong longtermism have shown extensively how it can violate many of our common sense moral intuitions. I tend to be somewhat more sympathetic to the idea that many of our common sense moral views could be quite wrong and should be open to revision based on arguments. That said, I think there are concerning aspects of strong longtermism that suggest there might be an "ethical cost" to adopting it as a view, acting on it, and advocating for it as an ideology, even if one adopts a consequentialist framework (and even a narrowly utilitarian one).

It might be that advocating for an alternative ideology would bring many of the same benefits as strong longtermism at a lower ethical cost, such that believing in, advocating for, and acting on the basis of this "weaker" longtermism could actually lead to better long run consequences. A convinced strong longtermist should then, on this basis, actually try to adopt this "weaker" longtermism, advocate for it, and deploy it in her deliberations. I think many longtermist advocates at least implicitly recognize this: in their popular writings, they typically adopt less consequentialist, more pluralistic, and less fanatical arguments than those employed in, for example, "The Case for Strong Longtermism."

I think there are three broad sources of “ethical cost”:

  1. Making it harder to build a broad coalition: strong longtermism doesn't seem to have broad appeal because it comes off as "weird." It violates common sense moral intuitions, rests on assumptions about the moral worth of potential lives that many people don't necessarily share (even people firmly entrenched in the EA/rationalist world), and it almost inevitably involves talking about the deep future in a way that sounds like speculative science fiction. Concern about the human future will likely end up with far less broad based appeal if it grounds its case on strong longtermist reasoning, and it even risks discrediting existential risk reduction by association. The movement will likely be more effective if it can command broad based legitimacy and garner enthusiasm from elites.
  2. Motivated reasoning: because there is so much uncertainty around the key assumptions that underlie strong longtermism and because there are few feedback mechanisms that can serve as a check on speculation, it is very easy for strong longtermists to engage in self-serving motivated reasoning. While I am certainly not accusing anyone of actually engaging in motivated reasoning, the risk is very much there. As effective altruists continue to gain influence, power, and financial resources, this will become more and more of a risk, especially if the movement’s resources attract opportunists.
  3. Fanaticism: Because strong longtermism makes such large claims for the potential impact of longtermist interventions, it risks empowering fanatics. While the example some have thrown around of longtermism being used to justify nuking Germany to get at one possible omnicidal terrorist seems melodramatic and implausible (isn't it pretty obvious that an unprovoked nuclear attack on Germany would actually increase risks?), relying on and elevating the status of ideologies that can justify fanaticism poses dangers, and we probably want strong cultural antibodies against any such views. The most obvious risk of fanaticism is that longtermism is used by well-intentioned people to justify short term atrocities (or even just repressive measures like those suggested by Nick Bostrom) in the name of speculative reductions in existential risk. But fanaticism also carries other costs. For one thing, it tends to empower the people who feel the least compunction about violating common sense morality. These people often have multiple character flaws and, even if they really do believe the ideology they espouse, might be particularly sadistic and power-hungry. Fanatics are able to dominate certain discursive spaces because people find it hard to refute the purest and most fanatical formulations of their professed values. Once fanatics take over, the reality they create often fails to live up to the ideals their movements were initially meant to represent. Accounts in books like The God That Failed and Darkness at Noon of how communism, despite the high ideals on which it was based, ended up creating dystopian societies are highly instructive.

 

What Should We Do Instead?

There is a positive program implicit in my critique, but I want to draw it out here. As I said at the start, this is a “conservative” critique in many ways. I believe the EA community already to a large extent practices what I suggest below. My case is by and large that we should continue on the path we are on, though I see significant risks that we are drifting in the wrong direction. That said, my solutions are phrased in the abstract and I’m not focused on the practices of particular institutions or organizations. Those critiques are for a different time and perhaps a different writer, though I hope this piece can help provide some ammunition. 

We should maintain a high "cost effectiveness bar" for longtermist interventions. The aim of this piece was not to suggest that future lives don't matter or that we shouldn't try to reduce existential risk. What I have argued is that we shouldn't fund highly speculative interventions with tiny probabilities of reducing existential risk or the risk of astronomical suffering by some minuscule amount over robustly supported and high impact near termist interventions. Longtermist interventions we do pursue should be supported by cluster thinking style arguments - i.e., many weakly correlated, well-founded considerations, such that our view on the best actions to take seems relatively unlikely to change even if we learned more. The expected size and goodness of the far future can be one consideration in support of longtermist arguments, but it should be only one, and it certainly should not completely dominate the calculus. The most robust interventions to reduce existential risk should also look relatively attractive if we just think about those currently alive and maybe the next few generations, though I don't mean they should be pursued only if they appear maximally cost-effective in the near term.

We should practice worldview diversification. Because the size and goodness of the far future is highly uncertain, EAs should practice worldview diversification. On many plausible views of how the future could go or of the true impact of longtermist interventions, near termist interventions are comparatively cost effective. Just as holding a diversified portfolio of assets is likely a better way for most investors to compound wealth over time than trying to actively time markets based on their views about the most or least attractive assets, a diversified portfolio of interventions that all look comparably high impact if the worldviews they rest on are true might be the best way to maximize impact over time in a world of uncertainty. Furthermore, at some margin, the best way to help the far future is through near term or medium term interventions, and it is highly unclear where that margin is. Worldview diversification can also bring into EA people who might be alienated by or uninterested in more longtermist interventions; in other words, worldviews don't necessarily compete with each other within EA. And there are important and valuable ideas within the broader EA worldview that we should want to spread to people who might not buy into longtermism. Lastly, worldview diversification is another way to set a minimal cost-effectiveness bar for longtermist interventions. If we've allocated some of our money to other worldviews, we can't spend it all on longtermism. As a result, the most speculative and very likely ineffective proposals will be less likely to get funding.
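
As a rough illustration of the portfolio logic, here is a toy sketch. The worldviews, probabilities, and cost-effectiveness figures are entirely made-up assumptions, and it captures only one piece of the intuition: going all-in risks accomplishing nothing if the favored worldview turns out to be wrong.

```python
# Toy illustration of worldview diversification; every number below is made up.

BUDGET = 100.0  # hypothetical units of funding

# Hypothetical impact per unit of funding in each cause area, under the scenario
# in which a given worldview turns out to be (roughly) correct.
impact_if_right = {
    "longtermist":  {"x-risk": 1000.0, "global health": 1.0,  "animal welfare": 1.0},
    "neartermist":  {"x-risk": 0.0,    "global health": 10.0, "animal welfare": 5.0},
    "animal-first": {"x-risk": 0.0,    "global health": 1.0,  "animal welfare": 50.0},
}
scenario_probs = {"longtermist": 0.3, "neartermist": 0.5, "animal-first": 0.2}

def evaluate(allocation):
    """Return (expected impact across scenarios, worst-case scenario impact)."""
    per_scenario = {
        s: sum(allocation[c] * impact_if_right[s][c] for c in allocation)
        for s in scenario_probs
    }
    expected = sum(scenario_probs[s] * per_scenario[s] for s in per_scenario)
    return expected, min(per_scenario.values())

all_in = {"x-risk": BUDGET, "global health": 0.0, "animal welfare": 0.0}
diversified = {"x-risk": 40.0, "global health": 40.0, "animal welfare": 20.0}

for name, alloc in [("all-in on x-risk", all_in), ("diversified", diversified)]:
    expected, worst = evaluate(alloc)
    print(f"{name}: expected impact = {expected:,.0f}, worst case = {worst:,.0f}")
```

On these invented numbers the all-in allocation has the higher expected impact but can end up with zero impact if its worldview is wrong, while the diversified allocation guarantees meaningful impact in every scenario. The trade-off, not the particular figures, is the point.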

I do think there is some room for the community to do a better job of practicing worldview diversification. For example, what "medium term" interventions should we prioritize if we were trying to maximize impact over, say, the next ~200 years? It seems plausible that such medium term interventions could be comparably effective or even more effective than longtermist ones under certain assumptions (especially if there are more opportunities for funding cost-effective and robustly supported ones). This area could benefit from more research. For example, might some forms of political change look more impactful from a medium term perspective? Might prioritizing economic growth be more attractive? Might preventing global catastrophic risks that are unlikely to lead to extinction (e.g., COVID-19) seem comparatively impactful? I'm also worried EA career advice is too concentrated on longtermism. 80,000 Hours is the dominant EA career advice organization, and it has bought into (strong?) longtermism to a greater degree than other institutions like Open Philanthropy. I'd like to see more thinking about careers based on other plausible worldviews.

Lastly, we should build a moral case for reducing existential risk and positively shaping the future that doesn't hinge on the expected value of an overwhelmingly large human future. We should reject attempts to set priorities within the EA community or at the largest and most influential EA organizations solely on the basis of strong longtermist considerations, and we should make sure prioritization decisions can be defended from a multiplicity of worldviews and values. Doing so will make it easier to build broad based coalitions, maintain the movement's legitimacy, reduce the risk of takeover by fanatics, and make it more difficult to engage in motivated reasoning.

I believe following these recommendations is the best way to ensure the EA movement truly lives up to its potential, including its long run potential. 

 


 

Comments

Thank you for writing this post. I agree with many of your arguments, and criticisms like yours deserve to get more attention. Nevertheless, I still call myself a longtermist, mainly for the following reasons:

  • There exist longtermist interventions that are good with respect to a broad range of ethical theories and views about the far future, e.g. screening wastewater for unknown pathogens.
  • Sometimes it is possible to gather further evidence for counter-intuitive claims. For example, you could experiment with existing large language models and search for signs of misaligned behaviour.
  • There may exist unknown longtermist interventions that satisfy all of our criteria. Therefore, a certain amount of speculative thinking is OK as long as you keep in mind that most speculative theories will die.

All in all, you should strike a balance between overly conservative and overly speculative thinking.

Thanks for the thoughts. I basically agree with you. I'd consider myself a "longtermist," too, for similar reasons. I mainly want to reject the comparatively extreme implication of "strong longtermism" as defended by Greaves and MacAskill: that extremely speculative and epistemically fragile longtermist interventions are more cost effective than even the most robust and impactful near termist ones.

I think there are likely a lot of steps we could and should be taking that could quite reasonably be expected to reduce real and pressing risks.

I would add to your last bullet, though, that speculative theories will only die if there's some way to falsify them or at least seriously call them into question. Strong longtermism is particularly worrying because it is an unfalsifiable theory. For one thing, too much weight is placed on one fundamentally untestable contention: the size and goodness of the far future. Also, it's basically impossible to test whether speculative interventions intended to very slightly reduce existential risk are actually successful (how could we possibly tell whether risk was reduced by 0.00001%? Or increased by 0.00000001%?). As a result, it could survive forever, no matter how poor a job it's doing.

Longtermist interventions (even speculative ones) are likely easier to reject if they prove ineffective when they are supported by "cluster thinking" styles that put more weight on more testable assumptions (e.g. about the neglectedness or tractability of some particular issue, or about the effect an intervention could have on certain "signposts" like international coordination or the rate of near misses) or when they are intended to lead to more significant reductions in existential risk (which could be somewhat easier to measure than very small ones).

This is in essence the claim of the epistemic critique of strong longtermism. Notice that this way of framing the epistemic concern does not involve rejecting the Bayesian mindset or the usefulness of expected value theory. Instead, it involves recognizing that to maximize expected value, we might in some cases want to not rely on expected value calculations.

 

Hmm. I get what you mean. To make the best decision that I can, I might not use expected value calculations to compare alternative actions when, for example, there's no feedback on my action or the probabilities are very low and hard to guess because of my own cognitive limitations.

An outside view applies heuristics (for example, the heuristic "don't do EV when a subjective probability is below 0.0001") to my decision of whether to use EV calculations, but it doesn't calculate EV. I would consider it a belief.

Belief: "Subjective probabilities below 0.0001% attached to values in an EV calculation are subject to scope insensitivity and their associated EV calculations contain errors."

Rule: "If an EV calculation and comparison effort relies on a probability below 0.0001%, then cease the whole effort."

Bayesian: "As an EV calculation subjective probability drops below 0.0001%, the probability that the EV calculation probability is actually unknown increases past 80%."
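
To make the "Rule" formulation concrete, here is a minimal code sketch; the 1e-6 cutoff (i.e. 0.0001%) and the toy payoffs are just the illustrative figures from above, not recommendations.

```python
# Minimal sketch of the threshold rule above; the cutoff and payoffs are illustrative.

PROB_FLOOR = 1e-6  # 0.0001%: below this, treat subjective probabilities as unreliable

def expected_value(outcomes):
    """outcomes: list of (subjective probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

def compare_by_ev(option_a, option_b):
    """Return the preferred option, or None if the rule says to abstain because
    some subjective probability in either option falls below the floor."""
    if any(p < PROB_FLOOR for p, _ in option_a + option_b):
        return None  # cease the EV comparison; fall back on other considerations
    return "A" if expected_value(option_a) >= expected_value(option_b) else "B"

# A speculative option with a tiny probability of an astronomical payoff...
speculative = [(1e-8, 1e15), (1 - 1e-8, 0.0)]
# ...versus a robust option with a well-evidenced, modest payoff.
robust = [(0.9, 1e4), (0.1, 0.0)]

print(compare_by_ev(speculative, robust))  # prints None: the rule refuses to compare
```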

I can see this issue of the EV calculation's subjective probability being so small as similar to making a visual distinction between tiny and very tiny. You might be able to see something, but it's too small to tell whether it's 1/2 the size of something bigger, or 1/10. All you know is that you can barely see it, and the same goes for anything 10X bigger than it.

The real question for me is whether the Bayesian formulation is meaningful. Is there another formulation that a Bayesian could make that is better suited, involving priors, different assertions, probabilities, etc.?

I tried imagining how this might go if it were me answering the question.

Me:

Well, when I think someone's wrong, I pull out a subjective probability that lets me communicate that. I like 80% because it makes people think of the 80/20 rule and then they think I'm really smart and believe what I'm telling them. I could list a higher probability, but they would actually quibble with me about it, and I don't want that.  Also, at that percentage, I'm not fully disagreeing with them. They like that.

I say stuff like, "I estimate an 80% probability that you're wrong," when I think they're wrong.

And here's the problem with the "put some money behind that probability" thing. I really think they're wrong, but I also know that this is a situation in which verifying the truth to both sides' satisfaction is tricky, and because the verification is over money, there are all kinds of distortions likely to occur once money enters into it. It might actually be impossible to verify the truth. The other side and I both know that.

That's the perfect time to use a probability and really make it sound carefully considered, like I really believe it's 80%, and not 98%. 

It's like knowing when to say "dibs" or "jinx". You have got to understand context.

I'm joking. I don't deceive like that. However, how do you qualify a Bayesian estimate as legitimate or not, in general?

You have gone partway here, rejecting EV calculations in some circumstances. You have also said that you still believe in probability estimates and expected value theory, and are instead just careful about when to use them. 

So do you or can you use expected value calculations or subjective probabilities to decide when to use either?

I enjoyed skimming your post, and appreciate many of your points.

One concept particularly struck me: "It's worth noting that if longer human durations are unlikely, this also means larger human population sizes per century are also vanishingly unlikely."

I often hear the classic argument that "there is a possibility human populations are really really big in the future, and the future is so long that their wellbeing matters really quite a lot." I've never played around with the idea that there is a lot of doubt over large populations for long times, which must be accounted for and would lessen their importance in motivating me to take action now.

Arguing "this is all the more reason to work harder to make their lives come to pass" strikes me as slightly dishonest, since the opposite is equally true: maybe we do our best to make that giant future possible, and for some unforeseen reason people still don't (want to?) proliferate.

Thank you for provoking some reflection on my assumptions, and thanks for such a comprehensive post with accessible bolded topic sentences.
