Note: The Global Priorities Institute (GPI) has started to create summaries of some working papers by GPI researchers with the aim of making our research more accessible to people outside of academic philosophy (e.g. interested members of the effective altruism community). We welcome any feedback on the usefulness of these summaries.
Summary: The Case for Strong Longtermism
This is a summary of the GPI Working Paper "The case for strong longtermism" by Hilary Greaves and William MacAskill. The summary was written by Elliott Thornley.
In this paper, Greaves and MacAskill make the case for strong longtermism: the view that the most important feature of our actions today is their impact on the far future. They claim that strong longtermism is of the utmost significance: that if the view were widely adopted, much of what we prioritise would change.
The paper defends two versions of strong longtermism. The first version is axiological, making a claim about the value of our actions. The second version is deontic, making a claim about what we should do. According to axiological strong longtermism (ASL), far-future effects are the most important determinant of the value of our actions. According to deontic strong longtermism (DSL), far-future effects are the most important determinant of what we should do. The paper argues that both claims are true even when we draw the line between the near and far future a surprisingly long time from now: say, a hundred years.
Axiological strong longtermism
The argument for ASL is founded on two key premises. The first is that the expected number of future lives is vast. If there is even a 0.1% probability that humanity survives until the Earth becomes uninhabitable – one billion years from now – with at least ten billion lives per century, the expected future population is at least 100 trillion (10^14). And if there is any non-negligible probability that humanity spreads into space or creates digital sentience, the expected number of future lives is larger still. These kinds of considerations lead Greaves and MacAskill to conclude that any reasonable estimate of the expected future population is at least 10^24.
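The arithmetic behind the 100 trillion lower bound can be checked directly. A quick sketch (the inputs are the paper's illustrative figures; the variable names are mine):

```python
# Lower bound on the expected future population, using the paper's figures.
survival_prob = 0.001       # 0.1% chance humanity survives until Earth is uninhabitable
centuries = 10**9 // 100    # one billion years, expressed in centuries
lives_per_century = 10**10  # at least ten billion lives per century

expected_future_lives = survival_prob * centuries * lives_per_century
print(f"{expected_future_lives:.0e}")  # ~1e14, i.e. 100 trillion
```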
The second key premise of the argument for ASL is that we can predictably and effectively improve the far future. We can have a lasting impact on the future in at least two ways: by reducing the risk of premature human extinction and by guiding the development of artificial superintelligence.
Take extinction first. Both human survival and human extinction are persistent states. They are states which – upon coming about – tend to persist for a long time. These states also differ in their long-run value. Our survival through the next century and beyond is, plausibly, better than our extinction in the near future. Therefore, we can have a lasting impact on the future by reducing the risk of premature human extinction.
Funding asteroid detection is one way to reduce this risk. Newberry (2021) estimates that spending $1.2 billion to detect all remaining asteroids with a diameter greater than 10 kilometres would decrease the chance that we go extinct within the next hundred years by 1-in-300-billion. Given an expected future population of 10^24, the result would be approximately 300,000 additional lives in expectation for each $100 spent. Preventing future pandemics is another way to reduce the risk of premature human extinction. Drawing on Millett and Snyder-Beattie (2017), Greaves and MacAskill estimate that spending $250 billion on strengthening our healthcare systems would reduce the risk of extinction within the next hundred years by about 1-in-2,200,000, leading to around 200 million extra lives in expectation for each $100 spent. By contrast, the best available near-term-focused interventions save approximately 0.025 lives per $100 spent (GiveWell 2020). Further investigation may reveal more opportunities to improve the near future, but it seems unlikely that any near-term-focused intervention will match the long-run cost-effectiveness of pandemic prevention.
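Those per-$100 figures follow from simple expected-value arithmetic. A sketch (the function and its name are mine, purely for illustration; the inputs are the estimates quoted above):

```python
EXPECTED_FUTURE_POP = 10**24  # the paper's lower-bound estimate of expected future lives

def lives_per_100_dollars(total_cost, risk_reduction):
    """Expected lives saved per $100, given a total spend and the
    extinction-risk reduction that spend is estimated to buy."""
    expected_lives_saved = EXPECTED_FUTURE_POP * risk_reduction
    return expected_lives_saved / (total_cost / 100)

asteroid_detection = lives_per_100_dollars(1.2e9, 1 / 300e9)   # ~3e5, i.e. ~300,000
pandemic_prevention = lives_per_100_dollars(250e9, 1 / 2.2e6)  # ~2e8, i.e. ~200 million
```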
Of course, the case for reducing extinction risk hangs on our moral view. If we embrace a person-affecting approach to future generations (see Greaves 2017, section 5) – where we care about making lives good but not about making good lives – then a lack of future lives would not be such a loss, and extinction would not be so bad. Alternatively, if we expect humanity’s long-term survival to be bad on balance, we might judge that extinction in the near-term is the lesser evil.
Nevertheless, the case for strong longtermism holds up even on these views. That is because reducing the risk of premature human extinction is not the only way that we can affect the far future. We can also affect the far future by (for example) guiding the development of artificial superintelligence (ASI). Since ASI is likely to be influential and long-lasting, any effects that we have on its development are unlikely to wash out. By helping to ensure that ASI is aligned with the right values, we can decrease the chance that the far future contains a large number of bad lives. That is important on all plausible moral views.
While there is a lot of uncertainty in the above estimates of cost-effectiveness, this uncertainty does not undermine the case for ASL because we also have ‘meta’ options for improving the far future. For example, we can conduct further research into the cost-effectiveness of various longtermist initiatives and we can invest resources for use at some later time.
Greaves and MacAskill then address two objections to their argument. The first is that we are clueless about the far-future effects of our actions. They explore five ways of making this objection precise – by appeal to simple cluelessness, conscious unawareness, arbitrariness, imprecision, and ambiguity aversion – and conclude that none undermines their argument. The second objection is that the case for ASL hinges on tiny probabilities of enormous values, and that chasing these tiny probabilities is fanatical. For example, it might seem fanatical to spend $1 billion on ASI-alignment for the sake of a 1-in-100,000 chance of preventing a catastrophe, when one could instead use that money to help many people with near-certainty in the near-term. Greaves and MacAskill take this to be one of the most pressing objections to strong longtermism, but make two responses. First, denying fanaticism has implausible consequences (see Beckstead and Thomas 2021, Wilkinson 2022) so perhaps we should be fanatical on balance. Second, the probabilities in the argument for strong longtermism might not be so small that fanaticism becomes an issue. They thus tentatively conclude that the fanaticism objection does not undermine the case for strong longtermism.
Deontic strong longtermism
Greaves and MacAskill then argue for deontic strong longtermism: the claim that far-future effects are the most important determinant of what we should do. Their ‘stakes-sensitivity argument’ employs the following premise:
In situations where (1) some actions have effects much better than all others, (2) the personal cost of performing these actions is comparatively small, and (3) these actions do not violate any serious moral constraints, we should perform one of these actions.
Greaves and MacAskill argue that each of (1)-(3) is true in the most important decision situations facing us today. Actions like donating to prevent pandemics and guide ASI development meet all three conditions: their effects are much better than all others, their personal costs are small, and they violate no serious moral constraints. Therefore, we should perform these actions. Since axiological strong longtermism is true, it is the far-future effects of these actions that make their overall effects best, and deontic strong longtermism follows.
The paper concludes with a summary of the argument and its practical implications. Humanity’s future could be vast, and we can influence its course. That suggests the truth of strong longtermism: impact on the far future is the most important feature of our actions today.
References
Nicholas Beckstead and Teruji Thomas (2021). A paradox for tiny probabilities and enormous values. GPI Working Paper No. 7-2021.
GiveWell (2020). GiveWell’s Cost-Effectiveness Analyses. Accessed 26 January 2021.
Hilary Greaves (2017). Population axiology. Philosophy Compass.
Piers Millett and Andrew Snyder-Beattie (2017). Existential Risk and Cost-Effective Biosecurity. Health Security 15(4):373–383.
Toby Newberry (2021). How cost-effective are efforts to detect near-Earth-objects? Global Priorities Institute Technical Report T1-2021.
Hayden Wilkinson (2022). In defense of fanaticism. Ethics 132(2):445–477.
I'm super excited for you to continue making these research summaries! I have previously written about how I want to see more accessible ways to understand important foundational research - you've definitely got a reader in me.
I also enjoy the video summaries. It would be great if GPI video and written summaries were made as standard. I appreciate it's a time commitment, but in theory there's quite a wide pool of people who could do the written summaries and I'm sure you could get funding to pay people to do them.
As a non-academic I don't think I can assist with writing any summaries but if a bottleneck is administrative resource let me know and I may be happy to volunteer some time to help with this.
I haven't read the paper, but if we accept fanaticism shouldn't we be chasing the highest probability of infinite utility? That seems pretty inconsistent with how longtermists seem to reason (though it probably still leads to similar actions like reducing x-risk, since we probably have to be around in order to affect the world and increase the probability of infinite utility).
Can you give some examples of infinite utility?
None of these seem particularly likely, but I'm not literally certain that they can't happen / that I can't affect their probability, and if you accept fanaticism then you should be striving to increase the probability of making something like this happen. (Which, to be clear, could be the right thing to do! But it's not how longtermists tend to reason in practice.)
As you said in your previous comment we essentially are increasing the probability of these things happening by reducing x-risk. I'm not convinced we don't tend to reason fanatically in practice - after all Bostrom's astronomical waste argument motivates reducing x-risk by raising the possibility of achieving incredibly high levels of utility (in a footnote he says he is setting aside the possibility of infinitely many people). So reducing x-risk and trying to achieve existential security seems to me to be consistent with fanatical reasoning.
It's interesting to consider what we would do if we actually achieved existential security and entered the long reflection. If we take fanaticism seriously at that point (and I think we will) we may well go for infinite value. It's worth noting though that certain approaches to going for infinite value will probably dominate other approaches by having a higher probability of success. So we'd probably decide on the most promising possibility and run with that. If I had to guess I'd say we'd look into creating infinitely many digital people with extremely high levels of utility.
I'm not sure whether you are disagreeing with me or not. My claims are (a) accepting fanaticism implies choosing actions that most increase probability of infinite utility, (b) we are not currently choosing actions based on how much they increase probability of infinite utility, (c) therefore we do not currently accept fanaticism (though we might in the future), (d) given we don't accept fanaticism we should not use "fanaticism is fine" as an argument to persuade people of longtermism.
Is there a specific claim there you disagree with? Or were you riffing off what I said to make other points?
Yes I disagree with b) although it's a nuanced disagreement.
I think the EA longtermist movement is currently choosing the actions that most increase probability of infinite utility, by reducing existential risk.
What I'm less sure of is that achieving infinite utility is the motivation for reducing existential risk. It might just be that achieving "incredibly high utility" is the motivation for reducing existential risk. I'm not too sure on this.
My point about the long reflection was that when we reach this period it will be easier to see the fanatics from the non-fanatics.
This is not in conflict with my claim (b). My claim (b) is about the motivation or reasoning by which actions are chosen. That's all I rely on for the inferences in claims (c) and (d).
I think we're mostly in agreement here, except that perhaps I'm more confident that most longtermists are not (currently) motivated by "highest probability of infinite utility".
Yeah that's fair. As I said I'm not entirely sure on the motivation point.
I think in practice EAs are quite fanatical, but only to a certain point. So they probably wouldn't give in to a Pascal's mugging, but many of them are willing to give to a long-term future fund over GiveWell charities - which is quite a bit of fanaticism! So justifying fanaticism still seems useful to me, even if EAs put their fingers in their ears with regards to the most extreme conclusion...
It really doesn't seem fanatical to me to try to reduce the chance of everyone dying, when you have a specific mechanism by which everyone might die that doesn't seem all that unlikely! That's the right action according to all sorts of belief systems, not just longtermism! (See also these posts.)
Hmm I do think it's fairly fanatical. To quote this summary: "it might seem fanatical to spend $1 billion on ASI-alignment for the sake of a 1-in-100,000 chance of preventing a catastrophe". The probability that any one longtermist's actions will actually prevent a catastrophe is very small. So I do think longtermist EAs are acting fairly fanatically.
Another way of thinking about it is that, whilst the probability of x-risk may be fairly high, the x-risk probability decrease any one person can achieve is very small. I raised this point on Neel's post.
By this logic it seems like all sorts of ordinary things are fanatical:
Generally I think it's a bad move to take a collection of very similar actions and require that each individual action within the collection be reasonably likely to have an impact.
I don't know of anyone who (a) is actively working on reducing the probability of catastrophe and (b) thinks we only reduce the probability of catastrophe by 1-in-100,000 if we spend $1 billion on it. Maybe Eliezer Yudkowsky and Nate Soares, but probably not even them. The summary is speaking theoretically; I'm talking about what happens in practice.
Probabilities are on a continuum. It’s subjective at what point fanaticism starts. You can call those examples fanatical if you want to, but the probabilities of success in those examples are probably considerably higher than in the case of averting an existential catastrophe.
I think the probability that my personal actions avert an existential catastrophe is higher than the probability that my personal vote in the next US presidential election would change its outcome.
I think I'd plausibly say the same thing for my other examples; I'd have to think a bit more about the actual probabilities involved.
That's fair enough, although when it comes to voting I mainly do it for personal pleasure / so that I don't have to lie to people about having voted!
When it comes to something like donating to GiveWell charities on a regular basis / going vegan for life I think one can probably have greater than 50% belief they will genuinely save lives / avert suffering. Any single donation or choice to avoid meat will have far lower probability, but it seems fair to consider doing these things over a longer period of time as that is typically what people do (and what someone who chooses a longtermist career essentially does).
Why consider only a single longtermist career in isolation, but consider multiple donations in aggregate?
Given that you seem to agree voting is fanatical, I'm guessing you want to consider the probability that an individual's actions are impactful, but why should the locus of agency be the individual? Seems pretty arbitrary.
If you agree that voting is fanatical, do you also agree that activism is fanatical? The addition of a single activist is very unlikely to change the end result of the activism.
A longtermist career spans decades, as would going vegan for life or donating regularly for decades. So it was mostly a temporal thing, trying to somewhat equalise the commitment associated with different altruistic choices.
Hmm well aren't we all individuals making individual choices? So ultimately what is relevant to me is if my actions are fanatical?
Pretty much yes. To clarify - I have never said I'm against acting fanatically. I think the arguments for acting fanatically, particularly the one in this paper, are very strong. That said, something like a Pascal's mugging does seem a bit ridiculous to me (but I'm open to the possibility I should hand over the money!).
We're all particular brain cognitions that only exist for ephemeral moments before our brains change and become a new cognition that is similar but not the same. (See also "What counts as death?".) I coordinate both with the temporally-distant (i.e. future) brain cognitions that we typically call "me in the past/future" and with the spatially-distant brain cognitions that we typically call "other people". The temporally-distant cognitions are more similar to current-brain-cognition than the spatially-distant cognitions but it's fundamentally a quantitative difference, not a qualitative one.
By "fanatical" I want to talk about the thing that seems weird about Pascal's mugging and the thing that seems weird about spending your career searching for ways to create infinitely large baby universes, on the principle that it slightly increases the chance of infinite utility.
If you agree there's something weird there and that longtermists don't generally reason using that weird thing and typically do some other thing instead, that's sufficient for my claim (b).
Certainly agree there is something weird there!
Anyway I don't really think there was too much disagreement between us, but it was an interesting exchange nonetheless!
I appreciated this summary
The 10^24 population expectation seems like the key assumption here. It's easy to get that wrong by several orders of magnitude, and once you grant it, the other assumptions hardly matter.
Perhaps we could work with probability distributions instead of point estimates.
What does 'longtermism' add beyond the standard EA framework of maximizing cost-effectiveness? It seems like a regular EA would support allocating funding to the intervention that saves more lives per dollar.
Valuing "saving" lives that already exist (or are likely to exist) versus creating lives (or making it possible for others to create more lives)?
Perhaps that’s the main distinction in the deep assumptions/values.
Although, they argue that longtermism goes through even if you accept person-affecting views:
How much funding would it take to fully fund all extinction risk projects?