
Note: The Global Priorities Institute (GPI) has started to create summaries of some working papers by GPI researchers with the aim of making our research more accessible to people outside academic philosophy (e.g. interested people in the effective altruism community). We welcome any feedback on the usefulness of these summaries.

Summary: The Epistemic Challenge to Longtermism

This is a summary of the GPI Working Paper "The epistemic challenge to longtermism" by Christian Tarsney. The summary was written by Elliott Thornley.

According to longtermism, what we should do mainly depends on how our actions might affect the long-term future. This claim faces a challenge: the course of the long-term future is difficult to predict, and the effects of our actions on the long-term future might be so unpredictable as to make longtermism false.  In “The epistemic challenge to longtermism”, Christian Tarsney evaluates one version of this epistemic challenge and comes to a mixed conclusion. On some plausible worldviews, longtermism stands up to the epistemic challenge. On others, longtermism’s status depends on whether we should take certain high-stakes, long-shot gambles.

Tarsney begins by assuming expectational utilitarianism: roughly, the view that we should assign precise probabilities to all decision-relevant possibilities, value possible futures in line with their total welfare, and maximise expected value. This assumption sets aside ethical challenges to longtermism and focuses the discussion on the epistemic challenge.

Persistent-difference strategies

Tarsney outlines one broad class of strategies for improving the long-term future: persistent-difference strategies. These strategies aim to put the world into some valuable state S when it would otherwise have been in some less valuable state ¬S, in the hope that this difference will persist for a long time. Epistemic persistence skepticism is the view that identifying interventions likely to make a persistent difference is prohibitively difficult — so difficult that the actions with the greatest expected value do most of their good in the near term. It is this version of the epistemic challenge that Tarsney focuses on in this paper.

To assess the truth of epistemic persistence skepticism, Tarsney compares the expected value of a neartermist benchmark intervention N to the expected value of a longtermist intervention L. In his example, N is spending $1 million on public health programmes in the developing world, leading to 10,000 extra quality-adjusted life years in expectation. L is spending $1 million on pandemic-prevention research, with the aim of preventing an existential catastrophe and thereby making a persistent difference.

Exogenous nullifying events

Persistent-difference strategies are threatened by what Tarsney calls exogenous nullifying events (ENEs), which come in two types. Negative ENEs are far-future events that put the world into the less valuable state ¬S. In the context of the longtermist intervention L,  in which the valuable target state S is the existence of an intelligent civilization in the accessible universe, negative ENEs are existential catastrophes that might befall such a civilization. Examples include self-destructive wars, lethal pathogens, and vacuum decay. Positive ENEs, on the other hand, are far-future events that put the world into the more valuable state S. In the context of L, these are events that give rise to an intelligent civilization in the accessible universe where none existed previously. This might happen via evolution, or via the arrival of a civilization from outside the accessible universe. What unites negative and positive ENEs is that they both nullify the effects of interventions intended to make a persistent difference. Once the first ENE has occurred, the state of the world no longer depends on the state that our intervention put it in. Therefore, our intervention stops accruing value at that point.

Tarsney assumes that the annual probability r of ENEs is constant in the far future, defined as more than a thousand years from now. The assumption is thus compatible with the time of perils hypothesis, according to which the risk of existential catastrophe is likely to decline in the near future. Tarsney makes the assumption of constant r partly for simplicity, but it is also in line with his policy of making empirical assumptions that err towards being unfavourable to longtermism. Other such assumptions concern the tractability of reducing existential risk, the speed of interstellar travel, and the potential number and quality of future lives. Making these conservative assumptions lets us see how longtermism fares against the strongest available version of the epistemic challenge.
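
To see why the constant rate r matters so much, here is a minimal sketch (my own illustration, not from the paper): with a constant annual ENE probability r, the chance that a persistent difference is still in place after t years falls off geometrically, so its expected lifetime is roughly 1/r years.

```python
# Minimal sketch (illustrative only, not from the paper): a constant annual ENE
# probability r makes the survival of a persistent difference decay geometrically.
r = 0.0001  # placeholder annual ENE probability (one-in-ten-thousand)

expected_lifetime = 1 / r           # expected years until the first ENE
survival_10k = (1 - r) ** 10_000    # chance the difference survives 10,000 years

print(f"Expected persistence: about {expected_lifetime:,.0f} years")
print(f"Chance of lasting 10,000 years: {survival_10k:.1%}")
```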

Models to assess epistemic persistence skepticism

To compare the longtermist intervention L to the neartermist benchmark intervention N, Tarsney constructs two models: the cubic growth model and the steady-state model. The characteristic feature of the cubic growth model is its assumption that humanity will eventually begin to settle other star systems, so that the potential value of human-originating civilization grows as a cubic function of time. The steady-state model, by contrast, assumes that humanity will remain Earth-bound and eventually reach a state of zero growth.

The headline result of the cubic growth model is that the longtermist intervention L has greater expected value than the neartermist benchmark intervention N just so long as r is less than approximately 0.000135 (a little over one-in-ten-thousand) per year. Since, in Tarsney’s estimation, this probability is towards the higher end of plausible values of r, the cubic growth model suggests (but does not conclusively establish) that longtermism stands up to the epistemic challenge. If we make our assumptions about tractability and the potential size of the future population a little less conservative, the case for choosing L over N becomes much more robust.
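
To make the structure of this comparison concrete, here is a toy calculation in the spirit of the cubic growth model. All parameters are placeholders of my own: p echoes the 2×10^-14 figure discussed in the comments below, and v0 is picked so that the toy break-even lands near the paper’s reported threshold. It is a sketch of the comparison’s logic, not Tarsney’s actual model.

```python
# Toy sketch of the cubic growth comparison (placeholder parameters, not
# Tarsney's model). If the persistent difference is worth v0 * t**3 QALYs per
# year after t years (settlement expanding in three spatial dimensions) and
# survives to year t with probability exp(-r * t), then
#   EV(L) ~ p * integral of v0 * t**3 * exp(-r * t) dt  =  p * 6 * v0 / r**4.
EV_N = 10_000   # neartermist benchmark: 10,000 QALYs from $1M of public health spending
p = 2e-14       # assumed chance that L averts a near-term existential catastrophe (placeholder)
v0 = 30.0       # assumed value coefficient in QALYs per year^3 (placeholder)

def ev_cubic(r: float) -> float:
    """Expected value of L under the toy cubic growth model."""
    return p * 6 * v0 / r**4

for r in (1e-3, 1.35e-4, 1e-5):
    verdict = ">" if ev_cubic(r) > EV_N else "<"
    print(f"r = {r:.2e}: EV(L) ~ {ev_cubic(r):,.0f} QALYs {verdict} EV(N) = {EV_N:,} QALYs")
```

The point of the sketch is the sensitivity to r: because the payoff scales with 1/r^4, a modest change in the assumed ENE rate swings the comparison by many orders of magnitude.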

The headline result of the steady-state model is less favourable to longtermism. The expected value of L exceeds the expected value of N only when r is less than approximately 0.000000012 (a little over one-in-a-hundred-million) per year, and it seems likely that an Earth-bound civilization would face risks of negative ENEs that push r over this threshold. Relaxing the model’s conservative assumptions, however, makes longtermism more plausible. If L would reduce near-term existential risk by at least one-in-ten-billion and any far-future steady-state civilization would support at least a hundred times as much value as Earth does today, then r need only fall below about 0.006 (six-in-one-thousand) to push the expected value of L above that of N.
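
The same toy structure shows why the steady-state threshold is so much lower (again with placeholder numbers of my own, not the paper’s; v is chosen so the toy break-even falls near the reported threshold): a constant value stream does not compound, so the expected payoff scales as 1/r rather than 1/r^4.

```python
# Toy sketch of the steady-state comparison (placeholder parameters, not
# Tarsney's model). A persistent difference now adds a roughly constant v QALYs
# per year, so EV(L) ~ p * integral of v * exp(-r * t) dt = p * v / r, and r
# must be tiny for this to beat the 10,000-QALY benchmark.
EV_N = 10_000   # neartermist benchmark, as before
p = 2e-14       # placeholder chance that L averts a near-term existential catastrophe
v = 6e9         # placeholder: QALYs per year from an Earth-bound steady-state civilization

def ev_steady(r: float) -> float:
    """Expected value of L under the toy steady-state model."""
    return p * v / r

for r in (1e-4, 1e-6, 1e-8):
    verdict = ">" if ev_steady(r) > EV_N else "<"
    print(f"r = {r:.2e}: EV(L) ~ {ev_steady(r):,.1f} QALYs {verdict} EV(N) = {EV_N:,} QALYs")
```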

The case for longtermism is also strengthened once we account for uncertainty, both about the values of various parameters and about which model to adopt. Consider an example. Suppose that we assign a probability of at least one-in-a-thousand to the cubic growth model. Suppose also that we assign probabilities – conditional on the cubic growth model – of at least one-in-a-thousand to values of r no higher than 0.000001 per year, and at least one-in-a-million to a ‘Dyson spheres’ scenario in which the average star supports at least 10^25 lives at a time. In that case, the expected value of the longtermist intervention L is over a hundred billion times the expected value of the neartermist benchmark intervention N. It is worth noting, however, that in this case L’s greater expected value is driven by possibly minuscule probabilities of astronomical payoffs. Many people suspect that expected value theory goes wrong when its verdicts hinge on these so-called Pascalian probabilities (Bostrom 2009, Monton 2019, Russell 2021), so perhaps we should be wary of taking the above calculation as a vindication of longtermism.
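
A rough sketch of the arithmetic behind this example (the three credences come from the text above; the conditional payoff is a placeholder I chose so the resulting ratio matches the order of magnitude reported, not a figure from the paper): multiplying the small credences together still leaves an expected value for L that dwarfs the benchmark, because the payoff in that corner of probability space is so large.

```python
# Sketch of the uncertainty argument (credences from the text; the conditional
# payoff is an illustrative placeholder, not Tarsney's own figure).
EV_N = 10_000         # neartermist benchmark in QALYs
p_cubic = 1e-3        # credence in the cubic growth model
p_low_r = 1e-3        # credence, given cubic growth, that r <= 0.000001 per year
p_dyson = 1e-6        # credence, given the above, in the 'Dyson spheres' scenario
ev_given_all = 1e27   # placeholder: EV of L (QALYs) conditional on all three holding

ev_L_lower_bound = p_cubic * p_low_r * p_dyson * ev_given_all
print(f"Lower bound on EV(L): {ev_L_lower_bound:.1e} QALYs")
print(f"Ratio to EV(N): {ev_L_lower_bound / EV_N:.1e}")  # about 1e11, i.e. ~a hundred billion
```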

Tarsney concludes that the epistemic challenge to longtermism is serious but not fatal. If we are steadfast in our commitment to expected value theory, longtermism overcomes the epistemic challenge. If we are wary of relying on Pascalian probabilities, the result is less clear.

References

Bostrom, N. (2009). Pascal’s mugging. Analysis 69 (3), 443–445.

Monton, B. (2019). How to avoid maximizing expected utility. Philosophers’ Imprint 19 (18), 1–25. 

Russell, J. S. (2021). On two arguments for fanaticism. Global Priorities Institute Working Paper Series. GPI Working Paper No. 17-2021.

Comments

See also Michael Aird's comments on this Tarsney (2020) paper. His main points are:

  • 'Tarsney's model updates me towards thinking reducing non-extinction existential risks should be a little less  of a priority than I previously thought.' (link to full comment)
  • 'Tarsney seems to me to understate the likelihood that accounting for non-human animals would substantially affect the case for longtermism.' (link)
  • 'The paper ignores 2 factors that could strengthen the case for longtermism - namely, possible increases in how efficiently resources are used and in what extremes of experiences can be reached.' (link)
  • 'Tarsney writes "resources committed at earlier time should have greater impact, all else being equal". I think that this is misleading and an oversimplification. See Crucial questions about optimal timing of work and donations and other posts tagged Timing of Philanthropy.' (link)
  • 'I think it'd be interesting to run a sensitivity analysis on Tarsney's model(s), and to think about the value of information we'd get from further investigation of: 
    • how likely the future is to resemble Tarsney's cubic growth model vs his steady model
    • whether there are other models that are substantially likely, whether the model structures should be changed
    • what the most reasonable distribution for each parameter is.' (link)

On the summary: I'd have found this summary more useful if it had made the ideas in the paper simpler, so it was easier to get an intuitive grasp on what was going on. This summary has made the paper shorter, but (as far as I can recall) mostly by compressing the complexity, rather than lessening it!

On the paper itself: I still find Tarsney's argument hard to make sense of (in addition to the above, I've read the full paper itself a couple of times).
AFAICT, the setup is that the longtermist wants to show that there are things we can do now that will continually make the future better than it would have been ('persistent-difference strategies'). However, Tarsney takes the challenge to be that there are things that might happen that would stop these positive states happening ('exogenous nullifying events'). And what does all the work is the claim that if the human population expands really fast ('cubic growth model'), that is, because it's fled to the stars, while the negative events happen at only a constant rate, then longtermism looks good.

I think what bothers me about the above is this: why think that we could ever identify and do something that would, in expectation, make a persistent positive difference, i.e. a difference for ever and ever and ever? Isn't Tarsney assuming the existence of the thing he seeks to prove, i.e. 'begging the question'? I think the sceptic is entitled to respond with a puzzled frown - or an incredulous stare - about whether we can really expect to knowingly change the whole trajectory of the future - that, after all, presumably is the epistemic challenge. That challenge seems unmet.

I've perhaps misunderstood something. Happy to be corrected!

Thanks! This is valuable feedback.

By 'persistent difference', Tarsney doesn't mean a difference that persists forever. He just means a difference that persists for a long time in expectation: long enough to make the expected value of the longtermist intervention greater than the expected value of the neartermist benchmark intervention.

Perhaps you want to know why we should think that we can make this kind of persistent difference. I can talk a little about that in another comment if so.

If I understand the proposed model correctly (I haven't read thoroughly, so apologies if not): The model basically assumes that "longtermist interventions" cannot cause accidental harm. That is, it assumes that if a "longtermist intervention" is carried out, the worst-case scenario is that the intervention will end up being neutral (e.g. due to an "exogenous nullifying event") and thus resources were wasted.

But this means assuming away the following major part of complex cluelessness: due to an abundance of crucial considerations, it is usually extremely hard to judge whether an intervention that is related to anthropogenic x-risks or meta-EA is net-positive or net-negative. For example, such an intervention may cause accidental harm due to:

  1. Drawing attention to dangerous information (e.g. certain exciting approaches for AGI development / virology experimentation).
    • If a researcher believes they came up with an impressive insight, they will probably be biased towards publishing it, even if it may draw attention to potentially dangerous information. Their career capital, future compensation and status may be on the line.
  • Alexander Berger (co-CEO of OpenPhil) said in an interview:

      I think if you have the opposite perspective and think we live in a really vulnerable world — maybe an offense-biased world where it’s much easier to do great harm than to protect against it — I think that increasing attention to anthropogenic risks could be really dangerous in that world. Because I think not very many people, as we discussed, go around thinking about the vast future.

      If one in every 1,000 people who go around thinking about the vast future decide, “Wow, I would really hate for there to be a vast future; I would like to end it,” and if it’s just 1,000 times easier to end it than to stop it from being ended, that could be a really, really dangerous recipe where again, everybody’s well intentioned, we’re raising attention to these risks that we should reduce, but the increasing salience of it could have been net negative.

  2. "Patching" a problem and preventing a non-catastrophic, highly-visible outcome that would have caused an astronomically beneficial "immune response".
    • Nick Bostrom said in a talk ("lightly edited for readability"):

      Small and medium scale catastrophe prevention? Also looks good. So global catastrophic risks falling short of existential risk. Again, very difficult to know the sign of that. Here we are bracketing leverage at all, even just knowing whether we would want more or less, if we could get it for free, it’s non-obvious. On the one hand, small-scale catastrophes might create an immune response that makes us better, puts in place better safeguards, and stuff like that, that could protect us from the big stuff. If we’re thinking about medium-scale catastrophes that could cause civilizational collapse, large by ordinary standards but only medium-scale in comparison to existential catastrophes, which are large in this context, again, it is not totally obvious what the sign of that is: there’s a lot more work to be done to try to figure that out. If recovery looks very likely, you might then have guesses as to whether the recovered civilization would be more likely to avoid existential catastrophe having gone through this experience or not.

  3. Causing decision makers to have a false sense of security.
    • For example, perhaps it's not feasible to solve AI alignment in a competitive way without strong coordination, etcetera. But researchers are biased towards saying good things about their field, their colleagues and their (potential) employers.
  4. Causing progress in AI capabilities to accelerate in a certain way.
  5. Causing the competition dynamics among AI labs / states to intensify.
  6. Decreasing the EV of the EA community by exacerbating bad incentives and conflicts of interest, and by reducing coordination.
    • For example, by creating impact markets.
  7. Causing accidental harm via outreach campaigns or regulation advocacy (e.g. by causing people to get a bad first impression of something important).
  8. Causing a catastrophic leak from a virology lab, or an analogous catastrophe involving an AI lab.

All good points, but Tarsney's argument doesn't depend on the assumption that longtermist interventions cannot accidentally increase x-risk. It just depends on the assumption that there's some way that we could spend $1 million that would increase the epistemic probability that humanity survives the next thousand years by at least 2×10^-14.
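
A quick back-of-envelope on that figure (my own arithmetic, on the simplifying assumption that 2×10^-14 is roughly the break-even point against the 10,000-QALY benchmark, which may not be exactly how the paper derives it): the required probability increase is tiny precisely because the expected value of the surviving future is so large.

```python
# Back-of-envelope (my own, assuming 2e-14 is roughly the break-even probability
# increase against the 10,000-QALY neartermist benchmark).
EV_N = 10_000      # QALYs from the neartermist benchmark intervention
delta_p = 2e-14    # required increase in the probability of survival

implied_future_value = EV_N / delta_p
print(f"Implied expected value of the long-term future: {implied_future_value:.0e} QALYs")
# -> about 5e+17 QALYs; any probability increase above 2e-14 then makes EV(L) exceed EV(N).
```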
