Note: The Global Priorities Institute (GPI) has started to create summaries of some working papers by GPI researchers with the aim of making our research more accessible to people outside of academic philosophy (e.g. interested people in the effective altruism community). We welcome any feedback on the usefulness of these summaries.

**Summary: The Epistemic Challenge to Longtermism**

*This is a summary of the GPI Working Paper **"The epistemic challenge to longtermism" by Christian Tarsney**. The summary was written by Elliott Thornley.*

According to *longtermism*, what we should do mainly depends on how our actions might affect the long-term future. This claim faces a challenge: the course of the long-term future is difficult to predict, and the effects of our actions on the long-term future might be so unpredictable as to make longtermism false. In “The epistemic challenge to longtermism”, Christian Tarsney evaluates one version of this epistemic challenge and comes to a mixed conclusion. On some plausible worldviews, longtermism stands up to the epistemic challenge. On others, longtermism’s status depends on whether we should take certain high-stakes, long-shot gambles.

Tarsney begins by assuming *expectational utilitarianism*: roughly, the view that we should assign precise probabilities to all decision-relevant possibilities, value possible futures in line with their total welfare, and maximise expected value. This assumption sets aside ethical challenges to longtermism and focuses the discussion on the epistemic challenge.

**Persistent-difference strategies**

Tarsney outlines one broad class of strategies for improving the long-term future: *persistent-difference strategies*. These strategies aim to put the world into some valuable state S when it would otherwise have been in some less valuable state ¬S, in the hope that this difference will persist for a long time. *Epistemic persistence skepticism* is the view that identifying interventions likely to make a persistent difference is prohibitively difficult — so difficult that the actions with the greatest expected value do most of their good in the near term. It is this version of the epistemic challenge that Tarsney focuses on in this paper.

To assess the truth of epistemic persistence skepticism, Tarsney compares the expected value of a neartermist benchmark intervention *N* to the expected value of a longtermist intervention *L*. In his example, *N* is spending $1 million on public health programmes in the developing world, leading to 10,000 extra quality-adjusted life years in expectation. *L* is spending $1 million on pandemic-prevention research, with the aim of preventing an existential catastrophe and thereby making a persistent difference.

**Exogenous nullifying events**

Persistent-difference strategies are threatened by what Tarsney calls *exogenous nullifying events* (ENEs), which come in two types. Negative ENEs are far-future events that put the world into the less valuable state ¬S. In the context of the longtermist intervention *L*, in which the valuable target state S is the existence of an intelligent civilization in the accessible universe, negative ENEs are existential catastrophes that might befall such a civilization. Examples include self-destructive wars, lethal pathogens, and vacuum decay. Positive ENEs, on the other hand, are far-future events that put the world into the more valuable state S. In the context of *L*, these are events that give rise to an intelligent civilization in the accessible universe where none existed previously. This might happen via evolution, or via the arrival of a civilization from outside the accessible universe. What unites negative and positive ENEs is that they both nullify the effects of interventions intended to make a persistent difference. Once the first ENE has occurred, the state of the world no longer depends on the state that our intervention put it in. Therefore, our intervention stops accruing value at that point.
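Under a constant annual ENE probability *r*, the number of years an intervention's effect persists follows a geometric pattern, so its expected duration is roughly 1/*r* years. A minimal sketch of this arithmetic (the value of *r* below is purely illustrative, not a figure from the paper):

```python
def expected_persistence_years(r: float) -> float:
    """Expected number of years an intervention's effect persists,
    given an independent annual ENE probability r.

    The effect survives t years with probability (1 - r)**t, so the
    expected duration is the sum of these survival probabilities,
    which converges to roughly 1/r.
    """
    total, survival = 0.0, 1.0
    while survival > 1e-12:   # truncate the series once terms are negligible
        survival *= 1.0 - r
        total += survival
    return total

# Illustrative only: with r of one-in-ten-thousand per year, the difference
# persists for roughly 10,000 years in expectation.
print(round(expected_persistence_years(0.0001)))  # 9999, i.e. about 1/r years
```

This is why the plausible range of *r* matters so much in what follows: halving *r* doubles the expected window over which a persistent-difference intervention accrues value.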

Tarsney assumes that the annual probability *r* of ENEs is constant in the *far future*, defined as more than a thousand years from now. The assumption is thus compatible with the *time of perils* hypothesis, according to which the risk of existential catastrophe is likely to decline in the near future. Tarsney makes the assumption of constant *r* partly for simplicity, but it is also in line with his policy of making empirical assumptions that err towards being unfavourable to longtermism. Other such assumptions concern the tractability of reducing existential risk, the speed of interstellar travel, and the potential number and quality of future lives. Making these conservative assumptions lets us see how longtermism fares against the strongest available version of the epistemic challenge.

**Models to assess epistemic persistence skepticism**

To compare the longtermist intervention *L* to the neartermist benchmark intervention *N*, Tarsney constructs two models: the cubic growth model and the steady-state model. The characteristic feature of the cubic growth model is its assumption that humanity will eventually begin to settle other star systems, so that the potential value of human-originating civilization grows as a cubic function of time. The steady-state model, by contrast, assumes that humanity will remain Earth-bound and eventually reach a state of zero growth.

The headline result of the cubic growth model is that the longtermist intervention *L* has greater expected value than the neartermist benchmark intervention *N* just so long as *r* is less than approximately 0.000135 (a little over one-in-ten-thousand) per year. Since, in Tarsney’s estimation, this probability is towards the higher end of plausible values of *r*, the cubic growth model suggests (but does not conclusively establish) that longtermism stands up to the epistemic challenge. If we make our assumptions about tractability and the potential size of the future population a little less conservative, the case for choosing *L* over *N* becomes much more robust.

The headline result of the steady-state model is less favourable to longtermism. The expected value of *L* exceeds the expected value of *N* only when *r* is less than approximately 0.000000012 (a little over one-in-a-hundred-million) per year, and it seems likely that an Earth-bound civilization would face risks of negative ENEs that push *r* over this threshold. Relaxing the model’s conservative assumptions, however, makes longtermism more plausible. If *L* would reduce near-term existential risk by at least one-in-ten-billion and any far-future steady-state civilization would support at least a hundred times as much value as Earth does today, then *r* need only fall below about 0.006 (six-in-one-thousand) to push the expected value of *L* above *N*.

The case for longtermism is also strengthened once we account for uncertainty, both about the values of various parameters and about which model to adopt. Consider an example. Suppose that we assign a probability of at least one-in-a-thousand to the cubic growth model. Suppose also that we assign probabilities – conditional on the cubic growth model – of at least one-in-a-thousand to values of *r* no higher than 0.000001 per year, and at least one-in-a-million to a ‘Dyson spheres’ scenario in which the average star supports at least 10^{25} lives at a time. In that case, the expected value of the longtermist intervention *L* is over a hundred billion times the expected value of the neartermist benchmark intervention *N*. It is worth noting, however, that in this case *L*’s greater expected value is driven by possibly minuscule probabilities of astronomical payoffs. Many people suspect that expected value theory goes wrong when its verdicts hinge on these so-called *Pascalian probabilities* (Bostrom 2009, Monton 2019, Russell 2021), so perhaps we should be wary of taking the above calculation as a vindication of longtermism.
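The arithmetic behind this Pascalian worry can be made explicit. A minimal sketch, using the probabilities from the example above and treating them as independent; the required conditional payoff is derived from these numbers for illustration, not a figure stated in the paper:

```python
# Joint probability of the favourable scenario: cubic growth model holds,
# r is low, and the 'Dyson spheres' payoff obtains (treated as independent).
p_cubic = 1e-3    # credence in the cubic growth model
p_low_r = 1e-3    # credence, given cubic growth, that r <= 0.000001 per year
p_dyson = 1e-6    # credence in the Dyson-spheres scenario
p_joint = p_cubic * p_low_r * p_dyson           # = 1e-12

ev_benchmark = 10_000    # QALYs from the neartermist intervention N
target_ratio = 1e11      # 'over a hundred billion times' the value of N

# Conditional payoff that L needs on this scenario alone for its expected
# value to exceed target_ratio * ev_benchmark:
required_payoff = target_ratio * ev_benchmark / p_joint
print(f"{required_payoff:.0e} QALYs")
```

A scenario with probability one-in-a-trillion thus only needs a conditional payoff of around 10^{27} QALYs to deliver the headline result — which a civilization of Dyson spheres supporting 10^{25} lives per star would plausibly clear. This is exactly the sense in which minuscule probabilities of astronomical payoffs are doing the work.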

Tarsney concludes that the epistemic challenge to longtermism is serious but not fatal. If we are steadfast in our commitment to expected value theory, longtermism overcomes the epistemic challenge. If we are wary of relying on Pascalian probabilities, the result is less clear.

**References**

Bostrom, N. (2009). Pascal’s mugging. *Analysis* 69 (3), 443–445.

Monton, B. (2019). How to avoid maximizing expected utility. *Philosophers’ Imprint* 19 (18), 1–25.

Russell, J. S. (2021). On two arguments for fanaticism. *Global Priorities Institute Working Paper Series*. GPI Working Paper No. 17-2021.

See also Michael Aird's comments on this Tarsney (2020) paper. His main points are:

‘…efficiently resources are used and in what extremes of experiences can be reached.’ (link)

On the summary: I'd have found this summary more useful if it had made the ideas in the paper simpler, so it was easier to get an intuitive grasp on what was going on. This summary has made the paper shorter, but (as far as I can recall) mostly by compressing the complexity, rather than lessening it!

On the paper itself: I still find Tarsney's argument hard to make sense of (in addition to the above, I've read the full paper itself a couple of times).

AFAICT, the setup is that the longtermist wants to show that there are things we can do now that will continually make the future better than it would have been ('persistent-difference strategies'). However, Tarsney takes the challenge to be that there are things that might happen that would stop these positive states happening ('exogenous nullifying events'). And what does all the work is the claim that if the human population expands really fast because it has fled to the stars (the 'cubic growth model'), while the negative events happen only at a constant rate, then longtermism looks good.

I think what bothers me about the above is this: why think that we could ever identify and do something that would, in expectation, make a persistent positive difference, i.e. a difference for ever and ever and ever? Isn't Tarsney assuming the existence of the thing he seeks to prove, i.e. 'begging the question'? I think the sceptic is entitled to respond with a puzzled frown - or an incredulous stare - about whether we can really expect to knowingly change the whole trajectory of the future - that, after all, presumably is the epistemic challenge. That challenge seems unmet.

I've perhaps misunderstood something. Happy to be corrected!

Thanks! This is valuable feedback.

By 'persistent difference', Tarsney doesn't mean a difference that persists forever. He just means a difference that persists for a long time in expectation: long enough to make the expected value of the longtermist intervention greater than the expected value of the neartermist benchmark intervention.

Perhaps you want to know why we should think that we can make this kind of persistent difference. I can talk a little about that in another comment if so.

If I understand the proposed model correctly (I haven't read thoroughly, so apologies if not):

The model basically assumes that "longtermist interventions" cannot cause accidental harm. That is, it assumes that if a "longtermist intervention" is carried out, the worst-case scenario is that the intervention will end up being neutral (e.g. due to an "exogenous nullifying event") and thus resources were wasted. But this means assuming away the following major part of complex cluelessness: due to an abundance of crucial considerations, it is usually extremely hard to judge whether an intervention that is related to anthropogenic x-risks or meta-EA is net-positive or net-negative. For example, such an intervention may cause accidental harm due to:

All good points, but Tarsney's argument doesn't depend on the assumption that longtermist interventions cannot accidentally increase x-risk. It just depends on the assumption that there's some way that we could spend $1 million that would increase the epistemic probability that humanity survives the next thousand years by at least 2×10^{-14}.
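One way to unpack that figure is as a break-even calculation against the 10,000-QALY benchmark from the summary. A sketch (the break-even future value below is derived from those two numbers; it is not a figure Tarsney states):

```python
ev_benchmark = 10_000    # QALYs in expectation from spending $1M on N
delta_p = 2e-14          # assumed minimum increase in P(humanity survives
                         # the next thousand years) from spending $1M on L

# For L to match N in expected value, the expected value of the surviving
# future must satisfy: delta_p * future_value >= ev_benchmark.
break_even_future_value = ev_benchmark / delta_p
print(f"{break_even_future_value:.0e} QALYs")
```

So the assumption is modest on the probability side precisely because the value side is so large: a 2×10^{-14} shift suffices whenever the expected value of the surviving future is at least on the order of 5×10^{17} QALYs.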