
Cross-posted to my website.

Summary: We should reduce existential risk in the long term, not merely over the next century. We might best do this by developing longtermist institutions[1] that will operate to keep existential risk persistently low.

Confidence: Unlikely

This essay was inspired by a blog post and paper[2] by Tom Sittler on long-term existential risk.

Civilization could continue to exist for billions (or even trillions) of years. To achieve our full potential, we must avoid existential catastrophe not just this century, but in all centuries to come. Most work on x-risk focuses on near-term risks, and might not do much to help over long time horizons. Longtermist institutional reform could ensure civilization continues to prioritize x-risk reduction well into the future.

This argument depends on three key assumptions, which I will justify in this essay:

  1. The long-term probability of existential catastrophe matters more than the short-term probability.
  2. Most efforts to reduce x-risk will probably only have an effect on the short term.
  3. Longtermist institutional reform has a better chance of permanently reducing x-risk.

For the sake of keeping this essay short, I will gloss over a lot of complexity and potential caveats. Suffice it to say that this essay's thesis depends on a lot of assumptions, and I'm not convinced that they're all true. This essay is intended more as a conversation-starter than a rigorous analysis.

Long-term x-risk matters more than short-term risk

People often argue that we urgently need to prioritize reducing existential risk because we live in an unusually dangerous time. If existential risk decreases over time, one might intuitively expect that efforts to reduce x-risk will matter less later on. But in fact, the lower the risk of existential catastrophe, the more valuable it is to further reduce that risk.

Think of it like this: if we face a 50% risk of extinction per century, we will last two centuries on average. If we reduce the risk to 25%, the expected length of the future doubles to four centuries. Halving risk again doubles the expected length to eight centuries. In general, halving x-risk becomes more valuable when x-risk is lower.
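
To make the arithmetic explicit (a minimal formalization of the toy model above, assuming a constant, independent extinction probability $p$ in each century): the number of centuries civilization lasts is geometrically distributed, so the expected length of the future is

$$\mathbb{E}[T] = \frac{1}{p}: \quad p = 0.5 \Rightarrow 2 \text{ centuries}, \qquad p = 0.25 \Rightarrow 4, \qquad p = 0.125 \Rightarrow 8.$$

Each halving of $p$ doubles $\mathbb{E}[T]$, so the absolute gain from a further halving grows as the risk falls.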

Perhaps we expect x-risk to substantially decline in future centuries. In that case, given the choice between reducing x-risk this century and reducing it in the future, we prefer to reduce it in the future.

This argument depends on certain specific claims about how x-risk reduction works. But the basic result (that we care more about x-risk in the long term than in the short term) holds up across a variety of assumptions. See Sittler (2018), section 3, for a more rigorous justification and an explanation of the precise conditions under which this result holds.[3]

Current x-risk reduction efforts might only work in the short term

If we look at current efforts to reduce the probability of existential catastrophe, it seems like most of them will only have relatively short-term effects. For example, nuclear disarmament treaties probably reduce x-risk. But treaties don't last forever. We should expect disarmament treaties to break down over time. Most efforts to reduce x-risk seem like this: they will reduce risk temporarily, but their effects will diminish over the next few decades. (AI safety might be an exception to this if a friendly AI can be expected to minimize all-cause existential risk.)

It seems likely to me that most x-risk reduction efforts will only work temporarily. That said, this belief is more intuitive than empirical, and I do not have a strong justification for it (and I'd only put maybe 60% confidence in this belief). Other people might reasonably disagree.

Longtermist institutional reform could permanently reduce x-risk

John & MacAskill's recent paper Longtermist Institutional Reform proposes developing institutions with incentives that will ensure the welfare of future generations. The notion of long-term vs. short-term existential risk appears to provide a compelling argument for prioritizing longtermist institutional reform over x-risk reduction.

The specific institutional changes proposed by John & MacAskill might not necessarily help reduce long-term x-risk. But the general strategy of longtermist institutional reform looks promising. If we can develop stable and rational longtermist institutions, those institutions will put effort into reducing existential risk, and will continue doing so into the long-term future. This seems like one of the most compelling ways for us to reduce long-term x-risk. And as discussed in the previous sections, this probably matters more than reducing x-risk in the short term.

Counter-arguments

I have argued that we might want to prioritize longtermist institutional reform over short-term existential risk reduction. This result might not hold up if:

  • In future centuries, civilization will reduce x-risk to such a low rate that it will become too difficult to reduce any further.
  • Short-term x-risk reduction efforts can permanently reduce risk, more so than longtermist institutional reform would.
  • Longtermist institutional reform is too intractable.

Maybe some other intervention would do a better job of mitigating long-term x-risk—for example, reducing risks from malevolent actors. Or we could work on improving decision-making in general. Or we might simply prefer to invest our money to be spent by future generations.

Subjective probability estimates

The EA community generally underrates the significance of long-term x-risk reduction: 3 in 4

Marginal work on (explicit) long-term x-risk reduction is more cost-effective than marginal work on short-term x-risk reduction: 1 in 3

Longtermist institutional reform is the best way to explicitly reduce long-term x-risk: 1 in 3

Acknowledgments

Thanks to Sofia Fogel, Ozzie Gooen, and Kieran Greig for providing feedback on this essay.


  1. John, T. & MacAskill, W. (2020, forthcoming). Longtermist Institutional Reform. In Natalie Cargill (ed.), The Long View. London, UK: FIRST. ↩︎

  2. Sittler, T. (2018). The expected value of the long-term future. ↩︎

  3. Using Sittler's model in section 3.1.1, "Diminishing returns on risk reduction", under the assumptions that (1) one can only reduce x-risk for the current century and (2) efforts can reduce x-risk by an amount proportional to the current risk, it follows that x-risk reduction efforts are equally valuable regardless of the level of x-risk. Therefore, it's better to reduce x-risk in future centuries than now, because you can invest your money at a positive rate of return and thus spend more on x-risk reduction in the future than you can now. ↩︎

Comments

> People often argue that we urgently need to prioritize reducing existential risk because we live in an unusually dangerous time. If existential risk decreases over time, one might intuitively expect that efforts to reduce x-risk will matter less later on. But in fact, the lower the risk of existential catastrophe, the more valuable it is to further reduce that risk.
>
> Think of it like this: if we face a 50% risk of extinction per century, we will last two centuries on average. If we reduce the risk to 25%, the expected length of the future doubles to four centuries. Halving risk again doubles the expected length to eight centuries. In general, halving x-risk becomes more valuable when x-risk is lower.

This argument starts with assumptions implying that civilization has on the order of a 10^-3000 chance of surviving a million years, a duration typical of mammalian species. In the second case it's 10^-1250. That's a completely absurd claim, a result of modeling as though you have infinite certainty in a constant hazard rate.
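
(For reference, the arithmetic behind those figures: holding the hazard rate fixed for a million years, i.e. 10,000 centuries, the survival probability is $0.5^{10{,}000} \approx 10^{-3010}$ at 50% risk per century, and $0.75^{10{,}000} \approx 10^{-1249}$ at 25%.)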

If you start with some reasonable credence that we're not doomed and can enter a stable state of low risk, this effect becomes second-order or negligible. E.g., leaping off from The Precipice estimates, say there's an expected 1/6 extinction risk this century and 1/6 for the rest of history, i.e. probably we stabilize enough for civilization to survive as long as feasible. If the two periods were uncorrelated, then this reduces the value of preventing an existential catastrophe this century by between 1/6 and 1/3 compared to preventing one after this century's risk has passed. That's not negligible, but also not first-order, and the risk of catastrophe would also cut the returns of saving for the future (your investments and institution/movement-building aimed at later risks are destroyed if this century's risk wipes out humanity).

[For The Precipice estimates, it's also worth noting that part of the reason some risk falls after this century is credence that critical tech developments like AGI will happen after this century; so if we make it through that transition this century, then risk in later periods is lower, since we've already passed through the dangerous transition and likely developed the means for stabilization at minimal risk.]

Scenarios where we are 99%+ likely to go prematurely extinct, from a sequence of separate risks that would each drive the probability of survival low, will have very low NPV of the future population. But we should not be near-certain that we are in such a scenario. With uncertainty over reasonable parameter values, the dominant cases wind up being those with substantial risk followed by a substantial likelihood of safe stabilization, and late x-risk reduction work is not favored over reduction soon.

The problem with this is similar to the problem, discussed by Weitzman, of not modelling uncertainty about discount rates. If you project forward 100 years, scenarios with high discount rates drop out of your calculation, while the low-discount-rate scenarios dominate at that point. Likewise, the longtermist value of the long-term future is all about the plausible scenarios where hazard rates give a limited cumulative x-risk probability over future history.


> This result might not hold up if:
> In future centuries, civilization will reduce x-risk to such a low rate that it will become too difficult to reduce any further.

It's not required that it *will* do so, merely that it may plausibly go low enough that the total fraction of the future lost to such hazard rates doesn't become overwhelmingly high.

The passage you quoted was just an example; I don't actually think we should use exponential discounting. The thesis of the essay can still be true when using a declining hazard rate.

If you accept Toby Ord's numbers of a 1/6 x-risk this century and a 1/6 x-risk in all future centuries, then it's almost certainly more cost-effective to reduce x-risk this century. But suppose we use different numbers. For example, say 10% chance this century and 90% chance in all future centuries. Also suppose short-term x-risk reduction efforts only help this century, while longtermist institutional reform helps in all future centuries. Under these conditions, it seems likely that marginal work on longtermist institutional reform is more cost-effective. (I don't actually think these conditions are very likely to be true.)
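
A minimal sketch of the expected-value side of that comparison, under illustrative assumptions of my own (value is realized only if civilization survives both this century and all later centuries, the two risks are independent, and an intervention cuts one of them by the same fixed proportion; costs and tractability are ignored):

```python
# Toy expected-value model for the scenarios discussed above.
# Assumptions (illustrative, not from the essay or this thread): value is
# realized only if civilization survives this century AND all later centuries,
# the two risks are independent, and an intervention cuts one risk by the same
# fixed proportion. Cost and tractability differences are ignored.

def expected_value(risk_now: float, risk_later: float, value: float = 1.0) -> float:
    """EV of the future given this-century risk and all-later-centuries risk."""
    return (1 - risk_now) * (1 - risk_later) * value

def gain_from_reduction(risk_now: float, risk_later: float,
                        reduce_later: bool, proportion: float = 0.1) -> float:
    """EV gained by cutting either the near-term or the long-term risk by `proportion`."""
    baseline = expected_value(risk_now, risk_later)
    if reduce_later:
        return expected_value(risk_now, risk_later * (1 - proportion)) - baseline
    return expected_value(risk_now * (1 - proportion), risk_later) - baseline

# Symmetric 1/6 and 1/6 numbers: both reductions gain the same EV in this model,
# so any case for prioritizing this century there rests on cost and tractability.
print(gain_from_reduction(1/6, 1/6, reduce_later=False))    # ~0.0139
print(gain_from_reduction(1/6, 1/6, reduce_later=True))     # ~0.0139

# The 10% / 90% split above: cutting the long-term risk gains ~81x more EV.
print(gain_from_reduction(0.10, 0.90, reduce_later=False))  # ~0.001
print(gain_from_reduction(0.10, 0.90, reduce_later=True))   # ~0.081
```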

(Aside: Any assumption of fixed <100% chance of existential catastrophe runs into the problem that now the EV of the future is infinite. As far as I know, we haven't figured out any good way to compare infinite futures. So even though it's intuitively plausible, we don't know if we can actually say that an 89% chance of extinction is preferable to a 90% chance (maybe limit-discounted utilitarianism can say so). This is not to say we shouldn't assume a <100% chance, just that if we do so, we run into some serious unsolved problems.)

Thanks for writing this, I like that it's short and has a section on subjective probability estimates. 

  1. What would you class as long-term x-risk (reduction) vs. near-term? Is it entirely about the timescale rather than the approach? E.g., hypothetically, very fast institutional reform could be near-term, and AI safety field-building research in academia could be long-term if you thought it would pay off very late. Or do you think the long-term stuff necessarily has to be investment or institutional reform?
  2. Is the main crux for 'Long-term x-risk matters more than short-term risk' around how transformative the next two centuries will be? If we start becoming technologically mature, then x-risk might decrease significantly. Or do you think we might reach technological maturity, and x-risk will be low, but we should still work on reducing it?
  3. What do you think about the assumption that 'efforts can reduce x-risk by an amount proportional to the current risk'? That seems potentially appropriate for medium levels of risk (e.g. 1-10%), but if risk is small (like 0.01-1%), it might get very difficult to halve the risk.

Thanks for the questions!

  1. I don't have strong beliefs about what could reduce long-term x-risk. Longtermist institutional reform just seemed like the best idea I could think of.
  2. As I said in the essay, the lower the level of x-risk, the more valuable it is to reduce x-risk by a fixed proportion. The only way you can claim that reducing short-term x-risk matters more is by saying that it will become too intractable to reduce x-risk below a certain level, and that we will reach that level at some point in the future (if we survive long enough). I think this claim is plausible. But simply claiming that x-risk is currently high is not sufficient to prioritize reducing current x-risk over long-term x-risk, and in fact argues in the opposite direction.
  3. I mentioned this in my answer to #2: I think it's more likely that reducing x-risk by a fixed proportion becomes more difficult as x-risk gets lower. But others (e.g., Yew-Kwang Ng and Tom Sittler) have used the assumption that reducing x-risk by a fixed proportion has constant difficulty.

(Thanks for the post, I found it interesting.)

> the lower the level of x-risk, the more valuable it is to reduce x-risk by a fixed proportion

Do you mean "the lower the level of x-risk per century, the more valuable it is to reduce x-risk in a particular  century by a fixed proportion"? And this is in a model where the level of existential risk per century is the same across all centuries, right? Given that interpretation and that model, I see how your claim is true.

But the lower the total level of x-risk (across all time) is, the less valuable it is to reduce it by a fixed portion, I think. E.g., if the total risk is 10%, that probably reduces the expected value of the long-term future by something like 10%. (Though it also matters what portion of the possible good stuff might happen already before a catastrophe happens, and I haven't really thought about this carefully.) If we reduce the risk to 5%, we boost the EV of the long-term future by something like 5%. If the total risk had been 1%, and we reduced the risk to 0.5%, we'd have boosted the EV of the future by less. Would you agree with that?
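
Spelling that out (assuming the future is worth $V$ if no existential catastrophe ever occurs, and roughly nothing otherwise): $\mathrm{EV} = (1 - P_{\text{catastrophe}})\,V$, so cutting the total risk from 10% to 5% raises the EV from $0.90V$ to $0.95V$, a gain of $0.05V$, while cutting it from 1% to 0.5% raises it from $0.99V$ to $0.995V$, a gain of only $0.005V$.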

Also, one could contest the idea that we should assume the existential risk level per century "starts out" the same in each century (before we intervene). I think people like Ord typically believe that:

  1. existential risk is high over the next century/few centuries due to particular developments that may occur (e.g., transition to AGI)
  2. there's no particular reason to assume this risk level means there'll be a similar risk level in later centuries
  3. at some point, we'll likely reach technological maturity
  4. if we've gotten to that point without a catastrophe, existential risk from then on is probably very low, and very hard to reduce

Given beliefs 1 and 2, if we learn the next few centuries are less risky than we thought, that doesn't necessarily affect our beliefs about how risky later centuries will be. Thus, it doesn't necessarily increase how long we expect civilisation to last (without catastrophe) conditional on surviving these centuries, or how valuable reducing the x-risk over these next few centuries is. Right?

And given beliefs 3 and 4, we have the idea that reducing existential risk is much more tractable now than it will be in the far future. 

A plausible longtermist argument for prioritizing short-term risk is the "punting to the future" approach to dealing with radical cluelessness. On this approach, we should try to reduce only those risks which only the present generation can influence, and let future generations take care of the remaining risks. (In fact, the optimal strategy is probably one where each generation pursues this approach, at least for sufficiently many generations.)

It seems to me that this line of reasoning more favors investing rather than trying to reduce short-term x-risk. If we expect long-term x-risk reduction is more cost-effective but we don't know how to do it, then the best thing to do is to invest so that future generations can use our resources to reduce long-term x-risk once they figure it out.

I do agree that investing is a promising way to punt to the future. I don't have strong views on whether at the current margin one empowers future generations more by trying to reduce risks that threaten their existence or their ability to reduce x-risk, or by accumulating financial resources and other types of capacity that they can either deploy to reduce x-risk or continue to accumulate. What makes you favor the capacity-building approach over the short-term x-risk reduction approach?

Suppose you think efforts to reduce long-term risk are more effective than reducing short-term risk, but you don't know what to do. Then it makes more sense to invest rather than spending your money on the less effective cause, because future people will probably figure out what to do, and then they can spend your investment on the more effective cause.

If insufficient efforts are made to reduce short-term x-risk, there may not be future generations to spend your investment.

> The notion of long-term vs. short-term existential risk appears to provide a compelling argument for prioritizing longtermist institutional reform over x-risk reduction.

I think a portfolio approach is helpful here. Obviously the overall EA portfolio is going to assign nonzero shares to both short-term and long-term risks (with shares determined by equalizing marginal utility per dollar across causes). This framing avoids fights over which cause is the "top priority".

> The EA community generally underrates the significance of long-term x-risk reduction: 3 in 4
> Marginal work on (explicit) long-term x-risk reduction is more cost-effective than marginal work on short-term x-risk reduction: 1 in 3

I'm probably being dumb here, but I don't understand this conjunction of probabilities. What does it mean for EAs to underrate something that's not worth doing on the margin? Is the idea that there are increasing returns to scale, such that even though marginal work on explicit long-term x-risk reduction is not worthwhile, if EAs correctly rated long-term x-risk reduction, this would no longer be true? Or is this a complicated "probability over probabilities" question about value of information?

I think your comment highlights interesting questions about precisely what the first statement means, and whether or not any of these estimates are conditioning on one of the other statements being true. 

But I think one can underrate the significance of something even if that thing is less cost-effective, on the margin, than something else. Toy example: 

  • Marginal short-term x-risk reduction work turns out to score 100 for cost-effectiveness, while marginal (explicit) long-term x-risk reduction work turns out to score 80. (So Michael's statement 2 is false.)
  • But EAs in general explicitly believe the latter scores 50, or seem to act/talk/think as though they do. (So Michael's statement 1 is true.)
  • This matters because:
    • This means the cost-effectiveness ordering might later reverse without EAs realising this, because they're updating from the wrong point or just not paying attention.
    • Long-term x-risk work might be better on the margin for some people, due to personal fit, even if it’s less good on average. If we have the wrong belief about its marginal cost-effectiveness on average, then we might not notice when this is the case.
    • Having more accurate beliefs about this may help us have more accurate beliefs about other things.

(It's also possible that Michael had in mind a distinction between the value of long-term x-risk reduction and the value of explicit long-term x-risk reduction. But I'm not sure precisely what that distinction would be.)

This pretty much captures what I was thinking.

In addition to what Michael A. said, a 1 in 3 chance that cause A is more effective than cause B means even though we should generally prefer cause B, there could be high value to doing more prioritization research on A vs. B, because it's not too unlikely that we decide A > B. So "The EA community generally underrates the significance of long-term x-risk reduction" could mean there's not enough work on considering the expected value of long-term x-risk reduction.

Got it, thanks! Yeah this is what I meant by "probabilities over probabilities."

> if we face a 50% risk of extinction per century, we will last two centuries on average. If we reduce the risk to 25%, the expected length of the future doubles to four centuries. Halving risk again doubles the expected length to eight centuries. In general, halving x-risk becomes more valuable when x-risk is lower.

Presumably the marginal cost of risk reduction increases as the level of risk falls, so I don't think this is true in general.
