Edit: As a commenter pointed out, I mean extinction risk rather than x-risk in this post. Double edit: I'm not even sure exactly what I meant, and I think the whole x-risk terminology needs to be cleaned up a lot.

There have been a string of recent posts about extinction risk reduction and longtermism: why they are basically the same, why they are different. I tried to write up a more formal outline that generalizes the problem (crossposted from a previous comment).

Confidence: Moderate. I can't identify specific parts where I could be wrong (though ironing out a definition of surviving would be good), but I also haven't talked to many people about this. 


  • EV[lightcone] is the current expected utility in our lightcone.
  • EV[survivecone] is the expected utility in our lightcone if we “survive”[1] as a society.
  • EV[deathcone] is the expected utility in our lightcone if we “die”.
  • P(survive) + P(die) = 1
  • Take extinction risk reduction to mean increasing P(survive).



  • EV[lightcone] = P(survive) × EV[survivecone] + P(die) × EV[deathcone]


  • EV[survivecone] = EV[lightcone | survive]
  • EV[deathcone] = EV[lightcone | death]

(thanks kasey)
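The decomposition above can be checked numerically. The probability and utility values below are arbitrary placeholders, not estimates of anything:

```python
# Toy check of EV[lightcone] = P(survive)*EV[survivecone] + P(die)*EV[deathcone].
# All numbers are made-up placeholders.

p_survive = 0.7
p_die = 1 - p_survive

ev_survivecone = 100.0  # EV[lightcone | survive]
ev_deathcone = -5.0     # EV[lightcone | die]

ev_lightcone = p_survive * ev_survivecone + p_die * ev_deathcone
print(ev_lightcone)  # 0.7*100 + 0.3*(-5) = 68.5
```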


  • If EV[survivecone] < EV[deathcone], extinction risk reduction is negative EV.[2]
  • If EV[survivecone] > EV[deathcone], extinction risk reduction is positive EV.
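Under footnote 2's simplifying assumption that increasing P(survive) leaves the conditional EVs fixed, the sign of extinction risk reduction is just the sign of EV[survivecone] − EV[deathcone]. A sketch (function name and inputs are hypothetical):

```python
# Marginal EV of a small increase in P(survive), holding the
# conditional EVs fixed (footnote 2's assumption):
#   d EV[lightcone] / d P(survive) = EV[survivecone] - EV[deathcone]

def marginal_value_of_survival(ev_survivecone: float, ev_deathcone: float) -> float:
    return ev_survivecone - ev_deathcone

print(marginal_value_of_survival(100.0, -5.0))  # positive: reduction is positive EV
print(marginal_value_of_survival(-50.0, 0.0))   # negative: reduction is negative EV
```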


  • If Derivative[3](P(survive)) × EV[survivecone] < P(survive) × Derivative(EV[survivecone]), it’s more effective to work on improving EV[survivecone].[4]
  • If Derivative(P(survive)) × EV[survivecone] > P(survive) × Derivative(EV[survivecone]), it’s more effective to reduce extinction risk.
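The comparison above is between the two terms of the product rule applied to P(survive) × EV[survivecone], differentiating with respect to effort. A sketch of the decision rule, with hypothetical marginal estimates as inputs:

```python
# Product rule on EV = P(survive) * EV[survivecone], w.r.t. effort:
#   dEV/deffort = (dP/deffort) * EV[survivecone] + P(survive) * (dEV_s/deffort)
# Work on whichever term is larger. All inputs are hypothetical estimates.

def better_cause(dp_deffort, ev_survivecone, p_survive, dev_deffort):
    marginal_from_risk_reduction = dp_deffort * ev_survivecone
    marginal_from_ev_improvement = p_survive * dev_deffort
    if marginal_from_risk_reduction > marginal_from_ev_improvement:
        return "extinction risk reduction"
    return "improving EV[survivecone]"

print(better_cause(0.01, 100.0, 0.7, 0.5))   # 1.0 > 0.35
print(better_cause(0.001, 100.0, 0.7, 0.5))  # 0.1 < 0.35
```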


  1. ^

    I like to think of surviving as meaning becoming a grabby civilization, but maybe there is a better way to think of it.

  2. ^

    Here I'm just assuming extinction risk reduction doesn't affect the conditional EVs; obviously not true, but assumed for simplicity.

  3. ^

    Where we are differentiating with respect to effort put into each respective cause

  4. ^

    This could be true even if the future were positive in expectation, although it would be a very peculiar situation if that were the case (which is sort of the reason we ended up focused on extinction risk reduction).





Might be better to be more explicit about extinction risk reduction vs existential risk reduction. If EV[survivecone] < EV[deathcone], then extinction risk reduction seems negative EV (ignoring acausal stuff), but increasing the probability of extinction would plausibly reduce existential risk and be positive EV according to your simplified model, and there may be other ways (non-extinction-related ways) to reduce s-risks that are existential while also being positive EV to pursue.

Yep, completely agree; good catch.
