
Summary

The thinking behind the title:

  • Achieving longtermist goals depends on avoiding extinction in the next 100 years.
  • Extinction risk in the next 100 years is sufficiently high that the longtermist movement should optimize almost entirely for reducing it.
  • Reducing extinction risk is only a part of what longtermism asks people to accept, so more people will be convinced by an argument for reducing extinction risk than by an argument for longtermism.
  • So why not focus on the 'reduce extinction risk' argument, which more people will be convinced by and which covers almost all of what longtermism is concerned with anyway?

Example: Will MacAskill and Tyler Cowen

This post was inspired by recently listening to Will MacAskill on Tyler Cowen's podcast (link). Tyler raised a few reservations about longtermism (phrasing below is my own):

  • Does consequentialist reasoning break down on questions about extremely large populations?
  • Similarly, isn't consequentialism inadequate for explaining the difficult-to-measure value of things like arts funding?

If Will had been defending only the reduction of extinction risk, rather than longtermism as a whole, I don't think a single one of Tyler's reservations would have arisen in the first place.

Recent press around the movement focuses more on longtermism than on reducing extinction risk

Here are a few examples: NYTimes, BBC, Time, Vox

If the longtermist movement cares about making a case that is compelling to a larger number of people, I think it should emphasise 'reducing extinction risk' in its public relations.

For what it's worth, I suspect that the majority of readers will find the framing of longtermism in the above articles either interesting but forgettable, or abstract and off-putting.

Admittedly, I am not a journalist, and maybe framing things in terms of the enormous potential for future life in the universe is actually a better way of getting people inspired and excited than talking about extinction (i.e. point 3 of my summary is false). I can at least observe that this isn't a typical journalist's strategy for convincing readers an issue is important. I would be interested to hear more qualified opinions on this question.

Possible concern: do people say 'longtermism' and not just 'extinction risk' because it's more interesting?

I think philosophical writing on longtermism is valuable, and I also value intellectual curiosity for its own sake. But I think promoting discussion of 'reducing extinction risk' over 'longtermism' in public spaces is a more pragmatic way of both getting new people involved with longtermist issues and achieving longtermist goals.

Comments

Suffering risks have the potential to be far, far worse than the risk of extinction. Negative utilitarians and EFILists may also argue that human extinction and biosphere destruction would be a good thing, or at least morally neutral, since a world with no life would have a complete absence of suffering. Whether to prioritize extinction risk depends on the expected value of the far future. If the expected value of the far future is close to zero, it could be argued that improving the quality of the far future in the event we survive is more important than making sure we survive.

One potential argument against (your first bullet point): reducing the risk of human civilization going extinct increases the expected influence humans have over the future. If human influence over the future is expected to make the future better (worse), we want to increase (decrease) it.

A post with some potential reasons why the future might not be so great (+ a Fermi estimate).

 

A more formalized version below (which probably doesn't add anything substantive beyond what I already said). 

Definitions

  • EV[lightcone] is the current expected utility in our lightcone.
  • EV[survivecone] is the expected utility in our lightcone if we “survive”* as a species.
  • EV[deathcone] is the expected utility in our lightcone if we “die”.
  • P(survive) + P(die) = 1.
  • Take x-risk reduction to mean increasing P(survive).

*I like to think of surviving as meaning becoming a grabby civilization, but maybe there is a better way to think of it.

 

Lemma

  • EV[lightcone] = P(survive) × EV[survivecone] + P(die) × EV[deathcone]


equivalently

  • EV[survivecone] = EV[lightcone | survive]
  • EV[deathcone] = EV[lightcone | death]

(thanks kasey)
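
In other words (a sketch of the reasoning, assuming U denotes the total utility realised in our lightcone, a symbol not used above), the lemma is just the law of total expectation applied to U with the conditional definitions above:

\begin{align*}
EV[\text{lightcone}] = \mathbb{E}[U]
  &= P(\text{survive})\,\mathbb{E}[U \mid \text{survive}] + P(\text{die})\,\mathbb{E}[U \mid \text{die}] \\
  &= P(\text{survive})\,EV[\text{survivecone}] + P(\text{die})\,EV[\text{deathcone}].
\end{align*}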

 

Theorem

  • If EV[survivecone] < EV[deathcone], then x-risk reduction is negative EV.
  • If EV[survivecone] > EV[deathcone], then x-risk reduction is positive EV.
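
A brief sketch of why the theorem follows from the lemma, under the assumption that x-risk reduction shifts a small probability δ from “die” to “survive” while leaving EV[survivecone] and EV[deathcone] unchanged:

\begin{align*}
\Delta EV[\text{lightcone}]
  &= \big(P(\text{survive}) + \delta\big)\,EV[\text{survivecone}] + \big(P(\text{die}) - \delta\big)\,EV[\text{deathcone}] \\
  &\quad - \Big(P(\text{survive})\,EV[\text{survivecone}] + P(\text{die})\,EV[\text{deathcone}]\Big) \\
  &= \delta\,\big(EV[\text{survivecone}] - EV[\text{deathcone}]\big),
\end{align*}

which is negative exactly when EV[survivecone] < EV[deathcone] and positive exactly when EV[survivecone] > EV[deathcone].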

 

Corollary 

  • If Derivative(P(survive)) × EV[survivecone] < P(survive) × Derivative(EV[survivecone]), it's more effective to work on improving EV[survivecone].
  • If Derivative(P(survive)) × EV[survivecone] > P(survive) × Derivative(EV[survivecone]), it's more effective to reduce existential risks.

And the first case could hold even if the future were positive in expectation, although it would be a very peculiar situation if that were so (which is sort of the reason we ended up focusing on x-risk reduction).
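
To spell out where the corollary's comparison comes from, here is a minimal sketch assuming (beyond what is stated above) that EV[deathcone] ≈ 0, so that EV[lightcone] ≈ P(survive) × EV[survivecone], and that a marginal unit of effort can either raise P(survive) by ΔP or raise EV[survivecone] by ΔEV (these play the role of the Derivative terms above):

\begin{align*}
\text{x-risk reduction:} &\quad \Delta EV[\text{lightcone}] \approx \Delta P \cdot EV[\text{survivecone}], \\
\text{improving the future:} &\quad \Delta EV[\text{lightcone}] \approx P(\text{survive}) \cdot \Delta EV[\text{survivecone}],
\end{align*}

so whichever product is larger identifies the more effective intervention, matching the corollary's two cases.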
