
This part of Sam Bankman-Fried's interview on the 80K Podcast stood out to me. He's asked about some of his key uncertainties, and one that he offers is:

Maybe a bigger core thing is, as long as we don’t screw things up, [if] we’re going to have a great outcome in the end versus how much you have to actively try as a world to end up in a great place. The difference between a really good future and the expected future — given that we make it to the future — are those effectively the same, or are those a factor of 10 to the 30 away from each other? I think that’s a big, big factor, because if they’re basically the same, then it’s all just about pure x-risk prevention: nothing else matters but making sure that we get there. If they’re a factor of 10 to the 30 apart, x-risk prevention is good, but it seems like maybe it’s even more important to try to see what we can do to have a great future.
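One way to make the trade-off concrete (my own framing of the comparison, not something from the podcast) is to factor the expected value of the future into a survival probability and a conditional value:

```latex
% Minimal sketch, with my own notation (not from the interview):
%   S  = probability that we avoid an extinction-level catastrophe
%   V  = expected value of the future, conditional on survival
%   V* = value of a "great" future
\[
  \mathbb{E}[\text{value}] \;=\; S \cdot V
\]
% X-risk reduction works on S; "improving the future" works on V.
% SBF's uncertainty is about the ratio V*/V: if it is close to 1,
% nothing matters but raising S; if it is more like 10^{30},
% raising V toward V* may matter even more than raising S.
```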

What are the best available resources on comparing "improving the future conditional on avoiding x-risk" vs. "avoiding x-risk"? 


Answers

I asked a similar question before: Is existential risk more pressing than other ways to improve the long-term future?

As your question states, there are two basic types of trajectory changes:

  • increasing our chance of having control over the long-term future (reducing x-risks); and
  • making the future go better conditional on us having control over it.

You might think reducing x-risks is more valuable if you think that:

  • reducing x-risk will greatly increase the expected lifespan of humanity (for example, halving x-risk at every point in time doubles humanity's expected lifespan; see the sketch after this list); and
  • conditional on there being a future, the future is likely to be good without explicit interventions by us, or such interventions are unlikely to improve the future.
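To see where the doubling claim in the first bullet comes from, here is a minimal sketch assuming a constant per-period probability p of existential catastrophe (real hazard rates of course vary over time):

```latex
% With a constant per-period catastrophe probability p, survival time is
% geometrically distributed, so the expected lifespan (in periods) is
\[
  \mathbb{E}[T] \;=\; \sum_{t=1}^{\infty} t \, p \, (1-p)^{t-1} \;=\; \frac{1}{p}.
\]
% Halving the risk at every point in time replaces p with p/2:
\[
  \mathbb{E}[T] \;=\; \frac{1}{p/2} \;=\; \frac{2}{p},
\]
% i.e. exactly double the original expected lifespan.
```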

On the other hand, if you think that the future is unlikely to go well without intervention, then you might want to focus on the second type of trajectory change.

For example, I think there is a substantial risk that our decisions today will perpetuate astronomical suffering over the long-term future (e.g. factory farming in space, artificial minds being mistreated), so I prioritize s-risks over extinction risks.

On the other hand, I think economic growth is less valuable than x-risk reduction because there's only room for a few more millennia of sustained economic growth, whereas humanity could last millions of years if we avoid x-risks.
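As a rough illustration of why sustained growth can only continue for a few more millennia (my own back-of-the-envelope arithmetic, assuming roughly 2% annual growth and a commonly cited figure of about 10^70 atoms in our galaxy; estimates of that figure vary, but the conclusion is not sensitive to it):

```latex
% At 2% annual growth, the economy multiplies by (1.02)^t after t years.
% Asking when that factor reaches ~10^{70} (about one present-day world
% economy's worth of value per atom in the galaxy, a deliberately generous bound):
\[
  (1.02)^{t} = 10^{70}
  \quad\Longrightarrow\quad
  t \;=\; \frac{70 \ln 10}{\ln 1.02} \;\approx\; 8{,}100 \text{ years},
\]
% i.e. on the order of eight millennia, far less than the millions of years
% humanity could last if x-risks are avoided.
```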

Comments

I would replace "avoiding x-risk" with "avoiding stuff like extinction" in this question. SBF's usage is nonstandard: an existential catastrophe is typically defined as something that leaves us able to achieve at most a small fraction of our potential, so an event that caused us to achieve only 10^-30 of our potential would itself be an existential catastrophe. If we avoid existential catastrophe, the future is great by definition.

Regardless, I'm not aware of much thought on how to improve the future conditional on avoiding stuff like extinction (or similar questions, like how to improve the future conditional on achieving aligned superintelligence).

> Regardless, I'm not aware of much thought on how to improve the future conditional on avoiding stuff like extinction (or similar questions, like how to improve the future conditional on achieving aligned superintelligence).

Most work on s-risks, such as that done by the Center for Reducing Suffering and the Center on Long-Term Risk, is an example of this type of research, although it is restricted to a subset of the ways to improve the future conditional on non-extinction.

> If we avoid existential catastrophe, the future is great by definition.

(Note that this only follows if you assume that humanity has the potential for greatness.)
