In this question, I am assuming ethical longtermism: that our objective is to maximize total well-being over the long term. Many longtermist EAs seem to believe that the highest-impact way to improve the far future is to reduce existential risks to humanity. However, there are other ways to improve the far future: speeding up technological progress, speeding up moral progress, improving institutions, settling space, and so on. (I think of these as improving the quality of the far future, conditional on avoiding an existential catastrophe.) What are some arguments for why existential risk reduction is more pressing than these other levers, or vice versa?

[Edit: I'm especially interested in which lever is most pressing when we take the welfare of non-human animals into account.]

Pedro Oliboni wrote a paper that addresses one aspect of my question, the tradeoff between existential risk reduction and economic growth: On The Relative Long-Term Future Importance of Investments in Economic Growth and Global Catastrophic Risk Reduction.
