Part 5: Existential Risk

“If we drop the baton, succumbing to an existential catastrophe, we would fail our ancestors in a multitude of ways. We would fail to achieve the dreams they hoped for; we would betray the trust they placed in us, their heirs; and we would fail in any duty we had to pay forward the work they did for us. To neglect existential risk might thus be to wrong not only the people of the future, but the people of the past.”

- Toby Ord

 

We can help build a better future for trillions of people. But the loss of human civilization could obliterate that potential.

If we want to do as much good as we can and create better lives for our descendants, we should consider the ways we could destroy ourselves, and figure out how to prevent them.

In this sequence, we'll define "existential risk", explore strategies for addressing it, and examine why this work might be both important and neglected.

Two ways to read

There are two ways to get started, depending on whether you have access to Toby Ord's The Precipice — and if you don't, we'll send you a free copy!

First option (no book): Read the sequence as written (click on "Start reading").

Second option (book): Read chapters 2 and 4 of The Precipice, as well as 80,000 Hours' "Policy and research ideas to reduce existential risk".

 

Start reading 

← Part 4: Longtermism

→ Part 6: Emerging Technologies

Organization Spotlight: Future of Humanity Institute

The Future of Humanity Institute (FHI) is a multidisciplinary research institute working on big-picture questions for human civilisation and exploring what can be done now to ensure a flourishing long-term future.

Currently, their four main research areas are:

  • Macrostrategy - investigating which crucial considerations are shaping what is at stake for the future of humanity
  • Governance of AI - examining how geopolitics, governance structures, and strategic trends will affect the development of advanced AI
  • AI Safety - researching computer science techniques for building safer artificially intelligent systems
  • Biosecurity - working with institutions around the world to reduce risks from especially dangerous pathogens
