
The definition of existential risk as ‘humanity losing its long-term potential’ in Toby Ord's The Precipice could be specified further. Assuming (perhaps without loss of generality) finite total value in our universe, one could split existential risks into two broad categories:

  • Extinction risks (X-risks): the human share of total value goes to zero. Examples include extinction from pandemics, extreme climate change, or some natural event.
  • Agential risks (A-risks): the human share of total value may be greater than in the X-risk scenarios, but remains strictly dominated by the share of total value held by misaligned agents. Examples include misaligned institutions, AIs, or loud aliens controlling most of the value in the universe, with whom little gain from trade could be hoped for. (A rough formalization of the two categories is sketched after this list.)
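
As a minimal sketch of this distinction, using symbols introduced here only for illustration ($V$ for total value in the universe, $V_H$ for the value held by humanity, $V_M$ for the value held by misaligned agents), one possible reading is:

$$\text{X-risk:}\quad \frac{V_H}{V} \to 0 \qquad\qquad \text{A-risk:}\quad 0 < \frac{V_H}{V} < \frac{V_M}{V}$$

On this reading, A-risks leave humanity with a non-zero but strictly dominated share of the (assumed finite) total value.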