[ Question ]

What actions would obviously decrease x-risk?

by reallyeli · 6th Oct 2019 · 26 comments



Consider all the actions that individuals or groups could take tomorrow. Are there any which would "obviously" (meaning: you believe it with high probability, and you expect that belief to be uncontroversial) result in decreased x-risk?

(For example, consider reducing the size of Russia and the US's nuclear stockpiles. I'm curious if this is on the list.)

(I include "which individuals or groups could take" because I am interested in what actions we could take if we all coordinated perfectly. For example, neither Russia nor the US can unilaterally reduce both their stockpiles, and perhaps it would increase x-risk for one of them to lower only theirs, but the group consisting of the US and Russian governments could theoretically agree to lower both stockpiles.)


6 Answers

ALLFED (the Alliance to Feed the Earth in Disasters) and related projects like seed banks seem pretty uncontroversially likely to reduce the risk of human extinction.

Develop and deploy a system to protect Earth from impacts from large asteroids, etc.

I'd suggest that work allowing vaccines to be developed much more quickly falls into this category; it was mentioned in the 80,000 Hours podcast with Tom Kalil.

"I was able to get some additional funding for this new approach [to develop vaccines more quickly] and my primary motivation for it was, maybe it’ll help in Ebola, but almost certainly if it works it will improve our ability to respond to future emerging infectious diseases, or maybe even a world of engineered pathogens."

https://80000hours.org/podcast/episodes/tom-kalil-government-careers/

In this talk on 'Crucial considerations and wise philanthropy', Nick Bostrom tentatively mentions some actions that appear to be robustly x-risk reducing, including promoting international peace and cooperation, growing the effective altruism movement, and working on solutions to the control problem.

Increasing the ease, and decreasing the formality, of world leaders talking to each other, as with the Red Phone. That world leaders mostly get educated at the same institutions also helps enormously with communication, though it increases other marginal risks due to correlated blind spots.

Biorisk mitigation becoming a much higher-status field and thus attracting more top talent.

Pakistan not having nukes.

  • Most actions that seem to make arms races or war less likely, e.g. the world's major powers committing to strengthening international institutions and multilateralism.
  • Any well-connected and well-resourced actor dedicating themselves to researching ways to improve decision-making that affects the long term in large institutions.
  • Everyone in the AI research community taking a few weeks to engage deeply with AI risk arguments.