I'm dissatisfied with my explanation of why there is not more attention from EAs and EA funders on nuclear safety and security, especially relative to, e.g., AI safety and biosecurity. This has come up a lot recently, particularly since the release of Oppenheimer. I'm worried I'm not capturing the current state of affairs accurately, and consequently not facilitating fully contextualized dialogue.
What is your best short explanation?
(To be clear, I know many EAs and EA funders are working on nuclear safety and security, so this is more a question of resource allocation than of inclusion in the broader EA cause portfolio.)
It seems very unlikely that a nuclear war would kill all of us, whereas this seems more plausible for biorisk.
I'm not sure this should affect funding in general, but explicitly longtermist funders will therefore weight biorisk more heavily.
A nuclear war happening at the same time as a supervolcanic eruption is very unlikely. However, it could take a hundred thousand years for the population to recover, so if supervolcanic eruptions occur roughly every 30,000 years on average, it's quite likely one would happen before we recover.
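A rough way to see the "quite likely" claim: if we model supervolcanic eruptions as a Poisson process with an average interval of ~30,000 years (a modeling assumption for illustration, along with the ~100,000-year recovery figure above), the chance of at least one eruption during the recovery window is about 1 − e^(−100,000/30,000), i.e. roughly 96%. A minimal sketch of that back-of-the-envelope calculation:

```python
import math

# Back-of-the-envelope check, assuming supervolcanic eruptions follow a
# Poisson process with one eruption every ~30,000 years on average
# (an illustrative assumption, using the figures quoted above).
recovery_years = 100_000      # assumed time to recover population after a nuclear war
mean_interval_years = 30_000  # assumed average gap between supervolcanic eruptions

expected_eruptions = recovery_years / mean_interval_years  # ~3.3 expected eruptions
p_at_least_one = 1 - math.exp(-expected_eruptions)         # ~0.96

print(f"Expected eruptions during recovery: {expected_eruptions:.1f}")
print(f"P(at least one eruption before recovery): {p_at_least_one:.0%}")
```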
The scenario I'm talking about is one where the worsening climate and loss of techno...