I'm dissatisfied with my explanation of why there is not more attention from EAs and EA funders on nuclear safety and security, especially relative to e.g. AI safety and biosecurity. This has come up a lot recently, especially after the release of Oppenheimer. I'm worried I'm not capturing the current state of affairs accurately and consequently not facilitating fully contextualized dialogue.
What is your best short explanation?
(To be clear, I know many EAs and EA funders are working on nuclear safety and security, so this is more a question of resource allocation than of inclusion in the broader EA cause portfolio.)
My argument does say something about how nuclear risk should be prioritised: if both nuclear risk and AI/bio risks were real, nuclear would be a lower priority, maybe much lower.
The complication is that nuclear risks clearly do exist, whereas biorisk and AI risk are much more speculative in terms of whether they actually exist. In that sense, I can believe nuclear should be funded more.