I'm dissatisfied with my explanation of why nuclear safety and security receive less attention from EAs and EA funders than, e.g., AI safety and biosecurity. This has come up a lot recently, especially after the release of Oppenheimer. I'm worried I'm not capturing the current state of affairs accurately, and consequently not facilitating well-contextualized dialogue.
What is your best short explanation?
(To be clear, I know many EAs and EA funders are working on nuclear safety and security, so this is more a question of resource allocation than of inclusion in the broader EA cause portfolio.)
This characterization seems to me pretty at odds with recent EA work, e.g. from Longview and from my colleague Christian Ruhl at FP, who tend to argue that the philanthropic space on nuclear risk is very funding-constrained and that plenty of good funding margins remain unfilled.