Data scientist working on AI governance at MIRI, previously forecasting at Epoch and the Stanford AI Index. GWWC pledge member since 2017. Formerly social chair at Harvard Effective Altruism, facilitator for Arete Fellowship, and founder of the DC Slate Star Codex meetup.
By "greater threat to AI safety" you mean it causes a larger share of x-risk, right? As opposed to being a threat to the AI safety field itself, e.g. by trying to get safety researchers removed from industry/government (like this).
"Individual donors shouldn't diversify their donations"
Arguments in favor:
Arguments against:
Moral uncertainty is irrelevant at the level of individual donors: any one donor's gift is too small to shift the overall allocation across worldviews, so hedging within a single donor's budget accomplishes little.