[This post was written quickly and presents the idea in broad strokes. I hope it prompts more nuanced and detailed discussions in the future.]
In recent years, many in the Effective Altruism community have shifted to working on AI risks, reflecting the growing consensus that AI will profoundly shape our future.
In response to this significant shift, there have been efforts to preserve a "principles-first EA" approach, or to give special thought to how to support non-AI causes. This has often led to discussions being framed as "AI Safety vs. everything else", and the community feels somewhat divided along the following lines:
- Those working on AI Safety, because they believe that transformative AI is coming.
- Those focusing on other causes, implicitly acting as if transformative AI is not coming.[1]
Instead of framing priorities this way, I believe it would be valuable for more people to adopt a mindset that assumes transformative AI is likely coming and asks: What should we work on in light of that?
If we accept that AI is likely to reshape the world over the next 10–15 years, this realisation has major implications for all cause areas. As a starting point, we should seriously ask ourselves: "Are current GHW and animal welfare projects robust to a future in which AI transforms economies, governance, and global systems?" If they aren't, they are unlikely to be the best use of resources.
Importantly, this isn't an argument that everyone should work on AI Safety. It's an argument that all cause areas need to integrate the implications of transformative AI into their theory of change and strategic frameworks. To ignore these changes is to risk misallocating resources and pursuing projects that won't stand the test of time.
- ^
Important to note: Many people believe that AI will be transformative, but choose not to work on it due to factors such as (perceived) lack of personal fit or opportunity, personal circumstances, or other practical considerations.
A lot of EAs do think AI safety could become extremely important (i.e. they put some probability mass on very short timelines) but are not in a position to do anything about it, which is why they focus on more tractable areas (e.g. global health, animal welfare, EA community building) under the assumption of longer AI timelines. Especially because there's a lot of uncertainty about when AGI will arrive.
My internal view is a 25% chance of TAI by 2040 and a 50% chance by 2060, where I define TAI as an AI with the ability to autonomously perform AI research. These estimates may have shifted in light of DeepSeek, but what am I supposed to do? I'm just a freshman college student at a non-prestigious university. Am I supposed to drop all my commitments, speed-run my degree, get myself into a highly competitive AI lab (which would probably require a PhD), and work on technical alignment hoping for a breakthrough? If TAI comes within 5 years, that would be the right move, but if I'm wrong I would end up with very shallow skills and little experience.
We have the following Pascal matrix (drafted by GPT):
I know the decision is not binary, but I am definitely willing to forfeit 25% of my impact by betting on the "AGI comes late" scenario. I do think non-AI cause areas should incorporate AI projections into their deliberations and theories of change, but it would be silly to cut out everything that happens after 2040 with respect to the cause area.
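To make the "forfeit 25%" framing concrete, here is a minimal expected-value sketch. The only number taken from the comment is the 25% probability of TAI by 2040; the impact payoffs (full impact in the world you bet on, zero or partial impact otherwise) are hypothetical placeholders, not the author's actual estimates.

```python
# Hypothetical expected-value sketch of the timelines bet described above.
# Only p_tai_by_2040 comes from the comment; the payoffs are illustrative.

p_tai_by_2040 = 0.25  # stated probability of TAI arriving by 2040

# Bet on "AGI comes late": full impact if TAI is late, none if it's early.
ev_bet_late = (1 - p_tai_by_2040) * 1.0 + p_tai_by_2040 * 0.0

# Pivot to AI safety now: full impact if TAI is early, but (hypothetically)
# only half impact if it's late, due to shallow skills in a wrong-bet world.
ev_pivot_now = p_tai_by_2040 * 1.0 + (1 - p_tai_by_2040) * 0.5

print(f"EV of betting late:  {ev_bet_late:.3f}")   # 0.750
print(f"EV of pivoting now: {ev_pivot_now:.3f}")   # 0.625
```

Under these (made-up) payoffs, betting late forfeits exactly the 25% of probability mass on early TAI, which is the trade-off the comment describes; different payoff assumptions would of course flip the conclusion.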
However, I do think EAs should have a contingency plan: speed-run into AI safety if and only if certain trigger conditions occur (e.g. even conservative superforecasters project AGI before 2040, or a national emergency is declared). And we can probably hedge against the "AGI comes soon" scenario by buying long-term NVIDIA call options.