[This post was written quickly and presents the idea in broad strokes. I hope it prompts more nuanced and detailed discussions in the future.]
In recent years, many in the Effective Altruism community have shifted to working on AI risks, reflecting the growing consensus that AI will profoundly shape our future.
In response to this significant shift, there have been efforts to preserve a "principles-first EA" approach, or to give special thought to how to support non-AI causes. This has often led to discussions being framed as "AI Safety vs. everything else", and it feels like the community is somewhat divided along the following lines:
- Those working on AI Safety, because they believe that transformative AI is coming.
- Those focusing on other causes, implicitly acting as if transformative AI is not coming.[1]
Instead of framing priorities this way, I believe it would be valuable for more people to adopt a mindset that assumes transformative AI is likely coming and asks: What should we work on in light of that?
If we accept that AI is likely to reshape the world over the next 10–15 years, this realisation has major implications for every cause area. As a starting point, we should seriously ask ourselves: "Are current GHW and animal welfare projects robust to a future in which AI transforms economies, governance, and global systems?" If they aren't, they are unlikely to be the best use of resources.
Importantly, this isn't an argument that everyone should work on AI Safety. It's an argument that all cause areas need to integrate the implications of transformative AI into their theory of change and strategic frameworks. To ignore these changes is to risk misallocating resources and pursuing projects that won't stand the test of time.
[1] Note: many people believe that AI will be transformative but choose not to work on it due to factors such as (perceived) lack of personal fit or opportunity, personal circumstances, or other practical considerations.
Strong upvote! I want to add some thoughts, particularly within the context of global development:
The intersection of AI and global development seems surprisingly unsaturated within EA. To be more specific, I think surprisingly few EAs think about the following questions:
i) How can AI be leveraged for development (e.g. AI tools for education and healthcare)?
ii) Which interventions and strategies should be prioritized within global health and development in light of AI developments? (basically the question you ask)
There seem to be a lot of people thinking about the first question outside of EA, so maybe that explains this dynamic. But my hunch is that the primary reason people don't focus much on the first question is excessive deference and selection effects, rather than a lack of high-impact interventions: if you care about TAI, you are very likely to work on AI alignment & governance; if you don't want to work on TAI-related things (due to risk-aversion or any other argument/value), you just don't update much based on AI developments and forecasts.

This may also have to do with EA's ambiguity-averse/risk-averse attitude towards GHD, characterized by exploiting evidence-based interventions rather than exploring new, highly promising ones. If a student or professional came to an EA community-builder and asked "How can I pursue a high-impact career in, or upskill in, global health R&D or AI-for-development?", the number of community-builders who could give a sufficiently helpful answer is likely very small. I probably couldn't give a good answer myself, or point them to communities/resources outside of the EA community.
(Maybe EAs in London or SF discuss these, but I don't see any discussion of it online, nor any spaces where the people who could be discussing these topics can network. If anyone would like to help create or run an online or in-person AI-for-development or global health R&D fellowship, feel free to shoot me a message.)