[This post was written quickly and presents the idea in broad strokes. I hope it prompts more nuanced and detailed discussions in the future.]
In recent years, many in the Effective Altruism community have shifted to working on AI risks, reflecting the growing consensus that AI will profoundly shape our future.
In response to this significant shift, there have been efforts to preserve a "principles-first EA" approach, or to give special thought to how to support non-AI causes. This has often led to discussions being framed around "AI Safety vs. everything else", and it feels like the community is somewhat divided along the following lines:
- Those working on AI Safety, because they believe that transformative AI is coming.
- Those focusing on other causes, implicitly acting as if transformative AI is not coming.[1]
Instead of framing priorities this way, I believe it would be valuable for more people to adopt a mindset that assumes transformative AI is likely coming and asks: What should we work on in light of that?
If we accept that AI is likely to reshape the world over the next 10–15 years, this realisation will have major implications for all cause areas. But just to start, we should seriously ask ourselves: "Are current global health and wellbeing (GHW) and animal welfare projects robust to a future in which AI transforms economies, governance, and global systems?" If they aren't, they are unlikely to be the best use of resources.
Importantly, this isn't an argument that everyone should work on AI Safety. It's an argument that all cause areas need to integrate the implications of transformative AI into their theory of change and strategic frameworks. To ignore these changes is to risk misallocating resources and pursuing projects that won't stand the test of time.
- ^ Important to note: many people believe that AI will be transformative, but choose not to work on it due to factors such as (perceived) lack of personal fit or opportunity, personal circumstances, or other practical considerations.
I suspect this might be two distinct uses of "AI" as a term. While GPT-type chatbots can be helpful (such as in the educational examples you refer to), they are very different from the artificial general intelligence that most AI alignment/safety work anticipates.
To paraphrase AI Snake Oil,[1] it is like one person talking about vehicles while discussing how improved spacecraft will open up new possibilities for humanity, and a second person mentioning how vehicles are also helping their area because cars are becoming more energy efficient. While both do fall under the category of "vehicles," they are quite different concepts. So I'm wondering if this might be verging on talking-past-each-other territory.
The full quote is this: "Imagine an alternate universe in which people don’t have words for different forms of transportation—only the collective noun “vehicle.” They use that word to refer to cars, buses, bikes, spacecraft, and all other ways of getting from place A to place B. Conversations in this world are confusing. There are furious debates about whether or not vehicles are environmentally friendly, even though no one realizes that one side of the debate is talking about bikes and the other side is talking about trucks. There is a breakthrough in rocketry, but the media focuses on how vehicles have gotten faster—so people call their car dealer (oops, vehicle dealer) to ask when faster models will be available. Meanwhile, fraudsters have capitalized on the fact that consumers don’t know what to believe when it comes to vehicle technology, so scams are rampant in the vehicle sector. Now replace the word “vehicle” with “artificial intelligence,” and we have a pretty good description of the world we live in."