[This post was written quickly and presents the idea in broad strokes. I hope it prompts more nuanced and detailed discussions in the future.]
In recent years, many in the Effective Altruism community have shifted to working on AI risks, reflecting the growing consensus that AI will profoundly shape our future.
In response to this shift, there have been efforts to preserve a "principles-first EA" approach, or to give special thought to how to support non-AI causes. This has often led to discussions being framed as "AI Safety vs. everything else", and it feels like the community is somewhat divided along the following lines:
- Those working on AI Safety, because they believe that transformative AI is coming.
- Those focusing on other causes, implicitly acting as if transformative AI is not coming.[1]
Instead of framing priorities this way, I believe it would be valuable for more people to adopt a mindset that assumes transformative AI is likely coming and asks: What should we work on in light of that?
If we accept that AI is likely to reshape the world over the next 10–15 years, that has major implications for every cause area. As a starting point, we should ask ourselves: "Are current global health and wellbeing (GHW) and animal welfare projects robust to a future in which AI transforms economies, governance, and global systems?" If they aren't, they are unlikely to be the best use of resources.
Importantly, this isn't an argument that everyone should work on AI Safety. It's an argument that all cause areas need to integrate the implications of transformative AI into their theory of change and strategic frameworks. To ignore these changes is to risk misallocating resources and pursuing projects that won't stand the test of time.
[1] Many people believe that AI will be transformative but choose not to work on it due to factors such as (perceived) lack of personal fit or opportunity, personal circumstances, or other practical considerations.
Animal welfare guy tuning in. My own take is that the majority of the world is almost entirely indifferent to animal suffering, so if AI tries to reflect global values (not just the values of the progressive, elite Silicon Valley bubble), there is a real risk that it will be indifferent to animal suffering too. Consider that foie gras is still legal in most countries, as is bullfighting, both of which are totally unnecessary. And those are just examples from Western countries.
I think it's very likely that TAI will lock in only a very mild concern for animal welfare, or perhaps concern for animal welfare in certain contexts (e.g. pets) and none in others (e.g. chickens). Maybe that will lead to a future without factory farming, but it will be a future with unnecessary animal suffering nonetheless.
What I'm not sure about is how we ensure that TAI locks in a strong valuation of animal welfare. One route is to try to change how much society cares about animal welfare and hope that TAI then reflects that; I guess this is the hope of many animal advocates. But I admit that seems too slow to work at this stage, so I agree that animal advocates should probably prioritize trying to influence those developing AI right now.