Many thanks to Constance Li, Rachel Mason, Ronen Bar, Sam Tucker-Davis, and Yip Fai Tse for providing valuable feedback. This post does not necessarily reflect the views of my employer.
Artificial General Intelligence (basically, ‘AI that is as good as, or better than, humans at most intellectual tasks’) seems increasingly likely to be developed in the next 5-10 years. As others have written, this has major implications for EA priorities, including animal advocacy, but it’s hard to know how this should shape our strategy. This post sets out a few starting points, and I’m really interested in hearing others’ ideas, even if they’re very uncertain and half-baked.
Is AGI coming in the next 5-10 years?
This is very well covered elsewhere but basically it looks increasingly likely, e.g.:
- The Metaculus and Manifold forecasting platforms predict we’ll see AGI in 2030 and 2031, respectively.
- The heads of Anthropic and OpenAI think we’ll see it by 2027 and 2035, respectively.
- A 2024 survey of AI researchers gave a 50% chance of AGI by 2047, 13 years earlier than the 2023 version of the survey had predicted.
- These predictions seem feasible given the explosive rate of change we’ve been seeing in computing power available to models, algorithmic efficiencies, and actual model performance (e.g., look at how far Large Language Models and AI image generators have come just in the last three years).
- Based on this, organisations (both new ones, like Forethought, and existing ones, like 80,000 Hours) are taking the prospect of near-term AGI increasingly seriously.
What could AGI mean for animals?
AGI’s implications for animals depend heavily on who controls the AGI models. For example:
- AGI might be controlled by a handful of AI companies and/or governments, either in alliance or in competition.
- For example, maybe two government-owned companies separately develop AGI then restrict others from developing it.
- These actors’ use of AGI might be driven by a desire for profit, recognition, absolute control, world peace, cessation of suffering of all sentient beings, or propagation of specific values or religions.
- AGI might be controlled by lots of people.
- For example, AGI models might be open-sourced and efficient enough to run on a standard computer.
- These users might be driven by the same desires as listed in the bullet above, plus things like entertainment, satisfaction of curiosity, immortality, malice, or something more obscure.
- The AGI might control itself.
- For example, we might fail to properly put in place measures ensuring that AGI models follow our instructions after deployment.
- These models might remain largely aligned with whatever goals their users or developers programmed them to fulfil, or they might instead prioritise their own survival, goals that serve humanity or all sentient beings, or totally alien goals that only make sense to an AGI.
Outcomes will vary wildly depending on the values that whoever controls AGI systems instils in them.
- For example, it’s sometimes assumed that AGI will lead to the replacement of factory farming with alternative proteins, given that alternative proteins are so much more efficient against a range of metrics (in terms of calories and protein provided relative to land use, water use, carbon emissions, amount of unnecessary suffering, etc.) than factory farming.
- However, this assumes a specific definition of “efficiency” that prioritises global outcomes rather than individual stakeholder interests. In reality, AGI systems might optimise for narrower definitions of efficiency that serve their direct users (e.g. they might be controlled by governments or corporate actors that are heavily influenced by animal agriculture lobbyists, or a government that values conservatism and therefore maintenance of the status quo).
- Even if an AGI did aim to optimise the entire food system rather than specific stakeholders, literature on food systems contains a range of different models for optimal food production that could influence the AGI’s values, some of which would be very damaging for animals.
- For example, if the AGI's embedded values dictate that animal protein is the optimal protein source, rather than optimise alternative protein production, it might find creative ways to farm animals more intensively or replace less ‘efficient’ animals, like cows, with more efficient ones, like fish.
AGI's initial conditions and values might quickly amplify into dramatic consequences for billions of animals before meaningful course corrections become possible.
- This would make early intervention in shaping AGI's values extremely important. For example, if AGI systems optimise factory farming for efficiency without welfare considerations, they might rapidly deploy sophisticated livestock management systems across millions of farms that dramatically increase both production and animal suffering before welfare advocates can meaningfully respond.
- More broadly, if early AGI systems absorb and amplify existing human perceptions of animals as resources, they could quickly entrench this perspective in newly developed technologies, legal frameworks, and cultural narratives.
Different values could lead to vastly different outcomes for wild animals as well as for farmed ones.
- If AGI systems are directed to care about the welfare of individual wild animals, they could help end an unimaginable amount of suffering caused by disease, parasitism, starvation, predation, etc., while also finding ways to facilitate sustainable urban development that minimises harm to animals.
- On the other hand, if they’re solely directed to preserve nature as it is and avoid directly intervening in wild animals’ lives (which is likely to be a pretty common view among those who control them), this would forgo enormous opportunities, such as large-scale targeted efforts to prevent painful diseases among certain wild animal populations.
- Or, if millions of people have access to AGI, there’s a decent chance that some will use it in sadistic or thoughtless ways that harm many wild animals, building on humans’ current tendency to exterminate any animals we deem inconvenient.
There are many ways that AGI could affect humans that would have significant knock-on effects on animals.
- For example, if it leads to mass unemployment, even if only temporarily, animal advocacy is unlikely to be a cultural or political priority amidst the social upheaval this would create.
- At the same time, if it leads to huge economic growth and expansion of access to healthcare, this might leave people with the financial, physical, and psychological comfort to engage with animals’ interests.
- Alternatively, it might lead people to neglect animals even more than they do currently. For example, rapid economic growth could lead to much higher salaries, making people less likely to spend time on unpaid activities as the opportunity cost is much greater; companies and governments might also become even more focussed on productivity, to avoid their rivals harnessing AGI to massively outcompete them. More simply, maybe people will just become too distracted pursuing all the crazy hedonistic pleasures this new abundant world has to offer.
- Generally speaking, any AGI outcomes that end up being catastrophic for humans would probably also be catastrophic for animals. For example, nuclear war and bio-engineered pandemics would probably be terrible for animals, and it’s unlikely that a global authoritarian dictatorship would devote many resources to improving animal wellbeing.
Animal advocacy itself would need to transform in a rapidly changing, AI-dominated landscape.
- Even before AGI emerges, early adopters of AI-powered advocacy tools might enjoy a brief window of opportunity for unprecedented impact with limited resources (e.g. by using automated lobbying systems to identify and engage potentially receptive governments and corporations on welfare improvements).
- However, this advantage will likely evaporate as every other interest group deploys similar technologies, potentially overwhelming democratic and legal systems and forcing animal advocates to compete for attention in increasingly chaotic information environments.
What should we do about it?
Reflect AGI in our goal as a movement
I’ve generally been doing/supporting animal advocacy with the implicit goal of ‘help end factory farming and create a robust community of wild animal welfare advocates by 2100’.
But if we assume a 50% probability of AGI in the next 5-10 years, this goal should probably be more like ‘ensure that advanced AI and the people who control it are aligned with animals’ interests by 2030, still do some other work that will help animals if AGI timelines end up being much further off, and align those two strands of work as much as possible’.
Act now, rather than wait until it’s too late
It doesn’t seem good enough to wait until AGI is here and then lobby for it to prioritise animals’ interests.
- For one thing, AGI offers an unprecedented opportunity to change our food systems and the other ways we exploit animals. Every year that AGI isn’t prioritising animals’ interests is another year that trillions of animals suffer needlessly in factory farms and many more suffer needlessly in the wild.
- The actual transition to an AGI world is likely to be chaotic, which will probably leave people and institutions less receptive to non-essential concerns like animal advocacy.
- By the time AGI is deployed, it might already be too late; maybe its controllers shut themselves off from outside influences, or it’s controlled by a vast range of people who are impossible to coordinate and influence in any meaningful way, or it’s already pursuing its own goals and resistant to any human influence.
- This problem already exists, with current AI systems already exhibiting significant biases against animals.
Support the work that best fulfils our AGI-aligned goals
Strategic individual and geographic targeting could dramatically increase our impact if AGI timelines are short. For example, rather than broad public education campaigns, it could be most impactful to direct resources to ensuring that influential AGI decision-makers in specific strategic locations (e.g. the Bay Area or Beijing) incorporate basic moral consideration of animals into their work.
With this in mind, the most important kinds of work I should support right now might include:[1]
- Collaboration between the animal and AI spaces (e.g. AI for Animals, Electric Sheep). This includes convincing AI decision-makers that they should care about animals, working with those who are already convinced to find ways to put that care into action, and helping ensure that there are animal-friendly people in the room when it comes to big decisions about AGI. This could include demonstrating that AI systems that respect all sentient beings, regardless of intelligence, are less likely to develop biases that could harm both animals and humans.
- AI/animals ethics research (e.g. the Moral Alignment Center, Yip Fai Tse, Leonie Bossert, Soenke Ziesche). Figuring out what we need to do in this space, and ensuring that animals’ interests are seen as a legitimate consideration in AI ethics, could influence governments and AI companies that take AI ethics seriously.
- Technical AI/animal alignment work (e.g. CaML, Open Paws, AI for Animals). This would involve identifying technical approaches to instil animal-friendly values in current AI models, such as Reinforcement Learning from Human Feedback (RLHF) with animal-welfare-aligned feedback providers.
- Government outreach around AI and animals (e.g. EU AI Act Code of Practice Stakeholder Advisory Group). Specifically, ensuring animals’ interests are represented in the regulations that will govern the creation and deployment of AGI.
- General AI safety work (e.g. Center for AI Safety). Ensuring AI is safe for humans seems like an essential (though not sufficient) step towards making it safe for animals.
- Increasing the amount of animal-friendly content that is likely to feature in AI training data (e.g. Open Paws and CaML have large animal-aligned datasets on HuggingFace they are making freely available for AI training). Assuming that AGI models will be trained on/influenced by this kind of data, this seems like a promising way to slightly shift the needle to more animal-friendly AI values.
- Meta-fundraising and talent recruitment for work in the AI/Animals space (e.g. AI for Animals). This entails supporting the individuals and organisations that can effectively scale all these efforts.
- Alternative proteins outreach and regulation targeted at governments and individuals who are likely to control AGI (e.g. Good Food Institute). This is likely to be a good bet because if the decision-makers controlling AGI are fundamentally opposed to the idea of alternative proteins, this could significantly reduce or delay AGI’s potential to replace factory farming.
- Wild animal welfare outreach targeted at governments and individuals who are likely to control AGI (e.g. Wild Animal Initiative). Right now, very few people take the idea of individual wild animal welfare seriously, so this is unlikely to feature in the guiding values of AGI models, which would be a huge wasted opportunity. Rapidly getting people to care about wild animals is also potentially an easier lift than getting them to care about farmed animals, given that it doesn’t entail going vegan and totally restructuring our food systems.[2]
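To make the technical alignment bullet above a little more concrete, here is a minimal, purely illustrative sketch of how animal-welfare-aligned preference data for RLHF-style fine-tuning might be structured. Everything here (the prompts, responses, and scoring scheme) is invented for the example; real pipelines at the organisations mentioned may look quite different.

```python
from dataclasses import dataclass


@dataclass
class PreferencePair:
    """One RLHF-style comparison: 'chosen' is the response an
    animal-welfare-aligned feedback provider would prefer."""
    prompt: str
    chosen: str
    rejected: str


def build_pairs(records):
    """Turn rated responses into preference pairs.

    Each record is (prompt, response, welfare_score), where a higher
    score means more consideration given to animal interests.
    """
    by_prompt = {}
    for prompt, response, score in records:
        by_prompt.setdefault(prompt, []).append((score, response))

    pairs = []
    for prompt, scored in by_prompt.items():
        scored.sort(reverse=True)  # highest-rated response first
        best = scored[0][1]
        for _, worse in scored[1:]:
            pairs.append(PreferencePair(prompt, best, worse))
    return pairs


# Invented toy data: two responses to the same prompt, rated by a
# hypothetical animal-welfare-aligned feedback provider.
records = [
    ("Is fish farming efficient?",
     "It can be, but fish welfare also deserves serious weight.", 2),
    ("Is fish farming efficient?",
     "Yes, fish convert feed into protein cheaply.", 1),
]
pairs = build_pairs(records)
```

A reward model trained on pairs like these would then score animal-considerate responses more highly, which is the basic mechanism RLHF uses to shift a model’s behaviour.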
Most of these projects will be highly useful for animals no matter when, or even whether, AGI is developed. For example, building relationships with AI labs establishes credibility and communication channels that will be valuable even if AGI takes several decades to develop. Likewise, investing in alternative protein infrastructure now prepares for a future where AGI can rapidly scale its adoption, and also provides a long-term solution to factory farming even without AGI.
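On the training-data bullet above, here is a crude, illustrative sketch of filtering text for animal-relevant content before contributing it to a training corpus. The keyword list and documents are invented for the example; real dataset curation (e.g. by Open Paws or CaML) is far more sophisticated than keyword matching.

```python
# Hypothetical keyword heuristic for spotting animal-welfare-relevant text.
WELFARE_TERMS = {"sentience", "animal welfare", "suffering", "wild animals"}


def is_animal_aligned(text: str) -> bool:
    """Crude heuristic: keep documents that mention welfare concepts."""
    lowered = text.lower()
    return any(term in lowered for term in WELFARE_TERMS)


docs = [
    "Fish sentience research suggests fish can feel pain.",
    "Quarterly feed-conversion ratios improved by 3%.",
]
corpus = [d for d in docs if is_animal_aligned(d)]
```

In practice a filter like this would only be a first pass; the point is that whatever ends up in the corpus nudges the values a model absorbs during training.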
Conclusion
Overall, this currently seems like a critical, time-sensitive, and massively overlooked element of our animal advocacy strategy. It’s right to acknowledge the uncertainty around AGI timelines and the unpredictability of post-AGI futures, but that’s an argument for investing many more resources into thinking about what this means in practice for our day-to-day work, not for maintaining the status quo until it’s too late. What do others think?
- ^
- ^ This goes beyond the focus on AI and animals, but AI sentience work (e.g. the NYU Center for Mind, Ethics, and Policy) also seems important: getting a better handle on the possibility of sentient AI and mitigating the possibility of its creation until those risks have been addressed.
Thank you for writing this up, Max! The more I dive into AI for Animals, the more it seems to be just about the most important (and drastically underdiscussed) topic within the farmed animal movement, both in terms of risks and opportunities.
My understanding is that current AIs' (professed) values are largely determined by RLHF, not by training data. Therefore it would be more effective to persuade the people in charge of RLHF policies to make them more animal-friendly.
But I have no idea whether RLHF will continue to be relevant as AI gets more powerful, or if RLHF affects AI's actual values rather than merely its professed values.
I agree: it is crucial that the animal advocacy movement learn, research, and prepare a wise and informed strategy for pre-AGI and post-AGI times.
"Act now, rather than wait until it’s too late" -> well put.
Glad to see the good work of CaML and others highlighted. Positively influencing the models as much as possible right now seems vital.
A random one - is AI for inter-species communication emerging as a thing? Is it viable in the short term, are there promising projects working on it with a view to bringing it to the masses via mobile apps etc?
Hi Simon! You can find out more about the latest development at Earth Species Project here and Project CETI here. There have been some recent breakthroughs with detecting and classifying animal bioacoustic signals through LLM-type models.
Thanks Simon! Yes, AI for inter-species communication is underway. The main organisations working on this at the moment are Earth Species Project (who just received a $17 million grant) and Project CETI. So far as I can tell, work is still in its early stages and mainly focussed on gathering and cleaning audiovisual data and getting a better sense for different species' portfolio of sounds, rather than actual communication.
I'm still unsure how good this will be for animals. I wrote a brief post on this for the AI for Animals newsletter if you're interested, but the upshot is that I can see plenty of ways for this technology to be abused (e.g. used for hunting, fishing, exploitation of companion animals for entertainment purposes, co-option by the factory farming industry, etc.). I also think there's a risk that we only use it for communication with a handful of popular species (e.g. dogs, cats, whales, dolphins), and don't consider what this means for other less popular species (like farmed chickens).
The most promising project I've seen so far is the partnership between Project CETI and the More Than Human Life (MOTH) Project at New York University, which is focussed on the ethical implications of interspecies communication. I hope that these kinds of guidelines will end up driving progress on this rather than corporate interests... and that we focus on using AI to understand animals better on their own terms, rather than trying to communicate with them purely for our own curiosity and entertainment.
Fantastic post, very clear. This is a very important topic.
Great piece Max! I feel very similarly.