Hide table of contents


 

I’ve been working in animal advocacy for two years and have an amateur interest in AI. All corrections, points of disagreement, and other constructive feedback are very welcome. I’m writing this in a personal capacity, and am not representing the views of my employer. 

Many thanks to everyone who provided feedback and ideas. 

Introduction

In a previous post, I set out some of the positive and negative impacts that AI could have on animals. The present post sets out a few ideas for what an animal-inclusive AI landscape might look like: what efforts would we need to see from different actors in order for AI to be beneficial for animals? This is just a list of high-level suggestions, and I haven’t tried to prioritize them, explore them in detail, or suggest practical ways to bring them about. I also haven’t touched on the (potentially extremely significant) role of alternative proteins in all this. 

We also have a new landing page for people interested in the intersection of AI and animals: www.aiforanimals.org. It’s still fairly basic at this stage but contains links to resources and recent news articles that you might find helpful if you’re interested in this space. Please feel free to provide feedback to help make this a more useful resource. 

Why do we need animal-inclusive AI?

As described in the previous post, future AI advances could further disempower animals and increase the depth and scale of their suffering. However,  AI could also help bring about a radical improvement in human-animal relations and greatly facilitate efforts to improve animals’ wellbeing.

For example, just in the last month, news articles have covered potential AI risks for animals including AI’s role in intensive shrimp farming, the EU-funded ‘RoBUTCHER’ that will help automate the pig meat processing industry, (potentially making intensive animal agriculture more profitable), and the potential of Large Language Models to entrench speciesist biases. On the more positive side, there were also articles covering the potential for AI to radically improve animal health treatmentsupport the expansion of alternative protein companies, reduce human-animal conflictsfacilitate human-animal communication, and provide alternatives to animal testing. These recent stories are just the tip of the iceberg, not only for animals that are being directly exploited – or cared for – by humans, but also for those living in the wild.    

AI safety for animals doesn’t need to come at the expense of AI safety for humans. There are bound to be many actions that both the animal advocacy and AI safety communities can take to reinforce each others’ work, given the considerable overlap in our priorities and worldviews. However, there are also bound to be some complex trade-offs, and we can’t assume that efforts to make AI safe for humans will inevitably also benefit all other species. In fact, it seems feasible that AI could bring about huge improvements to human wellbeing (e.g., by accelerating economic growth and advances in healthcare) while being a disaster for other animals. Targeted efforts are needed to prevent that happening, including (but definitely not limited to):

  • Explicitly mentioning animals in written commitments, both non-binding and binding;
  • Using those commitments as a stepping stone to ensure animal-inclusive applications of AI;
  • Representing animals in decision-making processes;
  • Conducting and publishing more research on AI’s potential impacts on animals; and
  • Building up the ‘AI safety x animal advocacy’ community.

The rest of this post provides some information, examples, and resources around those topics.

Moving towards animal-inclusive AI

Explicitly mention animals in non-binding commitments

Governmental commitments

On November 1, 2023, 29 countries signed the Bletchley Declaration at the AI Safety Summit hosted by the UK. This Declaration begins:

Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible. We welcome the international community’s efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential.

This seems like a significant positive contribution towards global AI safety for humanity. However, safety for non-human animals was (predictably) absent, both from the Declaration and the Summit more broadly. While these kinds of non-binding agreements don’t actually commit signatories to take action, they do coordinate governments around a shared intention and send a public message about the kinds of things we should value as a global community. Even if humans manage to keep AI systems totally under our control, this is unlikely to benefit animals without some meaningful display of political will[1]. And even for governments who don’t actually sign any such treaty, just its existence and international recognition can help solidify global norms around a particular issue. 

review of public AI materials [2]found that 0 out of 77 AI/computer ethics courses, 7 out of over 200 papers and books on AI ethics and alignment, and 1 out of 73 AI ethics principles or statements included a discussion of animals. The AI ethics statement that included animals was that of the Serbian Government, which states in its first paragraph that the development of artificial intelligence systems must be in line with the well-being of humans, animals and the environment, and which includes animals’ capacity for pain, suffering, fear, and stress in its definition of ethics. 

Animals are also mentioned in the Ethics Guidelines for Trustworthy Artificial Intelligence, published in 2019 by the EU’s High-Level Expert Group on AI. The guidelines’ ‘Trustworthy AI Assessment List’ for consideration by AI practitioners includes the question ‘Did you consider the potential impact or safety risk to the environment or to animals?’ While truly animal-inclusive guidelines would need to have animal interests more firmly embedded throughout, rather than in a single bullet point, the fact that these guidelines explicitly reference animals at all could serve as a helpful precedent[3]. However, these guidelines were produced by an advisory group rather than the EU Commission, and there do not seem to be any references to animals in the current draft of the EU’s AI Act.

Company commitments

Most AI companies have themselves published some sort of non-binding mission statement setting out their commitment to ensuring that AI is a net-positive for humanity, such as the OpenAI Charter and Anthropic’s in-house constitution (which includes principles based on the Universal Declaration of Human Rights and various other sources)[4].  Again, animals are notably absent from these mission statements and again, while we should be very wary of corporate lip service, their inclusion seems like a necessary first step to ensuring that the AI systems they develop respect animals’ interests. Ideally, these efforts would be motivated both by a genuine understanding of the ethical grounds for animal-inclusive policies and by a desire to shore up their ethical credentials.

Depending on the nature of the company and its mission statements, the approach might go beyond just including animals at relevant points in the text. For example, companies who take a similar ‘constitutional’ approach to Anthropic could also include declarations concerning animal rights alongside those on human rights. While there are currently no international treaties of this sort, other models exist such as the ​​academic-led Montreal Declaration on Animal Exploitation (not to be confused with the also relevant Montreal Declaration for the Responsible Development of AI). Moves in this direction by the more ethically oriented developers like Anthropic and OpenAI could help to put animal-inclusive AI on the radar of other developers, as well as governments, investors, and the general public. Further down the line, CEOs of leading AI companies might also put their names to statements including AI risks to animals, similar to the Statement on AI Risk signed by the CEOs of OpenAI, Anthropic, Google DeepMind, among others[5].   

Representation of animals in AI decision-making

Another opportunity to represent animals in these kinds of mission statements comes from efforts to ensure democratic decision-making in AI. For example, Anthropic’s second subsequent public constitution was drawn up following a public consultation process with the Collective Intelligence Project (who are piloting structures for collective governance of generative AI and exploring how to ensure AI brings about public goods). Relatedly, OpenAI announced earlier this year that they are launching a program to award grants to experiments in setting up a democratic process (involving a ‘broadly representative group of people’) for deciding what rules AI systems should follow so that AGI can benefit all of humanity. Clearly, these kinds of democratic decision-making processes could actually run counter to animals’ interests, if the people consulted don’t themselves view animals as morally relevant. This highlights the need to ensure that the consulted group is at least somewhat representative of all sentient beings that are likely to be impacted by AI systems. Incorporating the views of vegans and animal activists would be a start, but not sufficient, as this only accounts for the secondhand benefits that humans get from seeing animals well treated, rather than acknowledging the direct benefits to the animals themselves.  

Some have argued that representation of non-human animals in these kinds of decision-making process is inherently anti-democratic, based on the assumption that most humans wouldn’t opt for a framework that extends some degree of moral agency to non-human animals, and that only humans are entitled to having their interests represented in democratic decision-making processes. However, if you think that at least some non-human animals are sentient, that this affords them some degree of moral patienthood, and that non-human animals therefore stand to lose or gain from AI advances in morally relevant ways, it seems logical that we should seek ways to represent them in one way or another. (See below for more discussion around democratic representation of non-human animals.)

Tangible actions by companies

Inclusion of animals in these kinds of materials could help ensure animals are included in companies’ risk assessments. For example, Microsoft’s Responsible AI Impact Assessment Template, which is designed to assess ‘the impact an AI system may have on people, organizations, and society’, could in principle easily be used to list risks to animals’ interests[6].  This would be a logical next step for companies who had already explicitly included animals in their mission statements, and could also be explicitly required by governments, such as in Biden’s Executive Order. Such efforts could also be supported by organizations such as METR (formerly ARC Evals), who currently work on assessing whether AI systems could pose catastrophic risks to civilization (including through partnerships with Anthropic and OpenAI), with future projects likely to include certifying companies that show commitment to ensuring their AI systems are safe before building or deploying them. While unlikely to happen in the short term (given that METR currently appears to be the only major organization doing this work, and it’s still at a very early stage), future similar initiatives could play a key role in holding companies accountable for the potential impacts of their models on non-human animals and highlighting clear examples of best practice. 

A clear tangible action for AI companies who had included animals in their mission statements and risk assessments would be to limit, or eliminate, speciesist biases in their models. Current Large Language Models (LLMs) exhibit speciesist biases, reflecting the speciesist nature of the corpus that they have been trained on. For example, LLM responses currently tend to condemn the slaughter and consumption of animals such as cats and dogs, while condoning the slaughter and consumption of animals such as pigs, chickens, and fish. (For further examples, see ‘Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals’.) AI companies already have a range of tools at their disposal to address these kinds of biases such as by instructing their fine-tuning contractors to choose the outputs that are the least harmful to animals. 

Another tangible way for companies to put their money where their mouth is would be to refuse to work with industries that depend on the exploitation of animals, such as factory farms. However, it could be counterproductive if only the more ethically minded companies opted for this approach, leaving the animal agriculture industry to work with less scrupulous AI developers who are less likely to be concerned about the genuine welfare implications of any on-farm AI systems that they help to create. A more general move that could also help support animal-inclusive efforts would be for companies to re-think their governance structures: rather than set up as standard corporations, which by default must serve shareholders’ interests rather than society at large, they could opt for a more public-benefit-oriented approach (though the recent events at OpenAI, with its ‘capped-profit’ hybrid NGO-corporation model, have cast some doubt on the effectiveness of that approach). 

Seeking these kinds of specific improvements directly from industry could be particularly important given that there is currently so little government regulation of AI, with the most important decisions about AI models being disproportionately made by a small number of people at the leading AI companies. However, governmental action also seems necessary: not just to avoid a scenario where a few influential companies end up disproportionately controlling a technology that will affect the whole world, but also because the average company trying to use AI responsibly is currently confronted by a huge amount of information and considerations that governments should help clarify, whether through binding regulations or soft guidance. 

Explicitly mention animals in binding commitments

While inclusion in non-binding commitments would be a useful start, animals’ interests will also need to be protected in more specific legislation, with sanctions for non-compliance. The relevant regulations are likely to cut across a wide range of domains. Coghlan and Parker outline this in ‘Harm to Nonhuman Animals from AI: a Systematic Account and Framework’: ‘concerns about harmful AI impacts on animals will likely need to be addressed in animal welfare laws and regulations and in a host of other laws and policies, such as traffic regulations, safety mandates for automated vehicles and drones, environmental protection laws, and laws designed to regulate AI more generally.’

This also applies to AI deployment by the intensive animal agriculture industry, such as to monitor animal behavior, predict disease outbreaks, and optimize feeding schedules. These industries need to be held fully accountable for their use of AI and its impacts on animals, which will require stringent top-down government regulations (in addition to other incentives, such as making compliance with certain optional standards a condition for government contracts, as well as non-governmental methods, such as lobbying campaigns by advocacy organizations, and recognition schemes for industry best-practice). The industry is already arguing that all efforts to maximize profitability using AI will near-inevitably maximize welfare. Governments already typically defer to animal agriculture corporations to set the standard when it comes to animal welfare, despite their obvious vested interests in profit over the animals’ wellbeing, and the use of AI will only further exacerbate this problem. AI systems could even begin to define animal welfare standards, rather than just assess compliance with them. While this could theoretically result in welfare standards that are driven more by data than by humans’ vested interests, it could also produce seemingly objective, evidence-driven measurements that in reality only affect the animals’ health and productivity rather than their actual wellbeing. 

The 2023 EU Parliament report ‘Artificial intelligence in the agri-food sector’ provides a sense of what animal agriculture AI policy might look like in practice, setting out specific recommendations for AI’s integration in the EU’s Common Agricultural Policy (CAP):

  • Data-driven technology should be integrated into the official animal welfare standards, reducing the use of the subjective, one-off assessments currently being carried out.
  • Labels on animal-derived products should include objective data derived from on-farm sensors.
  • ISO (International Standardization Organization) standards on the design of animal monitoring technologies should be adopted by the industry.

Likewise, it will be necessary for the outcomes of any AI-assisted assessments to be available to objective regulatory bodies, not just companies’ leadership, to ensure that these assessments lead to meaningful changes.  (This assumes that regulators will have sufficient funding, capacity, authority, and understanding of the field to review the assessment outcomes, which is far from certain given governments’ current deprioritization of resources towards animal welfare policy and enforcement.) For example, AI assessments might highlight such low levels of welfare, or such high rates of mishandling by farmers, that animal agriculture companies decide to simply ignore the outcomes or drastically dial down the models’ sensitivity so that they only flag the most extreme instances of cruelty or suffering.

As AI advances become increasingly relevant across a whole swathe of industries, it will be impossible to introduce new legislation to ensure the responsible use of AI in every single use case. Cross-cutting, mandatory principles will be necessary for guiding the development of animal-inclusive AI. For example, Biden’s recent ‘Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence’ will ‘require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government’ and ‘develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy’. It specifically mentions the need to ensure that AI is safe for various groups, such as consumers, patients, students, workers, entrepreneurs, and small businesses. Adding animals to such lists of stakeholders would be a useful step, and though this is likely to be a hard sell, there are at least some promising precedents to build on. For example, the UK’s recent Sentience Act requires policy-makers to consider animals’ interests when making or changing laws (at least in principle – the UK’s current Government doesn’t seem to be taking that commitment very seriously).

Such efforts require some degree of non-human animal representation in key decision-making bodies, such as the USA’s AI Institute, the UK’s Frontier AI Task Force, and the UN’s new High-level Advisory Body on Artificial Intelligence, set up to ‘offer diverse perspectives and options on how AI can be governed for the common good’. There’s precedent for this kind of representation; for example, at the national level, the German government recently created an office for animal welfare, with a permanent position dedicated to the representation of animals’ interests. This success was due in part to the efforts of Animal Society and their international campaign network Representing AnimalsAnimals in the Room is another example of an organization seeking to ensure that animals are properly represented in key decision-making processes. These initiatives build on the political theory and discussion surrounding the democratic representation of hard-to-represent human groups, such as children and people with disabilities.

Extending equal consideration to all sentient beings impacted by AI systems won’t always be straightforward, or even possible. For example, even if self-driving cars can be made to detect smaller animals to avoid hitting them, efforts to achieve this might make the technology physically unwieldy or difficult to sell. Drivers might also be unwilling to make the necessary trade-offs that would allow this; for example, to travel in a car that avoids entire roads that cross the migration routes of crabs or other animals. More fundamentally, humans appointed to represent animals’ interests will inevitably have an imperfect view of what animals actually want and need, as opposed to just what humans assume they want and need. But these are still important discussions to have, even if they will often be complex and involve some difficult trade-offs. 

At the UN level specifically, such efforts to ensure animal representation are also made harder by the fact that animal welfare is not currently included in the UN’s Sustainable Development Goals (SDGs). Promisingly, the 2019 UN Global Sustainable Development Report did identify animal welfare as a key missing issue in the 2030 Agenda for Sustainable Development and the SDGs, and various advocacy groups are campaigning for an additional SDG focussed on eliminating animal exploitation. If successful, this would provide a stronger basis for future efforts to ensure animal representation in UN governance.

Publish more research on AI’s potential impacts on animals

While there is some excellent research exploring the significance of AI for animals (see e.g. the research papers listed under ‘Key resources’ on the AI for Animals website), this is still a relatively unexplored area. Additional research would help to identify the key leverage points for decreasing the risks for animals from AI (such as pragmatic ways to include animals in AI value learning and AI democratization efforts), communicate the nature of these risks more broadly, establish this as a field worthy of consideration, and build relationships with key stakeholders (e.g. AI safety researchers) outside of the animal advocacy movement. This could include a greater focus on animal interests and speciesist biases within the field of fairness research, which focuses specifically on techniques to address algorithmic bias. This will be increasingly important as AI ethics experts are invited to join bodies such as the UN’s Advisory Body and the UK’s Frontier AI Task Force. 

To help achieve this, non-profit organizations, government agencies, universities, and other academic institutions could create research fellowships designed both to support researchers in animal advocacy-related fields to move into the AI safety space, and to support AI safety researchers to move into animal advocacy-related fields, to help increase the number of researchers straddling the two areas. Similar fellowships already exist, such as the AI for Good Foundation’s Summer Fellowship Program, which offers participants the chance ‘gain real world experience at the intersection of Sustainability and Artificial Intelligence & Machine Learning’. The interests of non-human animals could also be included in AI-relevant education curricula, such as the AI Safety Fundamentals course. Likewise, once the field is more advanced, researchers could create courses specifically focused on the AI safety risks to non-human animals. 

Awareness-raising efforts by researchers could also seek to bridge the two fields. For example, in October 2022 over 500 academics published the Montreal Declaration on Animal Exploitation, which ‘condemn[s] the practices that involve treating animals as objects or commodities’. Similar future efforts could also focus on the need to ensure that all efforts are taken to ensure that AI does not entail significant animal suffering. Researchers could also continue to seek publication of their work in mainstream media; some examples of outlets that are already doing this well include Sentient Media (such as recent articles on speciesist biases in AI and AI-assisted intensive farming) and Vox (such as their recent article on the parallels between AI safety risks and humans’ own treatment of non-human animals), which will hopefully pave the way for other larger outlets to follow suit.

Build up the ‘AI safety x animal advocacy’ community

Existing animal advocacy organizations could expand their scope to include AI-specific initiatives. This could include campaigns work (such as targeted lobbying of the animal agriculture industry to ensure that they use AI in a responsible way, or engaging key decision-makers to ensure representation of animals in AI legislation) and awareness-raising work such as organizing seminars, conferences, and workshops, and engaging the public through videos, podcasts, and infographics. For organizations unfamiliar with AI, understanding how they could use AI tools to enhance the effectiveness of the advocacy campaigns could be a useful starting point. Some useful resources in this space include the organization NFPS.AI (which provides not-for-profit organizations with direct access to the latest AI knowledge and tools), VEG3 (an AI-powered tool specifically designed to help animal rights advocates be more impactful in their work), and recent webinars on Harnessing AI for Animal Advocacy Campaigns and AI & ChatGPT in the Movement by Stray Dog InstituteVegan Hacktivists, and Plant Based News

Advocates could also create new non-profit organizations (like the recently launched Open Paws) and alliances, potentially following a similar model to the AI for the Planet Alliance or the AI for Good Foundation, or forming partnerships with these organizations. Finding common points of interest would be a useful start here; for example, looking at AI for the Planet’s entrants to their 2022 ‘Call for Solutions’, one finalist (Kayrros) was focussed on measuring wildfire risk, which is also a major danger for animals. (Of course, there are also clear points of divergence: another of their finalists (Aquaconnect) is working to scale up the aquaculture industry.)

Some activities carried out by AI for the Planet Alliance that could also be relevant for the ‘AI for Animals’ space include: putting out a public call for proposals for how to use AI for the good of the planet (with prizes for the best ideas); publishing a report on AI climate solutions; and organizing conferences, including at UN events. (Future AI and animals conferences could also build on the recent Princeton conference on ‘Artificial Intelligence, Conscious Machines, and Animals: Broadening AI Ethics’). The AI for Good Foundation, meanwhile, seeks to ‘identify, prototype, and scale solutions that engender positive social change’. In addition to the fellowship program mentioned above, their projects include an auditing initiative to ensure responsible use of AI by businesses, and an SDG catalyst initiative to channel private investment into organizations using AI to help achieve the UN’s Sustainable Development Goals (another example of why it would be useful to have an SDG focussed on animal exploitation). Giving Tuesday also has a Generosity AI Working Group, which collates research questions, datasets, tools, and other resources relating to the adoption of AI in the social sector. All of these initiatives would equally lend themselves to the AI/animals sphere. 

Funding for the creation of new organizations and partnerships could come from companies, grantmakers, donors, and governments who decide to invest in companies using AI for public goods. For example, between 2017 and 2022 Microsoft’s AI for Earth grants supported over 950 conservation projects using AI, and the UK government recently committed £54m to develop secure AI that can help solve major challenges.

Collaboration and knowledge-sharing will also be key to finding solutions and overcoming setbacks[7]. AI for the Planet’s own model is one example of such a network, bringing together an alliance of leaders from politics, academia, business, and a variety of other domains. The AI For Good Foundation offers several further examples, such as their Council for Good (which likewise brings together a diverse array of leaders for regular idea-sharing sessions), their SDG Data Catalog (which aims to connect researchers and students with datasets relevant to the UN’s Sustainable Development Goals), and their AI + SDG Launchpad (which enables educational institutions to create curricula bridging the gap between ‘Data Enabled Sciences’ and the UN’s 2030 Sustainable Development Agenda). 

To support these efforts, animal advocates interested in AI can focus on building their knowledge, both from the strategic governance angle and from the more technical angle. For general AI knowledge, Blue Dot Impact offers an AI Safety Fundamentals course (with the option to work through the curricula for their Governance course and Alignment course at your own pace, rather than sign up for the full course).

For specific ‘AI x animals’ knowledge, there are useful research papers and other resources listed at the AI for Animals website, and the Impactful Animal Advocacy AI and Animals Wiki has a more comprehensive list. Previous EA Forum articles cover considerations of non-human sentient beings in AI value learning, the existential risk posed by speciesist biases in AIwhat animal advocacy means in the age of AI, the potential value of AGI x animal welfare outreach, and the urgent need to steer AI to care for animals, and the latest Open Philanthropy newsletter explores what AI could mean for animals. There are also dedicated AI channels on the Impactful Animal Advocacy Slack and Longtermism and Animals Discord

Lastly, it might be helpful just to paint a more concrete picture of what ‘animal-inclusive AI’ actually looks like. For example, Future of Life Institute’s Worldbuilding Competition sought to encourage positive, plausible visions of AI futures, with one shortlisted entry specifically focussing on How AI could help us to communicate with animals. This kind of world-building might help more people understand AI’s tangible implications for animals.

Conclusion

The deployment of advanced AI brings huge risks for animals. However, in a world where trillions of animals are already suffering and being exploited, not deploying advanced AI in a way that will genuinely improve their lives also risks foregoing enormous benefits. 

Animal-inclusive AI will require an enormous concerted effort on multiple fronts. Most of that effort will probably be exerted by people working outside of the AI/animals intersection: by animal advocates whose work helps to ensure that the moral economy in which AI systems emerge is a compassionate one, and by AI safety experts whose work to make AI safe for humanity are likely to have some massive spillover benefits for non-human animals. 

However, it will also almost certainly require a thriving community of people seeking to bridge those two fields within politics, academia, industry, the nonprofit sector, journalism, and a broad swathe of other domains. If you’d like to share any ideas about what that might look like and how you could contribute, please feel free to comment below. Thanks!

  1. ^

    AI’s potentially huge implications for animals’ rights and wellbeing could also be recognized in any future international animal-focused treaties. Though there is currently no such treaty, several have been proposed, such as the Convention on Animal Protection, the Universal Declaration on Animal Welfare, and the UN Convention on Animal Health and Protection (UNCAHP). AI’s significance for animal welfare could also be reflected in the materials produced by the World Organization for Animal Health (WOAH) (whose standards serve as a de facto global benchmark, with delegates from 183 nations), as well as materials pertaining to the ‘One Health’ approach, which seeks to simultaneously optimize the health of people, animals and the environment and appears to be gaining significant global traction. The related ‘One Welfare’ approach seeks to highlight the interconnections between animal welfare, human wellbeing and the environment, but currently seems less developed and less widely recognized.

  2. ^

    See 1:00 into the video.

  3. ^

    The final guidelines removed a sentence from the draft guidelines that read ‘Avoiding harm may also be viewed in terms of harm to the environment and animals; thus, the development of environmentally friendly AI may be considered part of the principle of avoiding harm.’ While this language runs into the same problem as the final wording in that it conflates animals’ interests with the environment, it’s arguably still better than the final wording.

  4. ^

    Other examples include Google DeepMind (whose mission is to ‘Build AI responsibly to benefit humanity), Microsoft (whose ‘Responsible AI’ principles are ‘committed to the advancement of AI driven by ethical principles that put people first’), or Meta (whose own ‘Responsible AI’ principles are ‘propelled by [their] mission to help ensure that AI at Meta benefits people and society’. Meta recently disbanded its Responsible AI team, which is a possible indicator that mission statements alone don’t necessarily mean much. 

  5. ^

    The Statement on AI Risk is just a one-sentence statement: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’

  6. ^

    Some relevant sections include: 'Stakeholders, potential benefits, and potential harms'; 'Fairness considerations' ([...] identify any demographic groups, including marginalized groups, that may require fairness considerations [and] prioritize these groups for fairness consideration and explain how the fairness consideration applies); 'Minimization of stereotyping, demeaning, and erasing outputs'; and 'Sensitive uses, [including] consequential impact on legal position or life opportunities, [...] risk of physical or psychological injury, [and] threat to human rights'.

    Animals could be explicitly recognized under these sections, e.g., by clarifying that they should be subject to fairness considerations, and by including a ‘Threat to animals’ rights’ bullet under the ‘Sensitive Uses’ section.

  7. ^

    AI for the Planet’s report ‘How AI can be a powerful tool in the fight against climate change’ sums this up concisely: 

    'Practitioners need to share knowledge about best practices and promising uses of AI if they are to prepare solutions for wide-scale government and corporate deployment. Shared learning, such as through use-case libraries, can help promising solutions achieve their full potential and avoid pitfalls as they transition from the research, pilot, and proof-of-concept phases to implementation. Best-practice sharing is critical to adoption across both the Global North and Global South.'


     

Comments9
Sorted by Click to highlight new comments since: Today at 5:30 AM

I’m curating this post. It is an impressively detailed resource for considering the impact of present and future AI systems on animals, and the steps governments and corporations could take to lessen that impact. It’s full of good ideas, expressed with the right amount of epistemic modesty. 

The effect of AI on animals, today and in the future, is an important and neglected topic. I’m glad that this post was written, and I hope more people seriously consider these risks, and whether they can do something about them. 

If you’d like to find out more, I’d recommend this paper by Peter Singer and Yip Fai Tse.

Great comprehensive post, Max!

Most AI companies have themselves published some sort of non-binding mission statement setting out their commitment to ensuring that AI is a net-positive for humanity, such as the OpenAI Charter and Anthropic’s in-house constitution (which includes principles based on the Universal Declaration of Human Rights and various other sources)[4].  Again, animals are notably absent from these mission statements and again, while we should be very wary of corporate lip service, their inclusion seems like a necessary first step to ensuring that the AI systems they develop respect animals’ interests.

As a side note, there is also no mention of "digital minds", "digital sentience", "artificial minds" nor "artificial sentience" in OpenAI's Charter, Claude's Constitution or Google DeepMind's mission. I guess quite some people working on these labs think digital minds will become a reality in the next few decades, and some even say they would work on it if timelines were not so short, so it is a little surprising the labs are not more vocal about it.

Moves in this direction by the more ethically oriented developers like Anthropic and OpenAI

Nitpick. I thought Anthropic and Google Deepmind were the 2 most ethical developers among them and OpenAI.

Promisingly, the 2019 UN Global Sustainable Development Report did identify animal welfare as a key missing issue in the 2030 Agenda for Sustainable Development and the SDGs, and various advocacy groups are campaigning for an additional SDG focussed on eliminating animal exploitation.

SDG 18 (zero animal exploitation) looks like a great initiative. I had no idea it existed; thanks for sharing!

Cheers Vasco! Glad you found it helpful and thanks for the useful points :-)

Thanks for this post, Max!

 

tl;dr: Lemme know if you have ideas for approaches to animal-inclusive AI that would also rank among the most promising ways to reduce human extinction risk from AI. I think they probably don't be exist, but it'd be wicked cool if they did.

 

Most EAs working on AI safety are primarily interested in reducing the risk of human extinction. I agree that this is of astronomical importance, especially when you consider all the wild animal suffering that would continue in our absence.

Many things that would move us toward animal-inclusive AI would also help move us away from extinction risks. But I suspect the majority of those things, while helpful, would not be among the most helpful ways to reduce extinction risk. In other words, we should be wary suspicious convergence; "what is best for one thing is usually not the best for something else."

I'm working on plans to do more to support a rigorous search for approaches to animal-inclusive AI (or approaches to advancing wild animal welfare science broadly) that would also rank among the most promising ways to reduce human extinction risk from AI. In the meantime, I'd encourage anyone interested in the broader subject to consider this narrower subset, and to reach out to me if they're excited to work on it more (cameronms@wildanimalinitiative.org).

To be clear, I also think animal-inclusive AI is worth pursuing for its own sake (i.e., working on animal-inclusive AI seems likely to be among the most impactful things you can do to make the world a better place in the set of scenarios where humans don't go extinct), and I'd be excited to see work on most of the approaches discussed above. In those cases -- especially when building coalitions with people who might have different priorities -- I think it's useful to be transparent about the fact that what we're doing is important, but we don't think it's one of the most promising ways to avoid human extinction.

Thanks Cameron! That's a helpful point that I didn't really touch on in this post. Great that you're doing work in that space - I'm really interested to hear more about it so will get in touch.

I'm working on plans to do more to support a rigorous search for approaches to animal-inclusive AI (or approaches to advancing wild animal welfare science broadly) that would also rank among the most promising ways to reduce human extinction risk from AI.

Interesting! I am interesting in discussing this idea further with you.

Could it be the case that another way to think about it is to search within the best approaches to reduce human x-risk, for a subset that is aslo animal inclusive? For example, if working on AI alignment is one of the best ways to reduce human x-risk, then we try to look for the subset within these alignment strategies that are also animal friendly?

Executive summary: Bringing about animal-inclusive AI will require concerted efforts across government, academia, industry, and civil society to represent animals' interests, conduct research, raise awareness, and build partnerships.

Key points:

  1. Governments should include animals explicitly in AI commitments and regulations to assess risks and represent animals' interests.
  2. Companies should include animals in mission statements, governance, risk assessments, and address biases in models.
  3. More research is needed on AI's impacts on animals and bridging AI safety and animal advocacy fields.
  4. Advocacy groups should expand scope to AI risks, build partnerships, and increase public understanding.
  5. A thriving community is needed across sectors to ensure AI benefits rather than harms animals.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Your second section outlines the importance of not putting political principles like democracy above moral ones like the inherent value of sentient life / negative utility of suffering. Democracy is a means to an end (that being a fair society where living beings can flourish). It shouldn't be an end in and of itself. Where democracy and animal rights/welfare conflict, I will always choose animal rights/welfare. The same applies to human rights as well. 

Thanks for highlighting that point Hayven - I agree, and also hope we get to the point where animals are sufficiently well represented in democratic decision-making that those kinds of conflicts are massively reduced.

Curated and popular this week
Relevant opportunities