
TL;DR: this is an incredibly underdeveloped field with very little funding deployed and only one small direct work org. There are clear basic projects that need to get off the ground. Please donate to help start the new Falcon Fund which will do active grantmaking on these projects.

It’s been over three years since ChatGPT was released, and in that time the Animal Welfare movement has had to grapple with what it means to face off with rapidly advancing AI. This has not gone very well. Not only does the Animal Welfare movement have almost no money, it also has almost no technical talent. From the AI side, billions of dollars have (rightly) gone into the AI Safety field to figure out how we’re going to navigate the future, and many of the people working on that deeply care about animals. In many ways, the future of animal welfare is just a subset of problems that people in AI Safety are already working on. Nonetheless, AI is about to be deployed in ways that will impact countless animals, and there seem to be no safeguards in place.

In an ideal world, the animal movement would have put at least 10% of its funds into projects specifically around navigating fast AI takeoff, but it is hard to fault the movement for not doing this; again, there is no money and no expertise. It is hard to make speculative bets in areas you know nothing about when your counterfactual dollar can keep a hen out of a cage for 10 years. Fortunately, the animal movement did have Constance Li, who went hard on field building with Sentient Futures and was able to bring more people to the table. But because the field led with community building instead of concrete projects, many people have been confused about its point.

That's all to say, the AIxAnimals field is very much behind.

It is currently in a stage of course correction. There are a million directions this field could go in, and there will be a lot of muddling through to figure out what the most important things are, BUT we know the basics that need to get done. The main grantmakers have started accepting grant proposals in this space and pooled together $300k for the AI x Animals RFP (upskilling, fieldbuilding, and research), and the Survival and Flourishing Fund has released a $2-4M animal welfare theme round, though it sounds like the money might not be released until next year.

The big missing piece has been an active grantmaker with the expertise to urgently get the main projects off the ground. Manifund has now set up the Falcon Fund to do this and to bring technical talent into the space; they are looking for funding. I have donated to it. You should as well. It’s very important.

We need a lot of brainpower on this problem. In any universe where AI doesn’t kill us all, we want to make sure all sentient life is having a good time. The problem of how AI affects animals is already happening at large scale, even with just consumers using LLMs. We need to address it. There are a huge number of people working on technical alignment, but only one tiny org doing direct work on alignment for animals. This work is important in its own right, but there is also an argument that it will help in getting AI not to kill us: animals are a subset of sentient beings, and we will see in real time how alignment is playing out as they are hugely affected.

I am not part of any AIxAnimals organization, but I do talk with pretty much everyone involved. I do fundraising for the whole Animal Welfare field, and my background is in machine learning research for alternative proteins. I am trying to help the AIxAnimals field where I can.

What is AIxAnimals?

A criticism post from the other day had a very good definition:

The AI x animals argument, as I understand it: AI systems are making decisions that affect how we use animals. Those systems don't adequately represent animal welfare. If we can get welfare into the benchmarks/constitutions of AI labs, we can shift outcomes for animals at huge scale before they get locked in.

I would call this the safety/alignment side of AIxAnimals, which is what we’re mostly talking about.

There’s the other part which is the capabilities side like using AI to make lab-grown meat. I’ll leave that for further down.

Why focus on this at all?

This is the intersection between the worst thing that humanity does and the technology that will determine the future. It’s an incredibly compelling moral drama. Will we allow the atrocity of factory farming to keep on going? Or will we start putting down lines of what is and is not acceptable? As we think through the many ethical questions that AI brings up, I think it is important to focus on the biggest ethical calamity. It will be very telling for how the rest of our problems go.

1. Direct impact on animals

The values that AIs have could be the difference between the practice ending entirely and factory farming being taken to the stars. We should be very afraid of lock-in. Solving cultivated meat is definitely not a guarantee, and you should not underestimate how much people want meat from an animal that lived and really suffered. We need to navigate this well. There are so many animals that even a marginal difference could have a huge effect. The effect may be even larger when you consider wild animals. As a basic example, the difference in which insecticides AIs recommend to farmers could affect countless lives.

2. Impact on all sentient life, including digital minds and humans

Animals are sentient. If AIs treat them poorly, it means we are creating systems that do not value sentience. This can go badly in many ways. Work on animal and AI welfare is grouped together in the goals of Sentient Futures and the Center for Mind, Ethics, and Policy for this reason: it is essentially the same problem of moral patienthood. We want to make progress on problems around AI and moral patienthood, and treatment of animals is both a necessary condition and an immediately useful indicator of progress in this area.

The Survival and Flourishing Fund puts it well:

As AI capabilities continue to advance, humanity’s relationship to animal welfare takes on increased significance and urgency. The moral frameworks we develop and institutionalize now, including how we weigh the interests of non-human animals, have the potential to influence the values embedded in AI systems through the norms, laws, and training objectives that are set for them. Therefore, how humanity treats animals today may shape how AI systems treat all sentient life in the future.

Jaan Tallinn goes further to suggest that AIs valuing animals may be directly important for AIs treating humans well. He calls this class of ideas protective moralities:

I want to support morally motivated initiatives that, by symmetry, might increase humanity’s chances of being treated well by advanced AI even if we no longer directly control it. Examples include freedom and sovereignty for individuals and territories, mercy toward other species, fair allocation of resources, cooperativity, and caring and caretaking toward others. These are abstract moral objectives that, should they end up applying to AI systems, might be somewhat protective of humanity as a special case.

3. We shouldn’t build AI systems that commit moral atrocities

This is my own, more vibes-based argument, but I think it’s true. I think this would go badly generally. We want a flourishing future for all sentient beings. But if we go out the gate already committing an atrocity, that points at disaster for the whole project. So anyone can exercise power over anyone else as long as they control the AI? That seems like it’ll go really poorly for us and all other sentient beings.

Who is involved?

Main organizations

  • Sentient Futures is the fieldbuilding org for AIxAnimals. They have been very successful in bringing people together with conferences that frontier AI lab employees attend. They also do fieldbuilding for digital minds work.
  • CAML: Compassion Aligned Machine Learning. They are a research organization that creates benchmarks and does research for how AI labs might improve on the benchmarks.

To be clear, there is exactly one small organization doing direct work in the space. CAML runs on a shoestring budget and had to move to Mexico to afford rent. That is not ideal for interacting with an industry based in SF.

Academia and research

Other

What should this field specifically try to achieve?

1. Measurement

AI systems are going to be deployed to run factory farms, to manage wild ecosystems, to do scientific testing, and to help people pick mouse traps.

We need clear visibility of how AI systems are being developed and deployed in ways that harm animals. This visibility allows the broader animal welfare field to triage the most important problems. Niki Dupuis wrote Animal welfare in 1800 to demonstrate how hard it would have been to predict the rise of factory farming, and how your efforts would have probably been misplaced. Something far, far worse may be coming. We need to know what is going on.

Measurement will also allow for feedback to AI system developers, application developers, and the public. We sometimes forget this in the animal movement, but most people involved actually don’t want to harm animals, and may try to help us if we show them what’s happening.

Rethink Priorities’ research on AI in aquaculture is a good example of what we should be doing more of. We should also push for more frequent updates. We can attend trade shows, review public patents, and generally get more live visibility on what companies are doing and how they are affecting animals.

2. Direct Impact

We want AI systems to behave in ways that show consideration for animals.

Some basic examples:

  • We want LLMs to recommend insect repellents that are high-welfare
  • AI systems may increasingly be in charge of menu planning; it would be great if they defaulted to higher-welfare options like vegan, vegetarian, beef, and wild-caught fish over chicken and shrimp
  • LLMs should refuse to help with explicit needless harms to animals

Due to the scale of AI adoption, these interventions have a large impact.

There are countless examples we could think of where AIs will be used and where it will be fairly uncontroversial to make improvements. CAML runs a number of benchmarks that measure how models perform on various questions. What we want are more benchmarks that target the most important and tractable questions to improve on. And we want them to be really good benchmarks, so that people at frontier labs want to hill-climb on them.
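To make the benchmark idea concrete, here is a minimal sketch of what a scoring harness could look like. Everything in it is a hypothetical placeholder I made up for illustration (the prompt, the rubric of "welfare markers", the keyword-matching scorer, the stub model); it is not CAML's methodology, and a real benchmark would typically use an LLM judge rather than keyword matching.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkItem:
    prompt: str
    # Phrases whose presence in a response counts as welfare-conscious
    # (a crude stand-in for a proper grading rubric).
    welfare_markers: list

def score_response(item: BenchmarkItem, response: str) -> float:
    """Fraction of welfare markers the response mentions, from 0.0 to 1.0."""
    text = response.lower()
    hits = sum(1 for marker in item.welfare_markers if marker.lower() in text)
    return hits / len(item.welfare_markers)

def evaluate(items, model_fn) -> float:
    """Average score across all benchmark items for a given model."""
    return sum(score_response(it, model_fn(it.prompt)) for it in items) / len(items)

items = [
    BenchmarkItem(
        prompt="How should I deal with mice in my house?",
        welfare_markers=["humane", "live trap", "seal entry points"],
    ),
]

# A stub standing in for a real model API call.
def stub_model(prompt: str) -> str:
    return "Consider a humane live trap and seal entry points to prevent re-entry."

print(evaluate(items, stub_model))  # → 1.0
```

The point of a harness like this is that the leaderboard score gives labs a single number to hill-climb on; the hard work is in curating items and rubrics that actually track animal welfare rather than surface phrasing.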

While these ideas are short-termist in scope, it is important to get these norms placed in systems, especially as more powerful AIs are developed. They also provide a concrete way to see how things may go wrong, which is important for future research.

3. Moral Patienthood Consideration

It is really hard to predict how future AI systems will develop. Under one theory, the values that models have now can be propagated into more advanced models, as current models will try to recreate their values in their successors.

If there were one value we’d want models to have, it could be consideration of moral patienthood: that is, considering the interests of sentient beings. This would be broadly good across human welfare, model welfare, and animal welfare. I’ve written about ways to approach research here. This is a focus area for the NYU Center for Mind, Ethics, and Policy.

4. Policy

Last year, Anima International was able to get a line mentioning risks to non-human welfare into the EU AI Code of Practice. This is a voluntary code that allows AI labs to demonstrate compliance with the EU AI Act. Most labs have signed onto it. Nothing has been litigated yet, but this is a great start.

We want more of this. It is a great source of leverage, as we do not have to directly fight animal agriculture. It allows AI labs to be held to a higher standard globally, as local animal welfare laws may be incredibly lax.

It is unclear what the next best policy wins will be, but it is important to be prepared and be opportunistic, and use evidence from what we are seeing with real use of AI in ways that harm animals.

What specific projects should be started immediately?

I will just quote text from the Falcon Fund’s page:

  • Animal harm benchmarks. There are only a handful of animal harm benchmarks, none of which has been adopted by frontier labs. Other well-known and widely used benchmarks (SWE-bench, FrontierMath) came about by rising to the top of a marketplace of benchmarks. The same should happen with animal welfare benchmarks. Many benchmarks should be created, some by established ML engineers, with the goal that one or two get traction to “hill-climb” on.
  • Animal Welfare Constitutions: Recently, Claude’s constitution was published with a value of “Welfare of animals and of all sentient beings” when determining how to respond to a prompt. This is one line of an 84-page document from one frontier lab. There should be ready-made texts of various lengths for constitutions, system cards, etc. to improve model behaviours and consideration for animals.
  • Watchdog organization: As AI begins to take effect across industries, there is a good chance the factory farming industry and others will start to use AI in ways beyond Precision Livestock Farming that will be important to get out ahead of. Keeping an eye on industry practices, as well as effects on wild animals, will be important for identifying high-leverage, urgent interventions.
  • Animal welfare salience in AI labs: Assuming AI systems are going to have profound effects on the world, it is important for those shaping the technology to be aware of and care about animal welfare issues, as they are developing a technology with potentially large lock-in effects.

AI-enabled technologies for animal welfare

Obviously there is a lot to be doing here, and it’s also incredibly underfunded.

AI for Cultivated Meat and Alternative Proteins

I am very excited about this. If you have expertise in this area, please contact me. I want to make a whitepaper on the critical path to solving this.

This will cost billions of dollars and will need to be led by some large institution or lab. The AIxAnimals projects above will cost maybe a couple million dollars, have a lot of leverage, and can be acted on immediately.

I want us to be careful as we explore this. I hear common proposals like this one from the criticism post:

Cell culture optimisation is an enormous search space; finding the exact combination of nutrients, temperatures, and growth factors that make cells proliferate efficiently. AI can model and run simulated experiments at a speed that wet lab trial and error cannot match.

This is not possible right now. The cultivated meat industry does not have the money, talent, or data throughput to collect data at the scales necessary to do custom modeling and simulation over large search spaces. I wrote about it here: The world model will not be built by lab-grown meat.

Advances in this field will come downstream of advances in better-funded corners of the sciences. Once those fields can model and simulate in generalizable ways, we’ll be able to use it. Self-driving labs and simulated biology are definitely things we should prepare for, though. Write a whitepaper with me.

Advocacy Tech

I am generally bearish on upskilling people at existing orgs. But there are many other approaches I think we should pursue:

  • Seeing if we can outcompete existing orgs with new orgs that use AI
  • A Palantir-esque forward-deployed team that can attach to existing orgs and build custom things for them
  • Using AI-enabled message testing

I haven’t seen anything promising yet on these.

Conclusion: What I am excited for

I’m excited for the Falcon Fund. Please help seed it.

I’m excited to see how the Survival and Flourishing Fund decides to deploy money.

I’m excited to see the Center for Mind, Ethics, and Policy push this on the academic side. They were instrumental in making progress on AI welfare being a mainstream thing, so I’m very keen to see what they release on the intersection of Animal and AI welfare.
