I'm worried that animal welfare advocates might neglect the importance of AI in determining what happens to animals.  More specifically, I'm worried that the value 

  • {animal welfare matters}

and the true-according-to-me belief 

  • {AI is going to transform and determine most of what happens on Earth}

... don't exist in the same person often enough, such that opportunities to steer AI technology toward applications that care for animals could go under-served.

Of course, we could hope that AI alignment solutions, if effective in protecting human wellbeing, would serve animals as well.  But I'm not so sure, and I'd like to see more efforts to change the memetic landscape among present-day humans to better recognize the sentience and moral importance of animal life, especially wild animals that we might not by default think of humanity as "responsible for".  The only concrete example I know of is the following, which seems to have had little support from or connection to EA:

  • https://www.projectceti.org/ - a project using ML to translate the language of sperm whales in their natural habitat.  As far as I know, they are not fully funded, could probably use support from EAs, and I think the work they're doing is in-principle feasible from a technical perspective.

Ideally, I'd like to see a lot more support for projects like the above, which increase AI <> animal welfare bandwidth over the next 3-5 years, before more break-neck progress in AI makes it even harder to influence people and steer where technology and its applications are going.

So!  If you care about animals, and are starting to get more interested in the importance of AI, please consider joining, supporting, or starting projects that steer AI progress toward caring more about animals.  I'm sad to say my day job is not addressing this problem nearly as well or as quickly as I'd like (although we will somewhat), so I wanted to issue a bit of a cry for help — or at least, a cry for "someone should do something here".

Whatever you decide, good luck, and thanks for reading!


Good points! This is exactly the sort of work we do at Sentience Institute on moral circle expansion (mostly for farmed animals from 2016 to 2020, but since late 2020, most of our work has been directly on AI—and of course the intersections), and it has been my priority since 2014. Also, Peter Singer and Yip Fai Tse are working on "AI Ethics: The Case for Including Animals"; there are a number of EA Forum posts on nonhumans and the long-term future; and the harms of AI and "smart farming" for farmed animals are a common topic, as in this recent article that I was quoted in. My sense from talking to many people in this area is that there is substantial room for more funding; we've gotten some generous support from EA megafunders and individuals, but we also consistently get dozens of highly qualified applicants whom we have to reject every hiring round, including people with good ideas for new projects.

This is exactly the sort of work we do at Sentience Institute on moral circle expansion (mostly for farmed animals from 2016 to 2020, but since late 2020, most of our work has been directly on AI—and of course the intersections)

Sentience Institute has, in its research agenda, research projects about digital sentients (which presumably include certain possible forms of AI) as moral patients, but (please correct me if I'm wrong) in the "In-progress research projects" section there doesn't seem to be anything substantial about the impact of AI (especially transformative AI) on animals?

That's right that we don't have any ongoing projects exclusively on the impact of AI on nonhuman biological animals, though much of our research includes that, especially the outer alignment idea of ensuring an AGI or superintelligence accounts for the interests of all sentient beings, including wild and domestic nonhuman biological animals. We also have several empirical projects where we collect data on both moral concern for animals and for AI, such as on perspective-taking, predictors of moral concern, and our recently conducted US nationally representative survey on Artificial Intelligence, Morality, and Sentience (AIMS).

For various reasons discussed in those nonhumans and the long-term future posts and in essays like "Advantages of Artificial Intelligences, Uploads, and Digital Minds" (Sotala 2012), biological nonhuman animals seem less likely to exist in very large numbers in the long-term future than animal-like digital minds. That doesn't mean we shouldn't work on the impact of AI on those biological nonhuman animals, but it has made us prioritize laying groundwork on the nature of moral concern and the possibility space of future sentience. I can say that we have a lot of researcher applicants who propose agendas focused more directly on AI and biological nonhuman animals, and we're in principle very open to it. There are far more promising research projects in this space than we can fund at the moment. However, I don't think Sentience Institute's comparative advantage is working directly on research projects like CETI or Interspecies Internet that wade through the details of animal ethology or neuroscience using machine learning, though I'd love to see a blog-depth analysis of the short-term and long-term potential impacts of such projects, especially if there are more targeted interventions (e.g., translating farmed animal vocalizations) that could be high-leverage for EA.

Thanks for the explanation; I do support what SI is doing (researching problems around digital sentience as moral patients, which seems to be an important and neglected area), and your reasoning makes sense!

Yeah. Some people have told me (probably as a joke) that the best way to improve wild animal welfare is to invent AGI and let the AGI figure it out. But that feels very handwavey; the missing step is, how do we align the AGI to care about wild animals?

I've recently become interested in the intersection of ML and animal welfare, so these projects seem right up my alley.

I am so glad to see people interested in this topic! What do you think of my ideas on AI for animals written here?

And I don't think we have to wait for full AGI to do something for wild animals with AI. For example, it seems to me that with image recognition and autopilot, an AI drone can identify wild animals that have absolutely no chance of surviving (fatally injured, about to be engulfed by forest fire), and then euthanize them to shorten their suffering.
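
To make the drone example a bit more concrete, here is a minimal, hedged sketch of the detection half of such a pipeline. It assumes drone frames arrive as image files and leans on an off-the-shelf COCO-trained detector from torchvision; the class-ID set, the threshold, and the idea of a separate injury-triage step are my own illustrative assumptions, not a description of any existing system. The genuinely hard step (judging that an animal has "no chance of surviving") is exactly the part this sketch leaves out.

```python
# Illustrative sketch only: flag animals in a single drone frame with a
# pretrained COCO detector. Thresholds, class IDs, and the notion of a
# downstream "injury triage" model are assumptions for illustration.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO category IDs 16-25 correspond to animals (bird .. giraffe) in torchvision's mapping.
ANIMAL_CLASS_IDS = set(range(16, 26))

# Requires torchvision >= 0.13 for the `weights` argument.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_animals(frame_path: str, score_threshold: float = 0.8) -> list[dict]:
    """Return high-confidence animal detections for one drone frame."""
    image = to_tensor(Image.open(frame_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]
    detections = []
    for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
        if float(score) >= score_threshold and int(label) in ANIMAL_CLASS_IDS:
            detections.append({"box": box.tolist(), "label": int(label), "score": float(score)})
    # Deciding whether a detected animal is actually beyond saving would need a
    # separate, carefully validated triage model -- that is the hard, open part.
    return detections
```

Nothing in this kind of pipeline requires anything close to AGI, which I take to be the point.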

For onlookers, it seems like Holly Elmore is a thought leader in this area and touched on the intersection of wild animal welfare (WAW) and AI in this post.

There's a minor comment here poking at "TAI (transformative AI)" and wild animal welfare.

Some people have told me (probably as a joke) that the best way to improve wild animal welfare is to invent AGI and let the AGI figure it out.

I believe this, not as a joke. But I do agree with you that this requires solving the broader alignment problem and also ensuring that the AGI cares about all sentient beings.

Hi Andrew, I am glad that you raised this. I agree that animal welfare matters and AI will likely decide most of what happens in the future. I also agree that this is overlooked, both by AI people and animal welfare people. One very important aspect is how AI will transform the factory farming industry, which might change the effectiveness of a lot of interventions farmed animal advocates are using.

I have been researching the ethics of AI concerning nonhuman animals over the last year, supervised by Peter Singer. Along with two other authors, we wrote a paper on speciesist biases in AI. But our scope is not just algorithmic biases; it's basically anything we identify as affecting a large number of nonhuman animals (AI to decipher animal language is one topic of many, and I am glad to report that there are actually at least two more projects on this going on). AI will affect the lives of farmed animals and wild animals; you can take a peek into our research in this talk, or there will be a paper coming out in 1-2 months.

Coincidentally (or is it?), just before you posted this, there was a post called megaprojects for animals, and my comment on it included AI for animals.

Fai, your link to the paper didn't work for me. Is this the correct link?

 

https://arxiv.org/ftp/arxiv/papers/2202/2202.10848.pdf

Ah yes! I think copy and paste probably didn't work at that time, or my brain! I fixed it.

Hmm for some reason I feel like this will get me downvoted, but: I am worried that an AI with "improve animal welfare" built into its reward function is going to behave a lot less predictably with respect to human welfare. (This does not constitute a recommendation for how to resolve that tradeoff.)

I think this is exactly correct and I don't think you should be downvoted?

 

Uh... this comment is a quick attempt to answer this concern most directly.

 

Basically, longtermism and AI safety have the ultimate goal of improving the value of the far future, which includes all moral patients.

  • So in a true, deep sense, animal welfare must already be included; instructions that sound like "improve animal welfare" should already be accounted for in "AI alignment".
  • Now, despite the above, most current visions/discussions of the far future that maximize welfare ("make the future good") focus on people. This focus on people seems reasonable for various reasons.
  • If you wanted to interrogate these reasons and figure out what kinds of people, entities, or animals are involved, this seems to involve looking at versions of "Utopia".
    • However, getting a strong vision of Utopia seems not super duper promising at the immediate moment.
      • It's not promising because of presentation reasons and the lower EV. Trying to have people sit around and sketch out Utopia is hard to do, and maybe we should just get everyone on board for AI safety.
      • This person went to a conference and wrote a giant paper (I'm not joking, it's 72 pages long) to try to understand how to present this.
      • Because it is relevant (for example, to this very concern and many other issues in various ways), someone I know briefly tried to poke at work on "utopia" (they spent like a weekend on it).
        • To get a sense of this work, the modal task in this person's "research" was a one-on-one discussion (with a person from outside EA but senior and OK with futurism). The discussions basically went like:
          "Ok, exploring the vision of the future is good. But let's never, ever use the word Utopia, that's GG. Also, I have no idea how to start.".

So if you read the above (or just the first or second layer of bullet points in this comment), this raises questions.

  • Like, does this mean animal welfare boils down to AI safety? 
  • What is the point of this post, really? What do we do?

So yeeeahhh... it's pretty hard to know where to begin; there are a lot of considerations here.

So this comment is answering: "What should we do about this issue of AI and animal welfare?"

Basically, the thoughts in this comment are necessarily meta... apologies to your eyeballs.

 

Let's treat this like field building

So to answer this question, it's sort of good to treat this problem as early field building (even if it doesn't shake out into a field or cause area).

It seems beneficial to have some knowledge of wild animal welfare, farmed animal welfare, and AI safety:

  • And each major camp within them, e.g. the short-timeline people, slow-takeoff people, and the S-risk interests.
  • You should take a glance at the non-EA "AI ethics" people (their theory of change or worldview already appears in this post and comments).
  • It seems possible you might benefit from some domain knowledge of biology, animal welfare science, applied math, and machine learning at various points.

So I'm saying: knowledge of some of the key literature, worldviews, and subcultures.

Ideally, you would have a sense of how these orgs and people fit together and also how they might change or grow in the next few years, maybe.

 

Thoughts about why this context matters

So you might want to know the above because this would be a new field or new work, and actual implementation matters. Issues like adverse selection, seating, and path dependency are important.

Concrete (?) examples of considerations:

 

  • You want to figure out how viable and useful each instantiation of these existing areas/beliefs and their people/institutions/interventions is, if you are trying to seat a new project or field among them.
    • These existing fields are overlapping and you could start in any number of them. For example, S-risk matters a lot for "very short timelines".
    • Where are the good people? What leaders and cultures do they currently have?

 

  • You can start in many different places and worldviews depending on the credence you have. You want to communicate with others well, even if they have different beliefs.
    • Like, seating the field firmly with one group of people (AI ethics) is not going to play super duper well with the EA or near-EA safety people.

 

  • Domain knowledge in fields like linguistics and machine learning is helpful for parsing what is going on in these interventions:

    • For example, I'm pretty uncertain or confused about the "communicate with animals" subthreads in this post:
      • I took a look at the whale one. What's going to be relevant there is linguistics and theory of mind, rather than "AI".
        • A pivotal role for "machine learning" in some sort of unsupervised learning... is plausible, I guess? But it doesn't seem likely to be the heart of the problem in communicating with whales. So that substantively undercuts the "AI element" here.
      • Most moral patients (IMO >99.99%) aren't going to be easy to communicate with, so I'm uncertain what the theory of change is here.
      • Maybe there's instrumental value from nerdsniping? I think I'm OK nerdsniping people into animal welfare, but that's complicated:
        • Some of the very top animal welfare leaders in EA that we look up to are basically Jedi Knights or Wallfacers, and that is a different pattern than you see from people who are casually excited by ML or AI.

          (I admit, it does seem sort of awesome to communicate with Octopuses, like how do you even feel bro, you're awesome?)
      • I'm concerned with adverse selection and loss of focus
      • Charismatic megafauna are a focus already (literally too much, according to even some non-EAs), and it's unclear how more attention on those animals will most usefully help animal welfare.
        • The animals that are being most grievously abused aren't going to communicate with the same kind of expressiveness as whales.

 

I guess there's still more, like divergences between animal welfare and other cause areas in some scenarios. I guess that's what I poked at here in this comment.

 

"So what? Why did read this comment? Give me some takeaways"

  • Probably talking to the S-risk people is a good start; like, message the people or orgs working on S-risk or sentience who commented here.
  • Before jumping into using AI, I would get a little sense of the technical domains or theories of change involved, or speak to some experts about the applications being proposed.
    • I would try to stay concrete, which helps avoid the "very online" stuff
  • If you really had to get a project going or funded right away, I think audio and computer vision uses in factory farms are useful, have many applications (identifying suffering), and probably have the right mix of impact and shiny object (a rough sketch of the audio idea is below).
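
To give a flavor of the audio idea, here is a hedged sketch under strong assumptions: that you already have short, labeled barn recordings, and that generic MFCC features plus a linear classifier can separate distress calls from normal vocalizations. The file names and label scheme are hypothetical; this is a starting-point illustration, not a description of any deployed system.

```python
# Hedged sketch: train a simple "distress call vs. normal" audio classifier
# from labeled barn recordings. The labeled data, the label scheme, and the
# adequacy of MFCC features are all assumptions made for illustration.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(wav_path: str, sr: int = 22050) -> np.ndarray:
    """Mean MFCC vector for one clip -- a deliberately simple summary feature."""
    audio, _ = librosa.load(wav_path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def train_distress_classifier(labeled_clips: list[tuple[str, int]]) -> LogisticRegression:
    """labeled_clips: (path, 1 for 'distress', 0 for 'normal') pairs -- hypothetical data."""
    X = np.stack([clip_features(path) for path, _ in labeled_clips])
    y = np.array([label for _, label in labeled_clips])
    return LogisticRegression(max_iter=1000).fit(X, y)

# Usage (hypothetical file names):
# clf = train_distress_classifier([("barn_001.wav", 1), ("barn_002.wav", 0)])
# print(clf.predict(clip_features("barn_003.wav").reshape(1, -1)))
```

Whether the labels themselves track actual suffering is the real bottleneck, which is why the welfare-science knowledge mentioned above matters.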

It is a minor point, but I would like to push back on some misconceptions involving "panda conservation", mostly by paraphrasing the relevant chapter from Lucy Cooke's The Truth About Animals.

Contrary to headlines about libidoless pandas driving themselves extinct, the main reason pandas are going extinct is the main reason animal species in general are going extinct: habitat loss as humans take and fracture their land.

Giant pandas rely almost entirely on bamboo for food. Bamboo engages in synchronous flowering with the other bamboo plants in the area and then seeds and dies off. Because of this, it is important that pandas have a wide range of space they can travel across, not only to mate with other pandas but also to access new bamboo forests when the ones they live in die.

These forests, even in "protected" areas, are threatened by mining, roads, and agriculture.

Meanwhile, giant pandas have become an international symbol of China. China sends pandas to its allies as gifts or loans them to foreign zoos at a million dollars per year (terms that also apply to any offspring born abroad), panda cubs draw in domestic tourism, and large numbers of an animal that doesn't breed well in captivity are bred in order to release ten socially maladjusted giant pandas, eight of which don't survive.

Pandas aren't hogging conservation dollars because:

1) The money isn't conservation money that would otherwise go to other species; it's politics and business.
2) The benefits that would protect wild pandas (protecting large, intact tracts of land) would also help a wide array of wildlife. This is a general trend with megafauna: they need more space and are disproportionately impacted by habitat loss, which is the leading cause of species extinction, and they are charismatic, functioning as umbrella species that protect whole ecosystems.
3) The most effective ways to save panda populations aren't being acted upon in the first place.

Side note: I do think pandas are an obvious place to start when it comes to genetically modifying wildlife. Considering they are charismatic megafaunal herbivores that normally have twins but always abandon one offspring in the wild (because bamboo is too low-calorie, compared to what their omnivore ancestors ate, to feed both twins), modifying them to only produce one offspring at a time feels like a no-brainer, assuming we can still get their numbers up.

Yes, everything you said sounds correct.

My guess is that most money that is "raised using a picture of a panda" actually goes to conservation broadly.

Maybe advocacy that focuses on megafauna is more mixed in value and not negative (but this seems really complicated and I don't really have any good ideas here).

Finally, I didn't read the article, but slurs against an animal species seem like really bad thinking. Claims that pandas or other animals are to blame for their situation are almost always a misunderstanding of evolution/fitness, because, as you point out, they basically evolved perfectly for their natural environment.

Thanks for this excellent note.

Hi Cate, thank you for your courage in expressing potentially controversial claims; I upvoted (but not strongly) for this reason.

I am not a computer or AI scientist. But my guess is that you are probably right, if by "predictable" we mean "predictable to humans only". For example, in a paper (not yet published) Peter Singer and I argue that self-driving cars should identify animals that might be in the way and dodge them. But we are aware that the costs of detection and computation will rise, and that the AI will have more constraints in its optimization problem. As a result, the cars might be more expensive, and they might be willing to sacrifice some human welfare, such as by causing discomfort or fright to passengers while braking violently for a rat crossing.

But maybe this is not a reason to worry. If, just as most of the stakes/wellbeing lie in the future, most of the stakes and wellbeing lie with nonhuman animals, maybe that's a bullet we need to bite. We (longtermists) probably wouldn't say we worry that an AI that cares about the whole future would be a lot less predictable with respect to the welfare of current people; we are more likely to say this is how it should be.

Another reason not to over-worry is that human economics will probably constrain that from happening to a high extent. Using the self-driving car example again, if some companies' cars care about animals and some don't, the cars that don't will, other things being equal, be cheaper and safer for humans. So unless we miraculously convince all car producers to take care of animals, we probably won't have the "problem" (and for me, the fact that we won't get "that problem" is the actual problem). The point probably goes beyond just economics; politics, culture, and human psychology possibly all have similar effects. My sense is that as long as humans are in control of the development of AI, AI is more likely to be too human-centric than not human-centric enough.

This is one of the reasons I care about AI in the first place, and it's a relief to see someone talking about it. I'd love to see research on the question: "Conditional on the AI alignment problem being 'solved' to some extent, what happens to animals in the hundred years after that?"

Some butterfly considerations:

  1. How much does it matter for the future of animal welfare whether current AI researchers care about animals?
    1. Should responsible animal advocates consider trying hard to become AI researchers?
    2. If by magic we 'solve' AI by making it corrigible-to-a-certain-group-of-people, and that corrigible AI is still (by magic) able to do pivotal things like prevent other powerfwl AIs from coming into existence, then the values of that group could matter a lot.
  2. How likely is it that some values get 'locked in' for some versions of 'solved AI'? It doesn't matter whether you think locked-in values don't count as 'solved'. I'm not here to debate definitions, just to figure out how important it is to get some concern for animals in there if the set of values is more or less inelastic to later changes in human values, e.g. due to organizational culture or intentional learning-rate decay in its value learning function or something; I have no clue.
  3. For exactly the same reasons it could be hard for the AI to understand human preferences due to 'The Pointers Problem', it is (admittedly to a lesser extent) hard for humans to understand animal preferences due to the 'Umwelt Problem': what animals care about is a function of how they see their own environment, and we might expect less convergence in latent categories between the umwelts of lesser intelligences. So if an AI being aligned means that it cares about animals to the extent humans do, it could still be unaligned with respect to the animals' own values to the extent humans are mistaken about them (which we most certainly are).

So if an AI being aligned means that it cares about animals to the extent humans do, it could still be unaligned with respect to the animals' own values to the extent humans are mistaken about them (which we most certainly are).

 

I very much agree with this. This will actually be one of the topics I will research in the next 12 months, with Peter Singer.

Love this. It's one of the things on my "possible questions to think about at some point" list. My motivation would be:

  1. Try to figure out what specific animals care about. (A simple sanity check here is to try to figure out what a human cares about, which is hard enough. Try expanding this question to humans from different cultures, and it quickly gets more and more complicated.)
  2. Try to figure out how I'm figuring out what animals care about. This is the primary question, because we want to generalize the strategies for helping beings that care about different things than us. This is usefwl not just for animals, but also as a high-level approach to the pointers problem in the human case as well.

Most of the value of the project comes from 2, so I would pay very carefwl attention to what I'm doing when trying to answer 1. Once I make an insight on 1, what general features led me to that insight?

There's another communication-focused initiative, Interspecies Internet, to use AI/ML to foster interspecies communication between human and nonhuman animals that seems like it might be relevant here, albeit somewhat different from Project CETI. Interspecies Internet seems to have gained some traction outside of EA and their projects may be of some interest here. 

This might be a bit late, but I reckon it might be quite relevant to put this here in this thread. Here's my paper with Peter Singer on AI Ethics: The Case for Including Animals:

https://link.springer.com/article/10.1007/s43681-022-00187-z

I'm excited to see this post, thank you for it! 

I also think much more exploration and/or concrete work needs to be done in this "EA+AI+animals" (perhaps also non-humans other than animals) direction, which (I vaguely speculate) may extend far beyond the vicinity of the Project CETI example that you gave. Up till now, this direction seems almost completely neglected. 

There is the Earth Species Project, an "open-source collaborative and non-profit dedicated to decoding non-human language" co-founded by Aza Raskin and based in Berkeley. It seems like Project CETI but for all other-than-human species. They're just getting started, but I'm truly excited by such projects and the use of AI to bridge umwelts. Thanks for the post.

Not sure why my link is just rerouting to this page; the URL is earthspecies.org

A project called Evolving Language was also hiring an ML researcher to "push the boundaries of unsupervised and minimally supervised learning problems defined on animal vocalizations and on human language data".

There's also DeepSqueak, which studies rat squeaks using DL. But their motive seems to be to do better, and more, animal testing. (Not suggesting this is necessarily net bad.)

Thank you for this post! I am really interested in this intersection!

Let's say my cause area is helping the most animals. Is it better to donate to animals directly or to AI alignment research? If the answer is AI alignment research, where is the best fund to donate to?

A few other people and I are discussing how to start some new charities along the lines of animals and longtermism, which includes AI. So maybe that's what we need in EA before we can talk about where to donate to help steer AI toward better care for animals.

Not sure what is meant by "donate to humans directly"? 

Also, I suggest not limiting yourself to these two categories, as there are likely better areas to donate to in order to help advance the "AI for animals" direction (e.g. supporting individuals or orgs doing high-impact work in this specific direction (if there aren't any currently, consider committing donations to future ones), or, even better, starting a new initiative if you're a good fit and have good ideas).

Sorry, typo: I meant donate to animals directly.
