Animal welfare is what brought me to EA. I spent several years working for animal advocacy organisations, and the EA ideal of thinking rigorously about where effort makes the biggest difference was something I believed in fully.
This post is me thinking aloud, not staking a firm position. I'd genuinely welcome pushback from people who know this space better than I do.
The framing of the problem is a bit odd to me
The AI x animals argument, as I understand it: AI systems are making decisions that affect how we use animals. Those systems don't adequately represent animal welfare. If we can get welfare into the benchmarks/constitutions of AI labs, we can shift outcomes for animals at huge scale before they get locked in. Okay.
But nobody is 'ignoring' animal welfare; they're just indifferent. AI systems are being built to do exactly what they were designed to do, which is to faithfully execute human preferences. And those preferences are, in aggregate, to eat cheap meat, conduct research on living organisms when it's convenient, and prioritise cost and efficiency in agricultural supply chains. AI is reflecting the values of the humans who build and use it. I don't think you can sneak different values in, unless there are specific opportunities to tweak things here and there before they get cemented.
Are there such opportunities? So far, I can't break this down to anything tangible. 'If we don't do anything, the systems will become entrenched and determine animal outcomes for decades to come' - what systems? What outcomes? Who, where? Can someone give me a few clear examples of tractable situations?
Naming a situation isn't enough. I'll grant there are situations that represent a genuine fork in the road: the EU AI Act is a real regulatory framework being implemented now. Procurement systems at major food companies are being built. Agricultural AI platforms from John Deere and Bayer are being deployed. But how is any of this tractable? What are you hoping orgs/grantees/EA people can actually do about those things?
It seems like the best we could hope for is to add a thin layer of consideration on top of a reality (the collective attitudes of humans towards animals) that will bypass our efforts the moment it conflicts with something humans really do care about. Like profit.
The point I've seen raised about Claude's constitution containing only one line on animal welfare seems, first, arbitrary (it doesn't matter how many words there are, only what the words say), and second, an accurate reflection of the real situation: our attitudes as a whole. Focusing on 'making AI go better for animals' by convincing AI labs to suddenly care seems to be addressing the symptom rather than the cause.
Where I think AI might help
What if, instead of trying to push concern for farmed or wild animals into AI labs situationally, we used AI to make traditional animal agriculture obsolete? Maybe that's where funding should go; at least, there are clearly definable ways that AI could catalyse that outcome:
Cell culture optimisation is an enormous search space: finding the exact combination of nutrients, temperatures, and growth factors that makes cells proliferate efficiently. AI can model and run simulated experiments at a speed that wet-lab trial and error cannot match.
Scaffold design, one of the hardest unsolved problems in cultivated meat, involves getting cells to grow in three-dimensional structures that actually resemble meat texture. AI can help design and test scaffolding materials and geometries by modelling cell behaviour computationally.
I've also read that AI can optimise production processes in ways that could drive costs down dramatically.
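To make the search-space point above concrete, here is a toy sketch (every number and parameter is invented for illustration, not real cell biology): a computer can 'run' thousands of simulated culture-condition experiments in milliseconds, where each wet-lab equivalent would take days.

```python
# Toy illustration of search over hypothetical culture conditions.
# The objective function is made up: it peaks near invented 'ideal'
# values of glucose, a growth factor, and temperature.
import random

def simulated_proliferation(glucose_g_l, growth_factor_ng_ml, temp_c):
    """Stand-in for a predictive model of cell growth (assumed, not real)."""
    return (
        -((glucose_g_l - 4.5) ** 2)          # penalty for straying from 4.5 g/L
        - 0.01 * (growth_factor_ng_ml - 20) ** 2  # weak penalty around 20 ng/mL
        - 2 * (temp_c - 37) ** 2             # strong penalty away from 37 °C
    )

random.seed(0)
best_score, best_params = float("-inf"), None
for _ in range(10_000):  # 10,000 simulated 'experiments'
    params = (
        random.uniform(0, 10),    # glucose, g/L
        random.uniform(0, 100),   # growth factor, ng/mL
        random.uniform(30, 42),   # temperature, °C
    )
    score = simulated_proliferation(*params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params, best_score)
```

Even naive random search homes in on the invented optimum here; real platforms use far smarter methods (Bayesian optimisation, active learning), which is exactly the leverage the paragraph above is gesturing at.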
Each of these seems like a more robust theory of change to me than 'do something to prevent detrimental lock-in'.
What I'm uncertain about
The regulatory and scaling challenges that cultivated meat faces are large and I'm not qualified to assess them fully. I'm aware cultivated meat has had a difficult few years commercially and faces active political opposition in some markets. I don't know if those are terminal or temporary problems.
It's also possible I'm underestimating the leverage of getting welfare into AI systems; maybe one well-placed benchmark really does shift how frontier labs think, and that ripples out in ways I'm not aware of.
If that's so, then can someone tell me, in plain English, what that looks like? E.g. '[lab] is currently planning [this development]. If we do [this action], we can change it to [this outcome], which will mean [x number] of animals experience [less suffering, presumably].'
At the very least, if there is a much clearer plan for impact behind 'making AI go better for animals', it ought to be communicated more concretely than what I've seen so far - otherwise people in this space will either write posts like this one or just go along with the trend without understanding it.
TLDR: I don't understand the tractability of 'make AI go better for animals' except for where it may speed up our path to cultivated meat adoption, which isn't mentioned in any of the 'make AI go better for animals' stuff that I've read.
