Why do you think deep ecology values would not get locked into an AI system? Presumably there are ethical priorities we might want an AI system to hold or be constrained by that are not instrumental for anything else, so it's not obvious to me that a value needs to be instrumental in order to stick around.
Under what conditions do you think an AI system would possess values that lead it to populate other worlds with animals? If deep ecology values are not locked in, I would think that makes AIs much less likely to spread biological life to other planets or terraform them to enable this. I could see something more like "recklessness", where an AI accidentally spreads life in pursuit of some other goal, but I have a hard time seeing what human-aligned goal set that explicitly excludes deep ecology would lead it to seed other planets with wildlife. Maybe something adjacent, like pet planets for people to entertain themselves with, or other cases where humans want animals around for a purely hedonistic purpose that is orthogonal to the animals' welfare?
This will be more of a loose collection of weakly held takes and what I see as cruxes for this question than a firm position.
I think that for farmed animal welfare, the crux is: "If all the technical barriers to cultivated meat / brainless chickens / something that can outcompete meat in the market are solved, so that we reach the theoretical optimum for suffering-free meat or meat alternatives, will economies of scale still keep conventional meat in too stable an economic equilibrium for it to get outcompeted?"
i.e. I have a strong intuition that the theoretically optimal way to produce meat, or something experientially superior to it, does not require raising sentient beings in horrific conditions, so I think whether things go well for farmed animals is mostly a function of economic or cultural lock-in.
For wild animal welfare, I worry significantly more about lock-in of, e.g., the naturalistic fallacy that natural systems are good as they are and that we should abstain from any intervention. Whether this dominates long-term animal welfare concerns depends on the relative numbers of wild vs farmed vs other (pet?) animals. I think wild animal welfare does become a tractable problem post-AGI; the question will be whether there is the will to solve it, and whether those with the will are able to (i.e. how much does this require global agreement/coordination that won't be possible due to lock-in?).
Cruxes related to this: To what extent will animal advocates be enabled by AGI, to what extent will they need to be unilateralists, and to what extent could their actions backfire?
AGI going well for animals seems to depend on how much it amplifies the agency of animal advocates, a channel that could be closed off by power concentration (i.e. in some worlds, AI never gets pointed at problems like cultivated meat or wild animal welfare). That said, I would not classify power-concentrating outcomes that leave some humans without meaningful agency over the world as "good for humans".
In general, I think this question as framed is going to produce a lot of false disagreement due to diverging definitions of "AGI going well for humans". For me, things like lock-in of current values, power concentration, and a loss of agency over the future are firmly in the bucket of "AGI did not go well for humans", which potentially cuts out a lot of the failure modes for animals.
Crux: Can AI be robustly aligned to animal preferences in principle? There are a lot of potential concerns around specification gaps (e.g. you optimize for a proxy of animal welfare, not the real thing) for the long-term future of AI and animals. That said, I don't think this is something a sufficiently powerful AI could not solve with sufficiently advanced technology. If preference/welfare is a physical structure in the brains of animals, a hypothetical AI could determine exactly what to optimize for without needing some external feedback channel, though it might still need to calibrate its model of which structures correspond to preference against e.g. behaviors that communicate preferences, or analogies to creatures for which this is clearer.
Crux: How many actors have terminal preferences for suffering? Agency may be amplified for animal advocates, but it could also be amplified for malevolent actors.
Also, some arguments I do not quite buy:
Stability of regulations, culture, and consumer preferences keeping cultivated meat or something similar off the table. I think post-AGI, a lot of things that are not baked into the trajectory of technology or the preferences of agents will get eroded. For instance, I do not expect particular state regulations on cultivated meat to have much stability into the long-term future. The exception is if these things get locked in -- I am not immediately sure how likely this is, but I have a harder time imagining things like a terminal consumer preference for slaughtered meat or current regulations getting locked in than some broad moral intuition, e.g. that humans are a higher moral priority because they are "smarter", or conservationism over welfarism w.r.t. wild systems.
That because meat consumption is on the rise globally, post-TAI trends will look the same. I think this hinges too much on AI being a normal technology. If we had a strong reason to believe it was physically impossible to produce meat, or something better, without suffering, this would change my mind, but I do not think this is true. I would put >85% probability on factory farming not existing 100 years post-AGI, because optimizing for something consumable is more likely to optimize away the suffering than to keep it around by happenstance. I could see things getting worse in the near term due to precision livestock farming, until a phase shift later this century makes vat meat economically competitive at scale.
Nice points! A few questions: