As things stand, AGI going well for humans doesn’t translate to AGI going well for animals. There is, however, a world in which AGI has been trained to recognise sentience as its metric for moral consideration, which in turn would result in AGI going well for humans and animals alike.
Say we define "going well" simply as "better than the status quo". For a human, "going well" might mean radical life extension or a post-scarcity economy. Because the status quo for animals (especially factory-farmed animals) is already so dire, "going well" might mean nothing more than a slight improvement in living conditions, such as marginally less cramped cages. In that sense, human and animal interests are currently very misaligned.
Assuming AGI solves problems humanity can't (such as war, mental illness, climate change, and resource limitations), it will likely require some level of control over human actions. If we lose our autonomy to an AGI but it ends factory farming, did things "go well"? Or does the loss of what makes us human outweigh the reduction in suffering?
Without timely interventions, such as training AGI to recognise sentience, we could all too easily slide into a "Hyper-Efficiency" trap when it comes to the welfare of farmed animals:
Precision Suffering: AI could monitor disease and stress just enough to pack animals into even higher densities without them dying, maximizing profit while ignoring the pain threshold.
Automated Sentience: AGI could bring trillions of insects or fish into existence in zero-welfare, fully automated systems.
Genetic Masking: We could see AI-accelerated breeding for "pain-insensitive" animals, which might hide visible distress while the internal sentient experience remains one of suffering.
Humane-washing: Minor AI fixes, like identifying a single sick cow, could be used as a "Welfare Facade" to hide systemic cruelty from the public.
On the flip side, the positive path is equally transformative. AI could render slaughterhouses obsolete by making cultivated meat cheaper and more accessible than farmed meat. Even more interestingly, AI "translation" of animal vocalizations could create a social breakthrough. If we can decode and prove an animal's expressed distress in real-time, it becomes much harder for human society to ignore their sentience.
If AGI/TAI is powerful enough to cause a fundamental break in our current legal and social systems, then animal advocacy needs to shift. For example, should the long-term goal of animal advocacy be to influence the "Constitutions" of labs like OpenAI, Anthropic, and DeepMind to ensure that "sentience" (the ability to feel), rather than "intelligence," is the metric for moral consideration?
As things stand, AGI going well for humans does not automatically translate to it going well for animals. We could easily build a human utopia that functions as an animal nightmare. To avoid that, we need to ensure AI is trained to recognize sentience as a non-negotiable variable in its moral calculus.