“Do you know what the most popular book is? No, it’s not Harry Potter. But it does talk about spells. It’s the Bible, and it has been for centuries. In the past 50 years alone, the Bible has sold over 3.9 billion copies. And the second best-selling book? The Quran, at 800 million copies.
As Oxford Professor William MacAskill, author of the new book “What We Owe The Future”—a tome on effective altruism and “longtermism”—explains, excerpts from these millennia-old schools of thought influence politics around the world: “The Babylonian Talmud, for example, compiled over a millennium ago, states that ‘the embryo is considered to be mere water until the fortieth day’—and today Jews tend to have much more liberal attitudes towards stem cell research than Catholics, who object to this use of embryos because they believe life begins at conception. Similarly, centuries-old dietary restrictions are still widely followed, as evidenced by India’s unusually high rate of vegetarianism, a $20 billion kosher food market, and many Muslims’ abstinence from alcohol.”
The reason for this is simple: once rooted, value systems tend to persist for an extremely long time. And when it comes to factory farming, there’s reason to believe we may be at an inflection point.”
Read the rest on Forbes.
I agree that this is an important issue, and it feels like time is ticking down on our window of opportunity to address it. I can imagine some scenarios in which this value lock-in could play out.
At some point, AGI programmers will have the opportunity to train AGI to recognize suffering versus happiness as a strategy for optimizing it to do the most good. Will those programmers think to include non-human species? I can imagine a scenario in which programmers with human-centric worldviews only think to include datasets of pictures and videos of human happiness and suffering. But if the programmers value animal sentience as well, they could include datasets covering different types of animals too!
Ideally, the AGI could identify some happiness/suffering markers that apply to most nonhuman and human animals (vocalizations, changes in movement patterns, or changes in body temperature), but if it can't, then we may need to segment out different classes of animals for individual analysis. How, for example, would an AGI reliably figure out when a fish is suffering?
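To make that concrete, here is a minimal, purely illustrative Python sketch of what a cross-species "suffering score" built from those shared markers might look like. Every feature name, weight, and threshold here is a placeholder I invented for illustration, not anything drawn from actual animal welfare research.

```python
from dataclasses import dataclass

# Hypothetical welfare markers for one observation of any animal.
# Feature names, weights, and thresholds are illustrative placeholders only.
@dataclass
class WelfareObservation:
    distress_call_rate: float  # vocalizations per minute, normalized to a species baseline
    movement_change: float     # fractional change from the animal's typical activity level
    temp_deviation_c: float    # body temperature deviation from the species norm, in degrees C

def suffering_score(obs: WelfareObservation) -> float:
    """Toy scoring rule: combine normalized markers into a 0-1 'likely suffering' score.
    In practice, each weight would need species-specific validation."""
    return (
        0.5 * min(obs.distress_call_rate, 1.0)
        + 0.3 * min(abs(obs.movement_change), 1.0)
        + 0.2 * min(abs(obs.temp_deviation_c) / 2.0, 1.0)
    )

# Example: a hen whose distress calls and movement both shift sharply from her baseline.
hen = WelfareObservation(distress_call_rate=0.8, movement_change=-0.6, temp_deviation_c=0.5)
print(f"suffering score: {suffering_score(hen):.2f}")  # 0.63
```

Even this toy version shows the problem with a single shared rule: the "right" baselines and weights would differ wildly between, say, a hen and a fish, which is exactly why segmenting classes of animals might be unavoidable.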
And on top of all this, they would need to program the AGI to weigh different animals according to moral weights, which we are still woefully unclear about.
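As a toy illustration of why those weights matter so much, here is a hedged sketch of how an optimizer's objective might fold moral weights into an aggregate suffering measure. The specific weight values are invented purely for illustration and do not reflect any real estimates.

```python
# Hypothetical moral weights relative to a human (= 1.0).
# These numbers are placeholders for illustration; actual moral weights are deeply uncertain.
MORAL_WEIGHTS = {"human": 1.0, "pig": 0.5, "chicken": 0.1, "salmon": 0.05}

def weighted_suffering(population_suffering: dict[str, float]) -> float:
    """Aggregate estimated suffering across species into one number an optimizer
    could try to minimize. Each species' total suffering is scaled by its moral weight."""
    return sum(
        MORAL_WEIGHTS.get(species, 0.0) * suffering
        for species, suffering in population_suffering.items()
    )

# Example: suffering estimates (arbitrary units) for three populations.
estimates = {"human": 10.0, "chicken": 500.0, "salmon": 2000.0}
print(weighted_suffering(estimates))  # 10*1.0 + 500*0.1 + 2000*0.05 = 160.0
```

Notice how the answer swings entirely on the chosen weights: set the salmon weight to zero and a vast amount of estimated suffering simply drops out of the objective.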
There is just so much we don't know about how to quantify animal suffering and happiness that would be relevant to programming AGI. It would be great to identify these factors so we can eventually get that research into the hands of the AGI programmers who become responsible for AI take-off. Of course, all this research could have negligible impact if the key AGI programmers do not think animal welfare is an important enough issue to take on.
Are there any AI alignment researchers currently working on the issue of including animals in the development of AI safety and aligned goals?