“Do you know what the most popular book is? No, it’s not Harry Potter. But it does talk about spells. It’s the Bible, and it has been for centuries. In the past 50 years alone, the Bible has sold over 3.9 billion copies. And the second best-selling book? The Quran, at 800 million copies.

As Oxford Professor William MacAskill, author of the new book “What We Owe The Future”—a tome on effective altruism and “longtermism”—explains, excerpts from these millennia-old schools of thought influence politics around the world: “The Babylonian Talmud, for example, compiled over a millennium ago, states that ‘the embryo is considered to be mere water until the fortieth day’—and today Jews tend to have much more liberal attitudes towards stem cell research than Catholics, who object to this use of embryos because they believe life begins at conception. Similarly, centuries-old dietary restrictions are still widely followed, as evidenced by India’s unusually high rate of vegetarianism, a $20 billion kosher food market, and many Muslims’ abstinence from alcohol.”

The reason for this is simple: once rooted, value systems tend to persist for an extremely long time. And when it comes to factory farming, there’s reason to believe we may be at an inflection point.”

Read the rest on Forbes.


Thanks for sharing, Brian!

If you don't mind, I'll copy the two parts that stood out to me the most and best clarified the point for me. If these points are valid, and I do think the logic makes sense, then this is quite concerning. Would love to hear other people's thoughts on this.

And here’s the crux: If it (AGI) arrives, it may lock-in the values that exist at the time, including how we think about and treat animals on factory farms. This is because an AGI could be coded to reflect the preferences of the programmer—a potentially powerful individual or institution, since it’s unlikely this technology will emerge in a decentralized way given the capital and technical expertise required to build it—for the purpose of assisting them in achieving their and what they believe should be society’s goals, and one of those goals might be raising animals for food. What’s more, an AGI would be able to figure out how to farm animals in even more efficient ways, decreasing the cost of meat—which most people would celebrate—and increasing the profit margins of those who stand to benefit from this technology. No human would be more powerful than an AGI, so whatever force aims an AGI would have more power than any force that does not have that ability.

This value lock-in, combined with the fact that an AGI would not be hard to replicate, makes it such that the values encoded into the AGI could exist for as long as the universe can support life. As MacAskill writes, “There’s nothing different in principle between the software that encodes Pong and the software that encodes an AGI. Since that software can be copied with high fidelity, an AGI can survive changes in the hardware instantiating it. AGI agents are potentially immortal.”

I agree that this is an important issue, and it feels like time is ticking down on our window of opportunity to address it. I can imagine some scenarios in which this value lock-in could play out.

At some point, AGI programmers will have the opportunity to train AGI to recognize suffering vs. happiness so it can be optimized to do the most good. Will those programmers think to include non-human species? I could see a scenario where programmers with human-centric worldviews would only think to include datasets with pictures and videos of human happiness and suffering. But if the programmers value animal sentience as well, then they could include datasets of different types of animals too!

Ideally the AGI could identify some happiness/suffering markers that apply to most nonhuman and human animals (vocalizations, changes in movement patterns, or changes in body temperature), but if such universal markers can't be found, then we may need to segment out different classes of animals for individual analysis. For example, how would an AGI reliably figure out when a fish is suffering?
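To make that a bit more concrete, here's a toy sketch of what a cross-species "distress score" built from markers like these might look like. Everything in it (the feature names, units, and weights) is made up purely for illustration; it's not a real welfare model.

```python
# Purely illustrative toy sketch: a cross-species "distress score" built from
# the kinds of markers mentioned above. All feature names, units, and weights
# are made-up assumptions, not a real welfare model.
from dataclasses import dataclass

@dataclass
class WelfareObservation:
    species: str                 # e.g. "chicken", "salmon", "pig"
    vocalization_rate: float     # distress calls per minute (hypothetical unit)
    movement_change: float       # deviation from baseline activity, 0..1
    temp_deviation_c: float      # body-temperature deviation in degrees C

def distress_score(obs: WelfareObservation) -> float:
    """Combine the markers into a rough 0..1 distress score (toy weights)."""
    score = (0.5 * min(obs.vocalization_rate / 10.0, 1.0)
             + 0.3 * obs.movement_change
             + 0.2 * min(abs(obs.temp_deviation_c) / 2.0, 1.0))
    return min(score, 1.0)

print(distress_score(WelfareObservation("chicken", 6.0, 0.4, 0.8)))  # ~0.5
```

The hard part, of course, is whether any single set of markers like this generalizes across species at all, which is exactly the fish problem above.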

And on top of all this, they would need to program the AGI to weigh animals according to their moral weights, which we are woefully uncertain about right now.
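Here's an equally rough sketch of where moral weights would come in: once you have per-individual distress scores, you'd need some (currently very uncertain) species weights to aggregate them. The weights below are placeholders, not estimates anyone endorses.

```python
# Toy sketch of aggregating distress across species with moral weights.
# The weights are placeholders for illustration only.
MORAL_WEIGHTS = {"human": 1.0, "pig": 0.5, "chicken": 0.3, "salmon": 0.1}

def weighted_suffering(observations: list[tuple[str, float]]) -> float:
    """observations: (species, distress_score) pairs; returns a weighted total."""
    return sum(MORAL_WEIGHTS.get(species, 0.0) * score
               for species, score in observations)

print(weighted_suffering([("chicken", 0.5), ("pig", 0.9), ("salmon", 0.7)]))
# 0.3*0.5 + 0.5*0.9 + 0.1*0.7 = 0.67
```

Notice that any species left out of the weights table simply counts for zero, which is essentially the human-centric-dataset worry above in miniature.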

There is just so much we don’t know about how to quantify animal suffering and happiness that would be relevant to programming AGI. It would be great to identify these factors so we can eventually get that research into the hands of the AGI programmers who become responsible for AI take-off. Of course, all this research could have negligible impact if the key AGI programmers do not think animal welfare is an important enough issue to take on.

Are there any AI alignment researchers currently working on the issue of including animals in the development of AI safety and aligned goals?

Agree with the sentiment, thanks for the reply!

Of course, all this research could have negligible impact if the key AGI programmers do not think animal welfare is an important enough issue to take on.

Exactly what I was thinking too. Unfortunately I think AGI will move (and likely already is moving) at light speed compared to the inclusion of animals in our moral circle (when has tech not greatly outpaced social movements?). If there's going to be a lock-in, I'm fairly confident it will happen well before we're where we need to be in our relationship with animals, even if we abolish factory farming by then.

So where does that leave us? Infiltrate companies working on AGI? Bring them into our circles and engage in conversations? Entice programmers/researchers with restricted grants (to help shape those datasets)? Physically mail them a copy of Animal Liberation? Are we even ready to engage in a meaningful way?

There are just so many questions. Really thought-provoking stuff.

Are there any AI alignment researchers currently working on the issue of including animals in the development of AI safety and aligned goals?

Would love to know this too! I'm fairly new to this world and still poking around and learning; if I dig anything up, I'll edit this post.

I'm currently working in technical AI safety, and I have two main thoughts on this:
1) We currently don't have the ability to robustly imbue AI with ANY values, let alone values that include all animals. We need to get a lot farther with solving this technical problem (the alignment problem) before we can meaningfully take any actions that will improve the long-term future for animals.
2) The AI Safety community generally seems mostly on board with animal welfare, but it's not a significant priority at all, and I don't think they take seriously the idea that there are S-risks downstream of human values (e.g. locking in wild-animal suffering). I'm personally pretty worried about this, not because I have a strong take about the probability of S-risks like this, but because the general vibe is just so apathetic about this kind of thing that I don't trust them to notice and take action if it were a serious problem.

Thanks for your comment. Are there any actions the EA community can take to help the AI Safety community prioritize animal welfare and take more seriously the idea that there are S-risks downstream of human values?

Archive Link 

As an example of this dynamic he calls “early plasticity, later rigidity,” MacAskill asks us to consider the U.S. Constitution. It was written over 116 days, and amended eleven times in the first six years. But in the last fifty years, it’s only been amended once. I suspect that if we don’t make headway in ending factory farming soon, it’ll be not unlike many of the constitutional laws we find distasteful—seemingly impossible to overturn.

Are there many parts of the constitution that 'we', meaning people in general, find 'distasteful'? My impression is that most of the constitution either has few critics, or, if it has many critics, also has many defenders, or the critics disagree about how to change it. If we were to write it from scratch today, we'd probably end up with something quite different in many respects, but that doesn't mean there are massive generally agreed problems with it.

Yes, there are many things that majorities want that they do not get, especially if there are many people who oppose the change and care a lot about opposing it. A 60:40 split is a long way away from being sufficiently universal 'distaste' that we should expect it to necessarily triumph. This was true when the constitution was first written and remains the case today, so it is not a sign of increasing rigidity.

Why then do you think there have been fewer amendments over time?

I think there are two main reasons:

  1. The low-hanging fruit was picked early; you can only pass the first amendment once.
  2. Changing SCOTUS philosophies, in particular the rise of Living Constitution Doctrine, meant that formal amendments were not required because the Justices would just make up new interpretations of old words to suit contemporary political situations. With the recent fall from favour of this doctrine and the rise of Originalism, it seems possible to me we might see more amendments in the future.

Interesting. Thanks for your comments.

In the meantime, I would treat the constitution component in the piece as a metaphor to illustrate the idea of lock-in for a general audience.

I’d certainly write the constitution differently (why doesn’t it mention welfare for insects, for example?), but I take the broader point to be that numerous amendments were required to make it moral, and many more are still needed.

why doesn’t it mention welfare for insects, for example

Because most people do not care about insect welfare. The issue is not 'rigidity'; no sane amendment process would lead to the constitution mentioning insect welfare. 
