Epistemic status: Very quickly written, on a thought I've been holding for a year and haven't seen written up elsewhere.

I believe that within this decade, there could be AGIs (Artificial General Intelligences) powerful enough that the values they pursue have at least a partial lock-in effect: they could have a long-lasting impact on the future values and trajectory of our civilization (assuming we survive).

This brief post aims to share the idea that if your primary focus and concern is animal welfare (or digital sentience), you may want to consider engaging in targeted outreach on those topics towards those who will most likely shape the values of the first AGIs. This group likely includes executives and employees in top AGI labs (e.g. OpenAI, DeepMind, Anthropic), the broader US tech community, as well as policymakers from major countries.

Due to the risk of lock-in effects, I believe that the values of relatively small groups of individuals like the ones I mentioned (fewer than 3,000 people in top AGI labs) might have a disproportionately large impact on AGI, and consequently, on the future values and trajectory of our civilization. My impression is that, generally speaking, these people currently

a) don't prioritize animal welfare significantly 

b) don't show substantial concern for the sentience of digital minds.

Hence, if you believe these things are very important (which I do), and you think that AGI might come in the next few decades[1] (which a majority of people in the field believe), you might want to consider this intervention.

Feel free to reach out if you want to chat more about this, either here or via the contact details you can find here.

  1. ^

    Even more so if you believe, as I do along with many software engineers in top AGI labs, that it could happen this decade.

Comments

Strongly agree that if lock-in happens, it will be very important for those controlling the AIs to care about all sentient beings. My impression of top AGI researchers is that most take AI sentience pretty seriously as a possibility, and it seems hard for someone to think this without also believing animals can be sentient.

Obviously this is less true the further you get from AI safety/OpenAI/DeepMind/Anthropic. An important question is, if AGI happens and the control problem is solved, who ends up deciding what the AGI values?

I'm pretty uncomfortable with the idea of random computer scientists, tech moguls, or politicians having all the power. Seems like the ideal to aim for is a democratic process structured to represent the reflective interests of all sentient beings. But this would be extremely difficult to do in practice. Realistically I expect a messy power struggle between various interest groups. In that case, outreach to leaders of all the interest groups to protect nonhuman minds is crucial, as you suggest.

I wrote some related thoughts here, curious what you think.

> My impression of top AGI researchers is that most take AI sentience pretty seriously as a possibility, and it seems hard for someone to think this without also believing animals can be sentient.

I am not saying this is common, but it is alarming that Eliezer Yudkowsky, a pretty prominent figure in the space, thinks that AI sentience is possible but nonhuman animals are not sentient.

Agreed, it's a pretty bizarre take. I'd be curious whether his views have changed since he wrote that FB post.

This is indeed a good idea (although it isn't that clear to me how targeted outreach to people there would work, but I haven't done targeted outreach before).

A future in which the current situation continues, but with AI making us more powerful, would in all likelihood be a very bad one if we include farmed animals (it gets more complicated if you include wild animals).

See the following relevant articles:

 Optimistic longtermism would be terrible for animals

Longtermist plans to colonize space and expand as much as possible would be horrifying for animals. We could bring factory farms, animal testing, and Earth animals to other planets.

If we don't end factory farming soon, it might be with us forever:

Values and habits can lock in for a long time, especially habits like meat eating.

To me, it seems likely that the "expected value" of the future depends mostly on what happens to farmed and wild animals. See the Moral Weight project: "Given hedonism and conditional on sentience, we think (credence: 0.65) that the welfare ranges of humans and the vertebrate animals of interest are within an order of magnitude of one another".

Why the expected numbers of farmed animals in the far future might be huge

The case could be made that the case for concern about farmed animals in the far future is at least as strong as the case for concern about humans in the far future (a rough back-of-the-envelope sketch at the end of this comment illustrates the expected-value point).

Longtermists tend to be super optimistic that alternative proteins and cultured meat will become more efficient and cheaper, which baffles people working in animal welfare.

[Reasons include that factory farming goes beyond "cows + chickens + pigs" and could come to include fish and insects. It also includes silk, fashion, pigments, medical supplies, and experimentation. Current AI is already boosting factory farming.]
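
To make the expected-value point above concrete, here is a minimal back-of-the-envelope sketch in Python. All of the numbers (population sizes, relative welfare ranges, average welfare levels, sentience credences) are placeholders I made up for illustration, not estimates from the Moral Weight project; the only point is that when welfare ranges are within roughly an order of magnitude of humans' while animal populations are orders of magnitude larger, the animal terms can dominate the total.

```python
# Rough, illustrative back-of-the-envelope sketch; every number below is made up.
# Claim being illustrated: if animal welfare ranges are within ~an order of
# magnitude of humans' while animal populations are orders of magnitude larger,
# the welfare of farmed and wild animals can dominate the expected value of a
# future scenario.

def expected_welfare(population, welfare_range, avg_welfare, p_sentience=1.0):
    """Expected welfare contribution of one group.

    population:     number of individuals in the scenario (placeholder)
    welfare_range:  welfare range relative to a human's (1.0 = human)
    avg_welfare:    average welfare within that range, in [-1, 1]
    p_sentience:    credence that members of the group are sentient
    """
    return population * welfare_range * avg_welfare * p_sentience

# Hypothetical far-future scenario in which factory farming persists.
groups = {
    "humans":         expected_welfare(1e10, 1.0, +0.5),
    "farmed animals": expected_welfare(1e12, 0.2, -0.5),
    "wild animals":   expected_welfare(1e14, 0.1, -0.1, p_sentience=0.5),
}

for name, value in groups.items():
    print(f"{name:>15}: {value:+.2e}")
print(f"{'total':>15}: {sum(groups.values()):+.2e}")
```

On these made-up numbers, the farmed- and wild-animal terms swamp the human term, which is the sense in which the expected value of the future could depend mostly on what happens to them.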

Thanks for writing this! I have been meaning to write something about why I think digital sentience should potentially be prioritized more highly in EA; in lieu of that post, here's a quick pitch:
 

  1. One of EA's comparative advantages seems to have been "taking ideas seriously." Many of EA's core ideas came from other fields (economics, philosophy, etc.); the unusual aspect of EA is that we didn't treat invertebrate welfare or Famine, Affluence, and Morality as mere intellectual thought experiments, but as serious issues.
  2. It seems possible to me that digital welfare work will, by default, exist as an intellectual curiosity. My sample of AI engineers is skewed, but my sense is that most of them would be happy to discuss digital sentience for a couple of hours but are unlikely to focus on it heavily.
  3. Going from "that does seem like a potentially big problem, someone should look into that" to "I'm going to look into that" is a thing that EAs are sometimes good at doing.

On (2): I agree most are unlikely to focus on it heavily, but convincing some people at top labs to care at least slightly seems like it could have a big effect in making sure at least a little animal welfare and digital minds content is included in whatever they train AIs to aim towards. Even a small amount of empathy and open-mindedness about what could be capable of suffering should do a lot to reduce the risk of astronomical suffering.

I'm not too confident that AGIs would be prone to value lock-in. Possibly I am optimistic about AI, but AI already seems quite good at working through ethical dilemmas and acknowledging that there is nuance and that there are conflicting views on morals. It would seem like quite the blunder to simply regard the morals of those closest to them as the ones of most importance.

But AIs could value anything. They don’t have to value some metric of importance that lines up with what we care about on reflection. That is, it wouldn’t be a blunder in an epistemic sense. AIs could know their values lack nuance and go against human values, and just not care.

Or maybe you're just saying that, with the path we're currently on, it looks like powerful AIs will in fact end up with nuanced values in line with humanity's. I think this could still constitute a value lock-in, though, just not one that you consider bad. And I expect there would still be value disagreements between humans even if we had perfect information, so I'm skeptical we could ever instill values into AIs that everyone is happy with.

I’m also not sure AI would cause a value lock-in, but more because powerful AIs may be widely distributed such that no single AGI takes over everything.

Interesting, I wonder if AGI will have a process for deciding its values (like a constitution). But then the question is how it decides what that process is (if there is one).

I thought there might be a connection between an AGI having a nuanced process for picking its values and its problem-solving ability (e.g. the ability to end the world), such that having the ability to end the world would mean it also has a good ability to work through the nuances of its values, and might conclude that ending the world isn't valuable. Possibly this connection might not always exist, in which case epic sussyness may occur.

Yeah, there might be a correlation in practice, but I think intelligent agents could have basically any random values. There are no fundamentally incorrect values, just some values that we don't like or that you'd say lack nuance about what's important. Even under moral realism, intelligent systems don't necessarily have to care about the moral truth (even if they're smart enough to figure out what the moral truth is). Cf. the orthogonality thesis.

I mean, I agree that it has nuance, but it's still trained on a set of values that are pretty much the values of current Western people, so it will probably put more or less emphasis on various values according to the weight Western people give to each of them.

Not too sure how important values in datasets would be. Possibly AGIs may be created differently from current LLMs, simply not needing a dataset to be trained on.

This idea sounds good and your website looks great (best of luck with your projects! :)

Thanks for sharing, Simeon!

I guess part of the lack of concern for artificial sentience is explained by people at top labs focusing on aligning AGI with human values, rather than impartial value (relatedly). Ensuring that AI systems are happy seems like a good strategy to increase impartial value. It would lead to good outcomes even in scenarios where humans become disempowered. Actually, the higher the chance of humans becoming disempowered, the more pressing artificial sentience becomes? I suppose it would make sense for death with dignity strategies to address this (some discussion).

Peter Singer and Tse Yip Fai were doing some work on animal welfare relating to AI last year: https://link.springer.com/article/10.1007/s43681-022-00187-z It looks like Fai, at least, is still working in this area, but I'm not sure whether they have considered or initiated outreach to AGI labs; that seems like a great idea.
