80000 Hours says, "We think intense efforts to reduce meat consumption could reduce factory farming in the US by 10-90%. Through the spread of
more humane attitudes, this would increase the expected value of the future of humanity by 0.01-0.1%."

I'm having trouble understanding how reducing current meat consumption would increase the expected value of the future. I'm not entirely sure whether, by spreading humane attitudes, 80,000 Hours is referring to current humans having more humane attitudes or far-future humans having more humane attitudes. Also, when it says "humane attitudes", I'm not sure if it actually means changing people's terminal values to be more humane, or merely getting people to place a higher value on animal rights by better informing them about how bad conditions are, without changing their terminal values.

If it's referring to making current humans more humane without changing their terminal values, then it's not clear to me how that would improve the far future. People in the far future could presumably have plenty of time to learn on their own what damage factory farming causes if for whatever reason factory farming is still in use.

If it's referring to making current humans more humane by changing their terminal values, then it's not clear to me how this would occur. My understanding is that animal rights activists tend to spend their time showing people how bad conditions are, and I see no mechanism by which this would change people's terminal values. And if they do change people's terminal values to be more humane, and this is what people would like, then I don't see why people in the far future wouldn't just change their terminal values to be more humane on their own.

I've looked around on the Internet for a while for answers to this question, but found none.


Hi Evira — this is an incredibly hard figure to estimate and we haven't decided to deeply investigate the question, so this should basically be viewed as a guess informed by the views of other people involved in effective altruism.

It is also a pretty low figure (in my view), which reflects that we're also skeptical of the size of these effects. But here are some pathways to consider:

  • Animal organisations do sometimes campaign on the moral patienthood of non-humans, and persuade people of this, especially in countries where this view is less common;
  • Getting people to stop eating meat makes it easier for them to concede that the welfare of non-humans is of substantial importance;
  • Fixing the problem of discrimination against animals allows us to progress to other moral circle expansions sooner, most notably from a long-termist perspective, recognising the risks of suffering in thinking machines;
  • Our values might get locked in this century through technology or totalitarian politics, in which case we need to rush to reach something tolerable as quickly as possible;
  • Our values might end up on a bad but self-reinforcing track from which we can't escape, which is a reason to get to something tolerable quickly, in order to make that less likely;
  • Animal advocacy can draw people into relevant moral philosophy, effective altruism and related work on other problems, which arguably increases the value of the long-term future.

Thank you for the detailed response. Some responses to your points:

Our values might get locked in this century through technology or totalitarian politics, in which case we need to rush to reach something tolerable as quickly as possible;

I'm having a hard time thinking of how technology could lock in our values. One possibility is that AGI would be programmed to value what we currently value with no ability to have moral growth. However, it's not clear to me why anyone would do this. People, as best as I can tell, value moral growth and thus would want AGI to be able to exhibit it.

There is the possibility that programming AGI to value only what we currently value, with no possibility of moral growth, would be technically easier. I don't see why this would be the case, though. Implementing people's CEV, as Eliezer proposed, would allow for moral growth. Narrow value learning, as Paul Christiano proposed, would presumably allow for moral growth if the AGI learns to avoid changing people's goals. AGI alignment via direct specification may be made easier by prohibiting moral growth, but the general consensus I've seen is that alignment via direct specification would be extremely difficult and thus improbable.

There's the possibility of people creating technology for the express purpose of preventing moral growth, but I don't know why people would do that.

As for totalitarian politics, it's not clear to me how they would stop moral growth. If there is anyone in charge, I would imagine they would value their personal moral growth and thus would be able to realize that animal rights are important. After that, I imagine the leader would then be able to spread their values onto others. I know little about politics, though, so there may be something huge I'm missing.

I'm also a little concerned that campaigning for animal rights may backfire. Currently many people seem unaware of just how bad animal suffering is. Many people also love eating meat. If people become informed of the extent of animal suffering, then to minimize cognitive dissonance I'm concerned people will stop caring about animals rather than stop eating meat.

So, my understanding is, getting a significant proportion of people to stop eating meat might make them more likely to exhibit moral growth by caring about other animals, which would be useful for one, unlikely to be used, alignment strategy. I'm not saying this is the entirety of your reasoning, but I suspect it would be much more efficient to work on AI alignment directly, either by doing alignment research or by convincing people that such research is important.

Another possibility is to attempt to spread humane values by directly teaching moral philosophy. Does this sound feasible?

Our values might end up on a bad but self-reinforcing track from which we can't escape, which is a reason to get to something tolerable quickly, in order to make that less likely;

Do you have any situations in mind in which this could occur?

Fixing the problem of discrimination against animals allows us to progress to other moral circle expansions sooner, most notably from a long-termist perspective, recognising the risks of suffering in thinking machines;

I'm wondering what your reasoning behind this is.

Animal advocacy can draw people into relevant moral philosophy, effective altruism and related work on other problems, which arguably increases the value of the long-term future.

I'm concerned this may backfire as well. Perhaps people would, after becoming vegan, figure they have done a sufficiently large amount of good and thus be less likely to pursue other forms of altruism.

This might seem unreasonable: performing one good deed does not seem to increase the costs or decrease the benefits of performing other good deeds by much. However, it does seem to be how people act. As evidence, I heard that despite wealth having steeply diminishing returns to happiness, wealthy individuals give a smaller proportion of their money to charities. Further, some EAs have a policy of donating 10% of their income, even if after donating 10% they still have far more money than necessary for living comfortably.

I think many of your concerns will come down to views on the probabilities assigned to certain possibilities.

I'm having a hard time thinking of how technology could lock in our values. One possibility is that AGI would be programmed to value what we currently value with no ability to have moral growth. However, it's not clear to me why anyone would do this. People, as best as I can tell, value moral growth and thus would want AGI to be able to exhibit it.

Even then, the initial values given to the AGIs may have a huge influence, and some of these can be very subjective, e.g. how much extra weight (if any) more intense suffering should receive compared to less intense suffering or to other things we care about, and how much suffering we think certain beings experience in given circumstances.

Implementing people's CEV, as Eliezer proposed, would allow for moral growth.

Besides the CEV being sensitive to people's initial views, people hold contradictory views, so could there not be more than one possible CEV here? Some will be better or worse than others according to EAs who care about the wellbeing of sentient individuals, and if we reduce the influence of worse views, this could make better solutions more likely.

As for totalitarian politics, it's not clear to me how they would stop moral growth. If there is anyone in charge, I would imagine they would value their personal moral growth and thus would be able to realize that animal rights are important. After that, I imagine the leader would then be able to spread their values onto others. I know little about politics, though, so there may be something huge I'm missing.

It's of course possible, but is it almost inevitable that these leaders will value their own personal moral growth enough, and how many leaders will we go through before we get one that makes the right decision? Even if they do value personal moral growth, they still need to be exposed to ethical arguments or other reasons that would push them in a given direction. If the rights and welfare of certain groups of sentient beings are not on their radar, what can we expect from these leaders?

Also, these seem to be extremely high expectations of politicians, who are fallible and often very self-interested, and especially in the case of totalitarian politics.

I'm also a little concerned that campaigning for animal rights may backfire. Currently many people seem unaware of just how bad animal suffering is. Many people also love eating meat. If people become informed of the extent of animal suffering, then to minimize cognitive dissonance I'm concerned people will stop caring about animals rather than stop eating meat.

There is indeed evidence that people react this way. However, I can think of a few reasons why we shouldn't expect the risks to outweigh the possible benefits:

1. Concern for animal rights and welfare seems to be generally increasing (despite increasing consumption of animal products, which is not driven by changing attitudes toward animals), and I think there is popular support for welfare reform in many places, with the success of corporate campaigns, improving welfare legislation, and attitude surveys generally as evidence for this. I think people at Sentience Institute see welfare reforms as building momentum more than justifying complacency.

2. If this is a significant risk, animal product substitutes (plant-based and cultured) and institutional approaches, which are currently prioritized in EA over individual outreach, should help to make the choice to not eat meat easier, so fewer people will resolve their cognitive dissonance this way. People who care about sentient beings can play an important role in the development and adoption of such technologies and reform of institutions (through campaigning), so it's better to have more of them.

3. Animal advocates don't just show people how bad animal suffering is; other arguments and approaches are used.

4. There's some (weak) evidence that animal advocacy messaging works to get people to reduce their consumption of animal products, and cruelty messaging seemed more effective than environmental and abolitionist/rights/antispeciesist messages. See also other reports and blog posts by Humane League Labs.


So, my understanding is, getting a significant proportion of people to stop eating meat might make them more likely to exhibit moral growth by caring about other animals, which would be useful for one, unlikely to be used, alignment strategy.

How unlikely do you think this is? It's not just the AI safety community that will influence what safety features will go into AIs, but also possibly other policy makers, politicians, voters and corporations.

I'm wondering what your reasoning behind this is.

One reason could be just a matter of limited time and resources; advocates can move onto other issues when their higher priorities have been addressed. Another is that comparisons between more similar groups of individuals probably work better in moral arguments in practice, e.g. as mammals and birds receive more protections, it will become easier to advocate for fishes and invertebrates (although this doesn't stop us from advocating for these now). If more sentient animals have more protections, it will be easier to advocate for the protection of sentient AIs.

I'm concerned this may backfire as well. Perhaps people would, after becoming vegan, figure they have done a sufficiently large amount of good and thus be less likely to pursue other forms of altruism.

It's possible. This is self-licensing. Some responses:

Anecdotally, the students in the animal rights society at the University of Waterloo are also much more engaged in environmental activism than most students. Social justice advocates are often involved in multiple issues.

Human rights organizations seem to be increasing their support for animal protection (written by an animal and human rights advocate).

Support for human rights and welfare protections and for animal protections seems correlated, both in individuals and legally at the state level in the US.

Veg*nism seems inversely related to prejudice, dominance and authoritarianism generally.

There's evidence that randomly assigning people to participate in protests makes them more likely to participate in future protests.

As evidence, I heard that despite wealth having steeply diminishing returns to happiness, wealthy individuals give a smaller proportion of their money to charities.

Wealthier people might also be less compassionate on average:

https://www.psychologytoday.com/us/blog/the-science-behind-behavior/201711/why-people-who-have-less-give-more

https://www.scientificamerican.com/article/how-wealth-reduces-compassion/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5240617/ (the effect might be small)

Further, some EAs have a policy of donating 10% of their income, even if after donating 10% they still have far more money than necessary for living comfortably.

I would guess that EAs who donate a larger percentage of their income (and people who donate more of their income to EA-aligned charities) are more involved in the movement in other ways on average.

Imagine you heard about an alien civilization that was pivoting towards colonizing the stars. But most of these aliens received almost no moral recognition, and some of them were raised in inhumane conditions to be killed for trivial reasons for the other aliens. If I heard about this situation, I would be pretty concerned about what the aliens would do when they started colonizing the stars. I wouldn't be rooting for them by trying to prevent existential risk instead of trying to improve their values.


But of course, that's a description of our society. There are some additional details about our society that make me more hopeful about it, but it seems quite weird to say that improving our values in this way wouldn't be important.

I think this comment may help explain and direct you to further reading.

My understanding is that animal rights activists tend to spend their time showing people how bad conditions are, and I see no mechanism by which this would change people's terminal values.

Most people haven't thought much about animal welfare or rights, and being confronted with conditions can push them to do so. Also, activists don't just show people conditions; they also make arguments, often by analogy with companion animals or humans, which are effectively antispeciesist arguments.

Furthermore, getting people to reduce their consumption of animal products, however this is done (e.g. by improving substitutes), tends to make them less prone to the cognitive dissonance and rationalization that prevents them from recognizing the importance of nonhuman animals.

It could, a priori, be the case that improving animal rights now increases the probability that attitudes will be humane (to a given degree) in the far future at all; there's some concern with value lock-in with AGI, for example. Some changes could also be hard to reverse even if attitudes improve, e.g. spreading self-propagating suffering animals or artificially sentient beings into space.

And if human influence continues to increase over time (e.g. as we spread in space), then a delay in the progress of moral circle expansion could have effects that add up over time, too. To illustrate, suppose we have two (finite or infinite) sequences representing the amount of suffering in our sphere of influence at each point in time, but we make earlier progress on moral circle expansion in one so the amount of suffering in our sphere of influence is reduced by 1 at each step in that sequence compared to the other; or, the other sequence is a shift of the one with earlier moral circle expansion. With some choice of units, the sequences could look like 1, 2, 3, 4, 5, ..., n and 2, 3, 4, 5, ..., n+1, with the last value of each sequence appearing at time n. The sum of the differences between the two sequences is 1 + 1 + 1 + 1 + ... + 1 = n, which grows without bound as a function of n, so it could end up very large.
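A minimal way to write out the same calculation, with t indexing the time steps and n the length of the horizon (the same quantities and unit convention as in the paragraph above, where earlier moral circle expansion reduces suffering by one unit at each step):

\[
\sum_{t=1}^{n} \bigl[ (t+1) - t \bigr] \;=\; \sum_{t=1}^{n} 1 \;=\; n ,
\]

which is the sum of differences described above and grows without bound as n increases.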

It's also not crucial that the sum of the differences grow without bound, just that it's large.

I don't know that these are the scenarios they have in mind at 80,000 Hours, though.

To illustrate, suppose we have two (finite or infinite) sequences representing the amount of suffering in our sphere of influence at each point in time, but we make earlier progress on moral circle expansion in one so the amount of suffering in our sphere of influence is reduced by 1 at each step in that sequence compared to the other;

Just to say I really liked this point, which I think applies equally to focusing on the correct account of value (as opposed to who the value-bearers are, which is this point).

I'll just share a couple of resources from Sentience Institute here, as they are relevant to the original question and I didn't see the other commenters mention them:

"Why I prioritize moral circle expansion over artificial intelligence alignment"

"Social change v. food technology" in our "Summary of Evidence for Foundational Questions in Effective Animal Advocacy" (although this isn't directly relevant to the question, there are some relevant factors here, e.g. discussion of setpoints).

Thanks for posting this question! It seems pretty tricky to figure out the connections between short-term animal welfare and long-term value, and I'm glad you sparked more discussion on the subject.

Through the spread of more humane attitudes, this would increase the expected value of the future of humanity by 0.01-0.1%.

I don't know how 80k evaluates the expected value of the future of humanity in other cases, but to me that number seems small in a way that suggests to me they have already "priced in" the uncertainty you are seeing.

My guess is the figure is so small at least partly because of an assumption that the default expected value of the far future is high already. If this is the case, then someone who expects disvalue to be far more prominent in the future all else equal will consider this increase in humane values much more important, relatively speaking.

If it's referring to making current humans more humane by changing their terminal values, then it's not clear to me how this would occur. My understanding is that animal rights activists tend to spend their time showing people how bad conditions are, and I see no mechanism by which this would change people's terminal values.

Plenty of people think that slavery is bad now that it has been abolished. It is intuitively clear to me that when people lose the ability to rationalize something, they will tend to be more careful before endorsing it as good (especially if it causes a lot of suffering). Right now, few people explicitly care about animal suffering (besides maybe that of dogs and cats), but in a world where factory farming is remembered as a great crime of the past, I expect our attitudes to shift.

I think that in the future, people will stop acquiring food through the suffering of animals in factory farms anyway. This is because people will presumably be able to live in virtual realities and efficiently create virtual food without causing any suffering. Thoughts?

If we reach a point in history where humans can upload themselves and survive without the consumption of physical resources, the world will look different in so many ways that almost every cause area we think about will be totally unrecognizable. This is a thing that could someday happen, but that doesn't mean there's any less value in bringing about the end of factory farming much sooner.
