80000 Hours says, "We think intense efforts to reduce meat consumption could reduce factory farming in the US by 10-90%. Through the spread of
more humane attitudes, this would increase the expected value of the future of humanity by 0.01-0.1%."
I'm having trouble understanding how reducing current meat consumption would increase the expected value of the future. I'm not entirely sure whether, by spreading humane attitudes, 80000 Hours is referring to current humans having more humane attitudes or far-future humans having more humane attitudes. Also, when it says "humane attitudes", I'm not sure whether it means actually changing people's terminal values to be more humane, or merely getting people to place a higher value on animal rights, without changing their terminal values, by better informing them of how bad conditions are.
If it's referring to making current humans more humane without changing their terminal values, then it's not clear to me how that would improve the far future. People in the far future could presumably have plenty of time to learn on their own what damage factory farming causes if for whatever reason factory farming is still in use.
If it's referring to making current humans more humane by changing their terminal values, then it's not clear to me how this would occur. My understanding is that animal rights activists tend to spend their time showing people how bad conditions are, and I see no mechanism by which this would change people's terminal values. And if they do change people's terminal values to be more humane, and this is what people would like, then I don't see why people in the far future wouldn't just change their terminal values to be more humane on their own.
I've looked around on the Internet for a while for answers to this question, but found none.
Thank you for the detailed response. Some responses to your points:
I'm having a hard time thinking of how technology could lock in our values. One possibility is that AGI would be programmed to value what we currently value with no ability to have moral growth. However, it's not clear to me why anyone would do this. People, as best as I can tell, value moral growth and thus would want AGI to be able to exhibit it.
There is the possibility that programming AGI to value only what we value right now, with no possibility of moral growth, would be technically easier. I don't see why this would be the case, though. Implementing people's CEV, as Eliezer proposed, would allow for moral growth. Narrow value learning, as Paul Christiano proposed, would presumably allow for moral growth if the AGI learns to avoid changing people's goals. AGI alignment via direct specification may be made easier by prohibiting moral growth, but the general consensus I've seen is that alignment via direct specification would be extremely difficult and thus improbable.
There's the possibility of people creating technology for the express purpose of preventing moral growth, but I don't know why people would do that.
As for totalitarian politics, it's not clear to me how they would stop moral growth. If there is anyone in charge, I would imagine they would value their personal moral growth and thus would be able to realize that animal rights are important. After that, I imagine the leader would then be able to spread their values onto others. I know little about politics, though, so there may be something huge I'm missing.
I'm also a little concerned that campaigning for animal rights may backfire. Currently, many people seem unaware of just how bad animal suffering is. Many people also love eating meat. If people become informed of the extent of animal suffering, then, to minimize cognitive dissonance, I'm concerned they will stop caring about animals rather than stop eating meat.
So, my understanding is that getting a significant proportion of people to stop eating meat might make them more likely to exhibit moral growth by caring about other animals, which would be useful for one alignment strategy that is unlikely to be used. I'm not saying this is the entirety of your reasoning, but I suspect it would be much more efficient to work on AI alignment directly, either by doing alignment research or by convincing people that such alignment research is important.
Another possibility is to attempt to spread humane values by directly teaching moral philosophy. Does this sound feasible?
Do you have any situations in mind in which this could occur?
I'm wondering what your reasoning behind this is.
I'm concerned this may backfire as well. Perhaps people would, after becoming vegan, figure they have done a sufficiently large amount of good and thus be less likely to pursue other forms of altruism.
This might seem unreasonable: performing one good deed does not seem to increase the costs or decrease the benefits of performing other good deeds by much. However, it does seem to be how people act. As evidence, I've heard that despite wealth having steeply diminishing returns to happiness, wealthy individuals give a smaller proportion of their money to charity. Further, some EAs have a policy of donating 10% of their income, even if after donating 10% they still have far more money than necessary to live comfortably.