I originally wanted to write this as a comment on the post I’m referring to, written by Richard Ngo (which is also on LessWrong), but I realized that a comment would not be ideal. There are 3 reasons for this: 1. It’s too long. 2. Some time has passed since the original post. 3. I feel it’s important enough to be a post of its own. Therefore, I am making my first post on the EA Forum (after lurking for at least 4 years).
Before I begin, I would like to state that most of the arguments below are not post-hoc, as Richard worried. I had questioned the impact of personal diet versus the impact of career before engaging with any EAs, and this questioning, to different extents at different times, affected my "level of veganness" in the earlier days. Also, nothing here related to AI and longtermism was made up in reaction to that post, or to any other EA Forum post; I have thought about these topics for years, and I currently work on AI ethics, focusing on animals (under the guidance of Peter Singer, as a contractor RA at Princeton).
I have four main points to express, which I chose to address based on my perception of what might be important disagreements I had with some common views in the comment sections:
- I have observed that EAs, including most animal welfare EAs, generally seem too willing to disregard considerations involving tiny probabilities when it comes to animals. This doesn’t seem right to me.
- People's doubts that their diets affect the values they hold and reveal seem odd to me. More below.
- Some EAs might be underestimating the severity of the problem of factory farming, both as it is happening now and, perhaps more so, in the long-term future. And I think changing one’s view on how big the problem of factory farming is will likely update some views on the question of diet. To limit the length of my post, I will only write about the long-term side of the story.
- It seems that quite a few people think the value signaling argument isn’t compelling - either that it doesn’t work, or that it could work both ways. I am not denying that it could work both ways, but I am going to give one example to address the doubt that it doesn’t work at all, explaining one specific way that, I believe, quite reasonably points to the conclusion that certain EAs’ diets might have major impacts on animals through the values they hold and communicate.
Part 1: Small chance, big impact, for animals
I have observed that EAs, including most animal welfare EAs, generally seem too willing to disregard considerations involving tiny probabilities when it comes to animals. This doesn’t seem right to me. It leads people to sometimes say things like "it feels unlikely that my diet affected my compassion/values/prioritization", and then dismiss the train of thought. But an unlikely pathway can still be important if the stakes of the outcome are large enough. In reaction to this, the short slogan I would offer is: it is mainly the expected value at stake that matters, not the size of the chance, and this applies to work on animal welfare too. Going with that, I argue that the things at stake for animals might be much larger than some might think, even if we focus only on factory farmed animals (and therefore ignore issues like wild animal suffering in space). I say this because I have observed (not just in this post) that people tend to think of animal welfare cause areas as "mid-termish" or even "short-termish", and therefore as relatively small in scale from a longtermist perspective (see part 3).
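To make the slogan concrete, here is a toy expected-value comparison. All numbers below are invented purely for illustration; they are not estimates I endorse.

```python
# Toy sketch: a "likely, modest-stakes" pathway vs. an "unlikely, huge-stakes"
# pathway. All probabilities and stakes are made up for illustration only.

def expected_value(probability: float, stakes: float) -> float:
    """Expected value = probability the pathway matters x what is at stake."""
    return probability * stakes

# Pathway A: quite likely, but affects a modest number of animals.
likely_small = expected_value(probability=0.5, stakes=1_000)

# Pathway B: very unlikely (e.g. diet -> values -> long-term trajectory),
# but with far larger stakes.
unlikely_huge = expected_value(probability=0.001, stakes=10_000_000)

# Even at a 0.1% chance, the large-stakes pathway dominates in expectation.
assert unlikely_huge > likely_small
print(likely_small, unlikely_huge)  # 500.0 10000.0
```

The point of the sketch is only structural: dismissing a pathway because "it feels unlikely" skips the multiplication that actually determines its importance.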
Part 2: Diet likely affects the values we hold and reveal - to an extent that matters.
First, there is some uncertainty over what we can conclude from the research on meat consumption and people’s attitudes toward animals. To me, it still seems plausible that eating less meat helps people care more about animals; even if we have doubts about the research quality, we shouldn’t conclude that the research gives us no update at all. And if it does update us, we should consider applying the principle laid out in part 1.

Second, to offer a humble anecdote, I feel that my diet shaped my thinking (not claiming that it was necessarily in a good way, although I tend to believe it helped me move away from certain moral and scientific blindspots). It was the key factor behind my constant thinking about animal welfare, which eventually led me to ideas at the intersection of animal welfare, longtermism, and AI, and eventually to a career on it (I planned to expand on this in other posts; in fact, I expected those other posts to be my first EA Forum posts, not this one, so spare me for not arguing at length here for why these intersections are important). I’m currently writing about AI’s potential harm to farmed animals through its use in factory farming and through algorithmic biases. I’ve written about AI making meat cheaper and therefore increasing the chance that factory farming will stay, and about AI identifying the word “chicken” more with pictures of cooked chicken than with pictures of living chickens, thereby reinforcing speciesism. If I instead ate meat, I would be unlikely to think about these things (it would be more difficult for me than it is while plant-based), and even if I did think about them, I would be less likely to see these patterns as harmful. This further shows that, in my case, my diet shapes my thinking heavily. (I am not centering my claim on these research findings being important ones, or my conclusions being the right ones. My core claim here is that my diet did shape my thinking.)
Third, I find it even less plausible that our diets won’t have impacts through signaling effects. Maybe some people are arguing that the proposed impact path is unconvincing, which I will address in part 4. But it seems clear to me that diet does at least partially signal our values (to humans and to AI). As another anecdote, when I choose whom to discuss AI’s effects on animals’ lives with, I screen people very carefully to avoid backfiring and information hazards. One very important selection criterion I use is whether people are veg*ns or reducetarians. Of course, I do not think that people who practice neither of these diets are untrustworthy, but if a person practices these diets for ethical reasons, I gain a high level of confidence within a short period of time.
Part 3: The extent of suffering of factory farming in the long term
I assume that most EAs have heard the numbers and the conditions of terrestrial vertebrate animals, so I won’t repeat them here. But I would like to point out the parts of the picture that are often missed: fish and invertebrates. My intention isn’t even to talk about their numbers and conditions - those can easily be searched - but rather to point out what we could be missing if we omit them from our mental picture.
One thing that can easily be missed if fish and invertebrates are excluded, besides the much bigger numbers now, is that including them might substantially change our views on how likely factory farming is to stay on Earth for an extremely long time, or even spread to the whole galaxy. I am writing a post on this topic (which I intended to be my first ever EA Forum post, and which should be ready within 10 days). The short summary is that while terrestrial vertebrates are indeed extremely difficult and costly to ship from one planet to another or to space structures, and are highly land- and resource-intensive under current technologies, fish, insects, and aquatic invertebrates are not, and have therefore been proposed or advocated by various space visionaries and researchers as the animal protein of choice for space colonies (spare me for delaying references to that post). A related issue is cultivated meat. Most of the longtermists I have heard discuss factory farming in the long-term future think that cultivated meat will very likely replace all factory farming (some even said certainly; so far I have heard only one who didn’t) - which raises the question, again, of whether the remaining possibility is already bad enough. The problem is that if fish and invertebrates are also included, cultivated meat’s relative advantages become much smaller (land use, calorie/protein conversion ratios, pollution, etc.). This led me to conclude in that post (along with other reasons) that there is a substantial chance that factory farms will spread to the whole galaxy, and therefore that the expected number of beings who will suffer in factory farms in the long-term future can be astronomical.
Please bear with me even if your view is that humanity won’t expand beyond Earth: the inclusion of fish and invertebrates also increases the chance that factory farming will stay on Earth for an extremely long time (i.e. as long as Earth is habitable for humans), so this section and the following one are still relevant for you.
Part 4: An example of an impact path from value shaping and signaling
What does factory farming staying on Earth for an extremely long time, or spreading to the whole galaxy, have to do with the current dietary behaviors of EAs? The importance lies in pointing out one example of an impact path through which diets shaping and revealing our values will actually affect the world in substantial ways. My thinking is roughly as follows: it might be rather uncompelling to talk about the value signaling of our diet if we believed that the vast majority of the impact of this signaling lies outside of factory farming (in the long term), because that requires speculative and hard-to-verify arguments about how our value signaling will affect those future events. But the value signaling argument seems much stronger if factory farming is here to stay, or even to expand into space. And it also happens to relate to AI. In a paper we are drafting, Peter Singer and I argue that AI will be used by the factory farming industry and will reshape it, including by increasing stocking density (more suffering), by making mistakes in treating animals, and by making it more likely that factory farming will stay. It’s hard to discuss in detail how work in AI alignment and AI governance relates to animals and factory farming, so I will just give two examples of the issues I think about, and then conclude that the values AI alignment and AI governance people hold and reveal could affect the lives of animals significantly. One example relates to the alignment problem:
P1: Some AI companies talk about their AI being able to improve animal welfare. Even if we charitably assume that they actually put this objective into their AI, how can we make sure the AI is trying to increase actual welfare levels, rather than welfare indicators (which, as far as I know, have always been defined by humans, with the “ground truth” of their fulfillment always labeled by humans)? Another example relates to governance and ethics:
P2: If an industry is considered morally wrong (at least by most utilitarians and EAs), how should we govern AI to help replace this industry, considering that AI, as Peter and I argue, will affect how long the industry stays? (Not just by making factory farming more efficient - AI could also help research on plant-based and cultivated meat.)
I set my example around factory farming because it feels less speculative and more intuitive, but maybe not everyone shares those concerns. So I also thought of other considerations on how the values AI people hold and reveal toward animals might affect the lives of animals generally. None of these considerations relate to diets unless diets affect the values one holds or reveals, so at this point you might want to refer back to part 2 if you are not convinced. In any case, my considerations are:
P3: AI alignment researchers can choose the scope of beings the AI is aligned with. For instance, Stuart Russell stated explicitly, in his book Human Compatible and in his interview with 80,000 Hours, that he wants AI to be aligned with humanity only, with animals’ interests counted only insofar as humans care about them. To imagine how this issue affects the lives of animals, think about the difference between an AI designed to align with human-defined, human-evaluated animal welfare indicators, and an AI designed to align with the actual welfare of animals.
P4: One direction of AI alignment is to let AI infer human preferences/values (inverse reinforcement learning, debate, coherent extrapolated volition, etc.). I have no AI background, am looking at this as an outsider, and may well be wrong. But I honestly find it hard to believe that AI alignment researchers won’t have a greater-than-average input into what kinds of preferences/values the AI will learn.
P5: Maybe AI doesn’t even need to be made to explicitly learn from humans in order to learn harmful values toward animals. Often, how something is designed - or the mere fact that it is designed - reflects human values. For example, the fact that most AI giants in the world are working on projects using AI to help factory farms lower costs is a good example of AI researchers’ values directly affecting animals. Also, human language and image labeling contain our speciesist patterns, which will therefore be reflected in language models, search algorithms, image recognition systems, etc. (I wrote a paper on these issues with Peter Singer and two other professors, now on arXiv). I reckon that AI researchers will form only a minuscule portion of this data, but it also seems likely to me that these people will directly affect how these AIs behave toward animals by choosing which problems to fix and which not to. This makes it an example of P1.
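To illustrate the kind of association bias P5 describes, here is a minimal sketch of how one might probe it with cosine similarity over word embeddings. The vectors below are invented toy numbers, not real model embeddings; a real probe would use embeddings from an actual language model.

```python
import math

# Toy probe for speciesist association bias in word embeddings.
# The vectors are hand-made for illustration; each has two hypothetical
# dimensions: [food-ness, living-animal-ness].

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

toy_embeddings = {
    "chicken": [0.9, 0.3],  # the worry: "chicken" sits closer to food words
    "meal":    [1.0, 0.0],
    "animal":  [0.0, 1.0],
}

food_sim = cosine(toy_embeddings["chicken"], toy_embeddings["meal"])
animal_sim = cosine(toy_embeddings["chicken"], toy_embeddings["animal"])

# In these toy numbers, "chicken" associates more with food than with animals.
assert food_sim > animal_sim
```

The design choice researchers face is exactly the one P5 points at: once such a skew is measured, someone decides whether it counts as a problem worth fixing.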
No conclusions section, but do me a favor?
To end, I would like to find some people to guide me in thinking about how AGI might infer our preferences and values from our diets. I look forward to discussions.