1068 · Joined Jun 2020


Contractor RA to Peter Singer, Princeton


I thought about using AI/ML to help alternative protein research, and about whether it is better for talent to work for one of the alt-protein companies or to work independently. I much prefer to see independent work on this, because the research results will be much more likely to be shared across the whole industry. If I were a funder, I would only consider funding this kind of work if it will be open source.

To me, the scariest implication of octopus farming is that it updates me downward, maybe significantly, on the probability that factory farming will be eliminated/replaced entirely. If humans are so eager to develop a type of factory farming that is this difficult and inefficient, I am afraid I just can't see how we can guarantee that factory farming won't continue into the far future (yes, I am talking about the type of "far" that the average longtermist speaks of).

Hi Trevor, I am really interested in the link behind "it could quickly get much worse", but you seem to have pasted the wrong thing there.

"but I think that compassion needs to be extended to all the people who have been impacted by the FTX crisis"

I agree with this. But I think compassion needs to be extended even further, to every sentient being who might be worse off because of this event (though this assumes that EA suffering from this event means less good done in the world, and I recognize that there are people who seem to genuinely think the world would be better if EA disappeared). The implication is that we need to think about whether these mockeries and this passionate outrage are good for those sentient beings.

It might be the case that some EAs who engage in mockery or passionate outrage are doing it as a form of damage control. But from a longer-term perspective, I am not sure these mechanisms are net good, for the reasons below.

On a more general level, it seems to me that trusting and following our social norms systematically and reliably leaves out most sentient beings who deserve our compassion (future people and future nonhuman animals, nonhuman animals in general, potential digital beings). And anger/disgust as mechanisms for "enforcing ethics" seem to me particularly dubious, if not harmful: people often also direct anger/disgust at those who fail to show anger/disgust toward whatever most people think deserves it, thereby reinforcing norms that are already widely held rather than extending them. Also, people can observe which things attract anger/disgust and which do not, and I believe that observation will inevitably lead some, if not many, people to use it as evidence for how bad/important/urgent an issue is.

On a personal level, I have tried to move away from using anger or disgust to regulate my moral thinking and my actions, or as mechanisms to change the world, and I seem to have had some success. I used to be extremely angry with people who know about the suffering of factory-farmed animals but still choose to keep fueling it, and moderately disgusted with farmed animal advocates who somehow think the suffering of animals in nature is okay. But I no longer feel these emotions as strongly as I used to. And I have to admit, I don't feel much emotional anger or disgust this time, even though I think something very wrong likely has happened.


UPDATE: I saw Wixela's comment above after finishing typing this. I agree with Wixela that EAs are sometimes better off feeling what we genuinely feel, especially given that EAs already have pretty widespread and strong norms about controlling emotions and letting rationality correct our instincts/emotions. But I stand by the view that anger/disgust as mechanisms of "enforcing ethics" are pretty dubious.

I think "easier to say/write" is not a good enough reason (certainly much weaker than the concern about fighting two philosophical battles, or about weirding people away) to always say/write "people"/"humanity".

My understanding is that when it was proposed to use humans/people/humanity in place of men/man/mankind to refer to humans generally, there was some pushback. I haven't checked the full details, but I can imagine some people saying that man/mankind was simply easier to say/write, being shorter and more commonly known at the time. And I am pretty sure that "mankind" not being gender-neutral is what eventually led feminists, literature writers, and even etymologists to support using "humanity" instead.

You mentioned that "the two [meanings] can get muddled". For me, that's a reason to use "sentient beings" instead of "people". This was actually the reason some etymologists gave when they supported using "humanity" in place of "mankind": by their time, the word "man/men" had come to mean both "humans" and "male humans", making it possible, if not likely, for anything said about the whole of humanity to be read as having nothing to do with women.

And just as we needed to ask whether "mankind", given the now most common meaning of the word "man", fails to explicitly include women as stakeholders, we need to ask whether "people" is a good umbrella term for all sentient beings. It seems to me that it clearly is not.

I am glad that you mentioned the word "person". Even though the same problems exist insofar as people think the word "person" can only apply to humans (which arguably is most people), the problems are less severe. For instance, some animal advocates are trying to have some nonhuman animals granted legal personhood (and some environmentalists have sought legal personhood for natural entities, sometimes successfully). My current take is that "person" is better, but still not ideal, as it is quite clear that most people can now only think of humans when they see/hear "person".

I agree that "few philosophical longtermists would exclude nonhuman animals from moral consideration". But I took it literally, because I do think there is at least one who would. Eliezer Yudkowsky, whom some might doubt counts as a philosopher/longtermist, holds the view that pigs cannot feel pain (and by choosing pigs he is basically saying no nonhuman animals can feel pain). It also seems to me that some "practical longtermists" I have come across omit or heavily discount nonhuman animals in their longtermist pictures. For instance, Holden Karnofsky said in a 2017 article on radical empathy that his "own reflections and reasoning about philosophy of mind have, so far, seemed to indicate against the idea that e.g. chickens merit moral concern. And my intuitions value humans astronomically more." (But he accepts that he could be wrong and that there are smart people who think he is wrong about this, which is why he is willing to have OP's "neartermist" side help nonhuman animals.) And it seems to me that the claim is still mostly right, because most longtermists are EAs or LessWrongers or both. But I expect some non-EA/LessWrong philosophers to become longtermists in the future (presumably this is what the advocates of longtermism want, even those who only care about humans), and I also expect some of them not to care about animals.

Also, excluding nonhumans from longtermism philosophically is different from excluding them from the longtermist project. The fact that there isn't yet a single project supported by longtermist funders that works on animal welfare under the longtermist worldview makes the philosophical inclusion rather cold comfort, if not more depressing. (I mean, not even a single one, which could easily have been justified by worldview/intervention diversification. And I can assure you that this is not because of a lack of proposals.)

P.S. I sometimes have to say "animals" instead of "nonhuman animals" in my writing so as not to freak people out or make them think I am an extremist. But this clearly suffers from the same problem I am complaining about.

I want to raise an issue I saw in China. Some people invented a way of washing crayfish (and increasingly crabs too) using an ultrasonic bath. As you can see here (and here), the animals show fierce escape behavior as soon as the ultrasound is turned on, and they try to stay out of the water while the ultrasound is passing through it, but they have to stay in the bath for 2-5 minutes. This seems extremely inhumane if these animals feel pain.

I understand that Ord, and MacAskill too, have given similar explanations, multiple times each. But I disagree that the terminology is not biased - it still leads a lot of readers/listeners to focus on the future of humans if they haven't seen/heard these caveats, and maybe even if they have.

I don't think the fact that, among organisms, only humans can help other sentient beings justifies almost always using language like "future of humanity", "future people", etc. Take, for example, "future people matter morally just as much as people alive today". Whether this sentence should be said with "future people" or "future sentient beings" shouldn't have anything to do with whether humans/people will be the only beings who can help other sentient beings. It just looks like a strategic move to reduce the weirdness of longtermism, or to avoid fighting two philosophical battles (which are probably sound reasons, but I also worry that this practice locks in humancentric/speciesist values). So yes, until AGI comes only humans can help other sentient beings, but the future that matters should still be a "future of sentient beings".

And I am not convinced that the terminology hasn't served speciesism/humancentrism in the community. As a matter of fact, when some prominent longtermists try to evaluate the value of the future, they focus on how many future humans there could be and what could happen to them. Holden Karnofsky and some others took this further and discussed digital people. MacAskill wrote about the number of nonhuman animals in the past and present in WWOTF, but didn't discuss how many of them there will be in the future and what might happen to them.

Speaking of synaptic connections, there's another problem: human adults have fewer of them than the average human infant. Peter Huttenlocher "showed that synaptic density in the human cerebral cortex increases rapidly after birth, peaking at 1 to 2 years of age, at about 50% above adult levels. It drops sharply during adolescence then stabilizes in adulthood, with a slight possible decline late in life."

But that apparently didn't lead most humans to think that human babies are more sentient or have more complex experiences. Actually, we pretty much thought the reverse: until the mid-1980s, most doctors did not use anesthesia in operations on newborn human infants.

I think another reason insect farming should be given more attention is its long-term implications. Of all forms of animal farming, it seems the most likely to be done on other planets or space stations, and the least likely to be replaced by alternative proteins (and, by the way, some people view insects themselves as an alternative protein).
