I'm a managing partner at AltX, an EA-aligned quantitative crypto hedge fund. I previously earned to give as a Quant Trading Analyst at DRW. In my free time, I enjoy reading, discussing moral philosophy, and exploring Wikipedia rabbit holes.
(Disclaimer: I take RP's moral weights at face value, and am thus inclined to defend what I consider to be their logical implications.)
Specifically, with respect to cause prioritization between global health and animal welfare, do you think the evidence we've seen so far is enough to conclude that animal welfare interventions should most likely be prioritized over global health?
In "Worldview Diversification" (2016), Holden Karnofsky wrote that "If one values humans 10-100x as much [as chickens], this still implies that corporate campaigns are a far better use of funds (100-1,000x) [than AMF]." In 2023, Vasco Grilo replicated this finding by using the RP weighs to find corporate campaigns 1.7k times as effective.
Let's say RP's moral weights are wrong by an order of magnitude, and chickens' experiences actually only have 3% of the moral weight of human experiences. Let's say further that some remarkably non-hedonic preference view is true, where hedonic goods/bads only account for 10% of welfare. Still, corporate campaigns would be an order of magnitude more effective than the best global health interventions.
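To make that robustness check explicit, here's a minimal back-of-envelope sketch in Python. It takes Grilo's ~1,700x figure as the baseline and applies the two hypothetical discounts above; both discount factors are illustrative assumptions from this comment, not empirical estimates:

```python
# Robustness check: start from the ~1,700x multiplier and apply
# the two hypothetical discounts described above.

baseline_multiplier = 1_700  # corporate campaigns vs. best global health interventions
moral_weight_discount = 0.1  # RP's chicken weight off by 10x (down to ~3% of a human's)
hedonic_discount = 0.1       # hedonic goods/bads account for only 10% of welfare

adjusted_multiplier = baseline_multiplier * moral_weight_discount * hedonic_discount
print(round(adjusted_multiplier))  # 17 -- still roughly an order of magnitude in favor
```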
While I agree with you that it would be premature to conclude with high confidence that global welfare is negative, I think the conclusions of RP's research with respect to cause prioritization still hold up after incorporating the arguments you've enumerated in your post.
I appreciate that, and I agree with you!
However, as far as I'm aware, EA-recommended family planning interventions do decrease the number of children people have. If these charities benefit farmed animals (and I believe they do), decreasing the human population is where these charities' benefits for farmed animals come from.
I've estimated that both MHI and FEM prevent on the order of 100 pregnancies for each maternal life they save. Unless my estimates are way too high (please let me know if they're wrong; I'm happy to update!), even if only a very small percentage of these pregnancies would have resulted in counterfactual births, both of these charities would still on net decrease the number of children people have.
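To illustrate the arithmetic, here's a small sketch in Python; the 2% counterfactual-birth rate is a deliberately conservative placeholder chosen for illustration, not part of my estimates:

```python
# Illustrative arithmetic for the net-fertility claim above.

pregnancies_prevented_per_life_saved = 100  # my rough estimate for MHI and FEM
counterfactual_birth_rate = 0.02            # placeholder: share of prevented pregnancies
                                            # that would otherwise have become births

births_prevented = pregnancies_prevented_per_life_saved * counterfactual_birth_rate
print(births_prevented)  # 2.0 -- even at 2%, more than one birth is prevented per
                         # maternal life saved, so the net effect on the number of
                         # children is clearly negative
```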
It’s noteworthy that if the procreation asymmetry is rejected, the sign of family planning interventions is the opposite of the sign of lifesaving interventions like AMF. Thus, those who support AMF might not support family planning interventions, and vice versa.
For what it's worth, both Holden and Jeff express considerable moral uncertainty regarding animals, while Eliezer does not. Continuing Holden's quote:
My own reflections and reasoning about philosophy of mind have, so far, seemed to indicate against the idea that e.g. chickens merit moral concern. And my intuitions value humans astronomically more. However, I don’t think either my reflections or my intuitions are highly reliable, especially given that many thoughtful people disagree. And if chickens do indeed merit moral concern, the amount and extent of their mistreatment is staggering. With worldview diversification in mind, I don’t want us to pass up the potentially considerable opportunities to improve their welfare.
I think the uncertainty we have on this point warrants putting significant resources into farm animal welfare, as well as working to generally avoid language that implies that only humans are morally relevant.
I agree with you that it's quite difficult to quantify how much Eliezer's views on animals have influenced the rationalist community and those who could steer TAI. However, I think the influence is significant: if Eliezer were a staunch animal activist, the discourse surrounding animal welfare in the rationalist community would be quite different. I elaborate on why I think this in my reply to Max H.
I apologize for phrasing my comment in a way that made you feel that way. I certainly didn't mean to insinuate that rationalists lack "agency and ability to think critically"; in fact, I think rationalists are better at this than almost any other group! I identify as a rationalist myself, have read much of the Sequences, and have been influenced on many subjects by Eliezer's writings.
I think your critique that my writing gave the impression that my claims were all self-evident is quite fair. Even I don't believe that. Please allow me to enumerate my specific claims and their justifications:
On whether aligned TAI would create a utopia for humans and animals: I think the arguments for pessimism, especially about the prospects for animals, are serious enough that it is very important for those who steer TAI to care about animals.
Thanks for describing your reasons. My criterion for moral patienthood is described by this Brian Tomasik quote:
When I realize that an organism feels happiness and suffering, at that point I realize that the organism matters and deserves care and kindness. In this sense, you could say the only "condition" of my love is sentience.
Many other criteria for moral patienthood which exclude animals have been proposed. These criteria always suffer from some combination of the following:
The most parsimonious definition of moral patient I've seen proposed is just "a sentient being". I don't see any reason why I should add complexity to that definition in order to exclude nonhuman animals. The only motivation I can think of for doing this would be to compromise on my moral principles for the sake of the pleasure associated with eating meat, which is untenable to a mind wired the way mine is.
Eliezer's perspective on animal consciousness is especially frustrating because of the real harm it's caused to rationalists' openness to caring about animal welfare.
Rationalists are much more likely than highly engaged EAs to either dismiss animal welfare outright, or simply not think about it because AI x-risk is "obviously" more important. (For a case study, compare how this author's post on fish farming was received on the EA Forum versus on LessWrong.) Eliezer-style arguments about the "implausibility" of animal suffering abound. Discussions of the implications of AI outcomes for farmed or wild animals (i.e. almost all currently existing sentient beings) are few and far between.
Unlike his overconfidence in physicalism and FDT, Eliezer's overconfidence in animals not mattering has serious real-world effects. His views have huge influence on rationalist culture, which in turn has significant influence on those who could steer future TAI. If the alignment problem is solved, it will be really important for those who steer future TAI to care about animals, and to be motivated to use TAI to improve animal welfare.
Agreed. I'm planning on writing up a post about it, but I'm very busy and I'd like the post to be extremely rigorous and address all possible objections, so it probably won't be published for a month or two.
Yes, I agree with that caveat.