Ariel Simnegar

880 karma · Joined May 2022

Bio

I'm a managing partner at AltX, an EA-aligned quantitative crypto hedge fund. I previously earned to give as a Quant Trading Analyst at DRW. In my free time, I enjoy reading, discussing moral philosophy, and exploring Wikipedia rabbit holes.

Comments (143)

Yes, I agree with that caveat.

(Disclaimer: I take RP's moral weights at face value, and am thus inclined to defend what I consider to be their logical implications.)

Specifically with respect to cause prioritization between global health and animal welfare, do you think the evidence we've seen so far is enough to conclude that animal welfare interventions should most likely be prioritized over global health?

In "Worldview Diversification" (2016), Holden Karnofsky wrote that "If one values humans 10-100x as much [as chickens], this still implies that corporate campaigns are a far better use of funds (100-1,000x) [than AMF]." In 2023, Vasco Grilo replicated this finding by using the RP weighs to find corporate campaigns 1.7k times as effective.

Let's say RP's moral weights are wrong by an order of magnitude, and chickens' experiences actually only have 3% of the moral weight of human experiences. Let's say further that some remarkably non-hedonic preference view is true, where hedonic goods/bads only account for 10% of welfare. Still, corporate campaigns would be an order of magnitude more effective than the best global health interventions.
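
As a rough sanity check, taking Vasco's 1.7k multiplier as the baseline and treating the two discounts as independent: 1,700 × 0.1 (for the weaker moral weights) × 0.1 (for hedonic goods/bads being only 10% of welfare) = 17. Even with both discounts applied, corporate campaigns come out roughly 17x as effective.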

While I agree with you that it would be premature to conclude with high confidence that global welfare is negative, I think the conclusions of RP's research with respect to cause prioritization still hold up after incorporating the arguments you've enumerated in your post.

I appreciate that, and I agree with you!

However, as far as I'm aware, EA-recommended family planning interventions do decrease the number of children people have. If these charities benefit farmed animals (and I believe they do), those benefits come from decreasing the human population.

I've estimated that both MHI and FEM prevent on the order of 100 pregnancies for each maternal life they save. Unless my estimates are way too high (please let me know if they're wrong; I'm happy to update!), even if only a very small percentage of these pregnancies would have resulted in counterfactual births, both of these charities would still on net decrease the number of children people have.
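
To illustrate with purely hypothetical numbers: if just 5% of those ~100 prevented pregnancies would otherwise have become births, that's ~5 averted births per maternal life saved, which would outweigh the extra children a surviving mother might counterfactually go on to have (say, 1 or 2). The net effect on the number of children born would still be negative.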

It’s noteworthy that if the procreation asymmetry is rejected, the sign of family planning interventions is the opposite of the sign of lifesaving interventions like AMF. Thus, those who support AMF might not support family planning interventions, and vice versa.

For what it's worth, both Holden and Jeff express considerable moral uncertainty regarding animals, while Eliezer does not. Continuing Holden's quote:

My own reflections and reasoning about philosophy of mind have, so far, seemed to indicate against the idea that e.g. chickens merit moral concern. And my intuitions value humans astronomically more. However, I don’t think either my reflections or my intuitions are highly reliable, especially given that many thoughtful people disagree. And if chickens do indeed merit moral concern, the amount and extent of their mistreatment is staggering. With worldview diversification in mind, I don’t want us to pass up the potentially considerable opportunities to improve their welfare.

I think the uncertainty we have on this point warrants putting significant resources into farm animal welfare, as well as working to generally avoid language that implies that only humans are morally relevant.

I agree with you that it's quite difficult to quantify how much Eliezer's views on animals have influenced the rationalist community and those who could steer TAI. However, I think the influence is significant: if Eliezer were a staunch animal activist, the discourse surrounding animal welfare in the rationalist community would be different. I elaborate upon why I think this in my reply to Max H.

I apologize for phrasing my comment in a way that made you feel that way. I certainly didn't mean to insinuate that rationalists lack "agency and ability to think critically"; I actually think rationalists are better at this than almost any other group! I identify as a rationalist myself, have read much of the sequences, and have been influenced on many subjects by Eliezer's writings.

I think your critique that my writing gave the impression that my claims were all self-evident is quite fair. Even I don't believe that. Please allow me to enumerate my specific claims and their justifications:

  1. Caring about animal welfare is important (99% confidence): Here's the justification I wrote to niplav. Note that this confidence is greater than my confidence that animal suffering is real. This is because I think moral uncertainty means caring about animal welfare is still justified in most worlds where animals turn out not to suffer.
  2. Rationalist culture is less animal-friendly than highly engaged EA culture (85% confidence): I think this claim is pretty evident, and it's corroborated here by many disinterested parties.
  3. Eliezer's views on animal welfare have had significant influence on rationalist culture's views of animal welfare (75% confidence):
    1. A fair critique is that sure, the sequences and HPMOR have had huge influence on rationalist culture, but the claim that Eliezer's views in domains that have nothing to do with rationality (like animal welfare) have had outsize influence on rationalist culture is much less clear.
    2. My only pushback is the experience I've had engaging with rationalists and reading LessWrong, where I've just seen rationalists reflecting Eliezer's views on many domains other than "rationality: A-Z" over and over again. This very much includes the view that animals lack consciousness. Sure, Eliezer isn't the only influential EA/rationalist who believes this, and he didn't originate that idea either. But I think that in the possible world where Eliezer was a staunch animal activist, rationalist discourse around animal welfare would look quite different.
  4. Rationalist culture has significant influence on those who could steer future TAI (80% confidence):
    1. NYT: "two of the world’s prominent A.I. labs — organizations that are tackling some of the tech industry’s most ambitious and potentially powerful projects — grew out of the Rationalist movement...Elon Musk — who also worried A.I. could destroy the world and met his partner, Grimes, because they shared an interest in a Rationalist thought experiment — founded OpenAI as a DeepMind competitor. Both labs hired from the Rationalist community."
    2. Sam Altman: "certainly [Eliezer] got many of us interested in AGI, helped deepmind get funded at a time when AGI was extremely outside the overton window, was critical in the decision to start openai, etc".

On whether aligned TAI would create a utopia for humans and animals, I think the arguments for pessimism, especially about the prospects for animals, are serious enough that having TAI steerers care about animals is very important.

Thanks for describing your reasons. My criterion for moral patienthood is described by this Brian Tomasik quote:

When I realize that an organism feels happiness and suffering, at that point I realize that the organism matters and deserves care and kindness. In this sense, you could say the only "condition" of my love is sentience.

Many other criteria for moral patienthood which exclude animals have been proposed. These criteria always suffer from some combination of the following:

  1. Arbitrariness. For example, "human DNA is the criterion for moral patienthood" is just as arbitrary as "European DNA is the criterion for moral patienthood".
  2. Exclusion of some humans. For example, "high intelligence is the criterion for moral patienthood" excludes people who have severe mental disabilities.
  3. Exclusion of hypothetical beings. For example, "human DNA is the criterion for moral patienthood" would exclude superintelligent aliens and intelligent conscious AI. Also, if some people you know were unknowingly members of a species which looked/acted much like humans but had very different DNA, they would suddenly become morally valueless.
  4. Collapsing to sociopathy or nihilism. For example, "animals don't have moral patienthood because we have power over them" is just nihilism, and if a person used that justification to treat other humans the way we treat farmed animals, they'd be locked up.

The most parsimonious definition of moral patient I've seen proposed is just "a sentient being". I don't see any reason why I should add complexity to that definition in order to exclude nonhuman animals. The only motivation I can think of for doing this would be to compromise on my moral principles for the sake of the pleasure associated with eating meat, which is untenable to a mind wired the way mine is.

Eliezer's perspective on animal consciousness is especially frustrating because of the real harm it's caused to rationalists' openness to caring about animal welfare.

Rationalists are much more likely than highly engaged EAs to either dismiss animal welfare outright, or just not think about it since AI x-risk is "obviously" more important. (For a case study, compare how this author's post on fish farming was received on the EA Forum versus LessWrong.) Eliezer-style arguments about the "implausibility" of animal suffering abound. Discussions of the implications of AI outcomes for farmed or wild animals (i.e. almost all currently existing sentient beings) are few and far between.

Unlike Eliezer's overconfidence in physicalism and FDT, his overconfidence in animals not mattering has serious real-world effects. Eliezer's views have huge influence on rationalist culture, which has significant influence on those who could steer future TAI. If the alignment problem is solved, it'll be really important for those who steer future TAI to care about animals, and be motivated to use TAI to improve animal welfare.

I think the best reason is that it's not within the Overton window :)

Agreed. I'm planning on writing up a post about it, but I'm very busy and I'd like the post to be extremely rigorous and address all possible objections, so it probably won't be published for a month or two.
