
Ariel Simnegar 🔸

2471 karma

Bio


I'm a managing partner at AltX, an EA-aligned quantitative crypto hedge fund. I previously earned to give as a Quant Trading Analyst at DRW. In my free time, I enjoy reading, discussing moral philosophy, and dancing bachata and salsa.

My substack: https://arielsimnegar.substack.com/

Comments

Thanks for this; I agree that "integrity vs impact" is a more precise cleavage point for this conversation than "cause-first vs member-first".

Would you sometimes advocate for prioritizing impact (e.g. SUM shipping resources towards interventions) over alignment within the EA community?

Unhelpfully, I'd say it depends on the tradeoff's details. I certainly wouldn't advocate going all-in on one to the exclusion of the other. But to give one example of how I think, I'd currently prefer that a marginal $1M be given to EA Funds' Animal Welfare Fund than used to establish a foundation to investigate and recommend improvements to EA's epistemics.

It seems that I credit the EA community with a lot more "alignment/integrity" than you do. This could arise from empirical disagreements, different definitions of "alignment/integrity", and/or different expectations we place on the community.

For example, the evidence Elizabeth presented of a lack of alignment/integrity in EA is that some veganism advocates on Facebook incorrectly claimed that veganism doesn't have tradeoffs, and weren't corrected by other community members. While I'd prefer people say true things rather than false things, especially when they affect people's health, this just doesn't feel important enough to update on. (I've also just personally never heard any vegan advocate say anything like this, so it feels like an isolated case.)

One thing that could change my mind is learning about many more cases to the point that it's clear that there are deep systemic issues with the community's epistemics. If there's a lot more evidence on this which I haven't seen, I'd love to hear about it!

Thanks for the interesting conversation! Some scattered questions/observations:

  • Your conversation reminds me of the debate about whether EA should be cause-first or member-first.
    • My self-identification as EA is cause-first: So long as the EA community puts resources broadly into causes which maximize the impartial good, I'd call myself EA.
    • Elizabeth's self-identification seems to me to be member-first, given that her self-identification seems more based upon community members acting with integrity towards each other than about whether or not EA is maximizing the impartial good.
    • This might explain the difference between my and Elizabeth's attitudes about the importance of some EAs claiming that veganism doesn't entail tradeoffs without being corrected. I think being honest about health tradeoffs is important, but I'm far more concerned with shutting up and multiplying by shipping resources towards the best interventions. However, putting on a member-first hat, I could understand why from Elizabeth's perspective, this is so important. Do you think this is a fair characterization?
  • I'd love to understand more about the way Elizabeth reasons about the importance of raising awareness of veganism's health tradeoffs relative to vegan advocacy:
    • If Elizabeth is trying to maximize the impartial good, she should probably be far more concerned about an anti-veganism advocate on Facebook than about a veganism advocate who (incorrectly) denies veganism's health tradeoffs. Of course everyone should be transparent about health tradeoffs. However, if Elizabeth is being scope-sensitive about the dominance of farmed animal effects, I struggle to understand why so much attention is being placed on veganism's health tradeoffs relative to vegan advocacy.
    • By analogy, this feels like sounding an alarm because EA's kidney donation advocates haven't sufficiently acknowledged its potential adverse health effects. Of course everyone should acknowledge that. But when also considering the person being helped, isn't kidney donation clearly the moral imperative?

(I didn't downvote your comment, by the way.)

I feel bad that my comment made you (and a few others, judging by your comment's agreevotes) feel bad.

As JackM points out, that snarky comment wasn't addressing views which give very low moral weights to animals due to characteristics like mind complexity, brain size, and behavior, which can and should be incorporated into welfare ranges. Instead, it was specifically addressing overwhelming hierarchicalism, which is a view which assigns overwhelmingly lower moral weight based solely on species.

My statement was intended to draw a provocative analogy: There's no theoretical reason why one's ethical system should lexicographically prefer one race/gender/species over another, based solely on that characteristic. In my experience, people who have this view on species say things like "we have the right to exploit animals because we're stronger than them", or "exploiting animals is the natural order", which could have come straight out of Mein Kampf. Drawing a provocative analogy can (sometimes) force a person to grapple with the cognitive dissonance from holding such a position.

While hierarchicalism is common among the general public, highly engaged EAs generally don't even argue for hierarchicalism because it's just such a dubious view. I wouldn't write something like this about virtually any other argument for prioritizing global health, including ripple effects, neuron count weighting, denying that animals are conscious, or concerns about optics.

(Just wanted to say that your story of earning to give has been an inspiration! Your episode with 80k encouraged me to originally enter quant trading.)

Given your clarification, I agree that your observation holds. I too would have loved to hear someone defend the view that "animals don't count at all". I think it's somewhat common among rationalists, although the only well-known EA-adjacent individuals I know who hold it are Jeff, Yud, and Zvi Mowshowitz. Holden Karnofsky seems to have believed it once, but later changed his mind.[1]

As @JackM pointed out, Jeff didn't really justify his view in his comment thread. I've never read Zvi justify that view anywhere either. I've heard two main justifications for the view, either of which would be sufficient to prioritize global health:

Overwhelming Hierarchicalism

Solely by virtue of our shared species, helping humans may be lexicographically preferred to helping animals, or perhaps human preferences should be given an enormous multiplier.

I use the term "overwhelming" because depending on which animal welfare BOTEC is used, if we use constant moral weights relative to humans, you'd need a 100x to 1000x multiplier for the math to work out in favor of global health. (This comparison is coherent to me because I accept Michael St. Jules' argument that we should resolve the two envelopes problem by weighing in the units we know, but I acknowledge that this line of reasoning may not appeal to you if you don't endorse that resolution.)
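To make the arithmetic behind "overwhelming" concrete, here's a minimal sketch. All numbers are illustrative assumptions, not figures from any specific BOTEC; I've picked 500x from the middle of the 100x–1000x range:

```python
# Illustrative BOTEC: how large a species-based multiplier would
# overwhelming hierarchicalism need for global health to win at the margin?
# All values are hypothetical assumptions for illustration only.

ghd_value_per_dollar = 1.0      # normalized global health baseline
animal_value_per_dollar = 500.0 # assumed cost-effectiveness using constant
                                # moral weights relative to humans (100x-1000x range)

# The multiplier on human interests a hierarchicalist must assert,
# based solely on species, for global health to come out ahead:
required_human_multiplier = animal_value_per_dollar / ghd_value_per_dollar
print(required_human_multiplier)  # 500.0 under these assumptions
```

The point of the sketch is that the multiplier must be justified purely by species membership, with no appeal to welfare-range characteristics, since those are already priced into the constant moral weights.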

I personally find overwhelming hierarchicalism (or any form of hierarchicalism) to be deeply dubious. I write more about it here, but I simply see it as a convenient way to avoid confronting ethical problems without having the backing of any sound theoretical justification. I put about as much weight on it as the idea that the interests of the Aryan race should be lexicographically preferred to the interests of non-Aryans. There's just no prior for why that would be the case.

Denial of animal consciousness

Yud and maybe some others seem to believe that animals are most likely not conscious. As before, they'd have to be really certain that animals aren't conscious to endorse global health here. Even if there's a 10% chance that chickens are conscious, given the outsize cost-effectiveness of corporate campaigns if they are, I think they'd still merit a significant fraction of EA funding. (Probably still more than they're currently receiving.)
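The expected-value logic here can be sketched in a few lines. The 10% credence comes from the paragraph above; the cost-effectiveness multiple is a hypothetical placeholder, not a figure from any published estimate:

```python
# Expected-value sketch: even a low credence in chicken consciousness can
# leave corporate campaigns competitive with global health.
# The campaign multiple below is an assumed placeholder value.

p_conscious = 0.10                   # 10% credence that chickens are conscious
campaign_multiple_if_conscious = 100 # assumed cost-effectiveness vs. GHD if conscious
ghd_baseline = 1.0                   # normalized global health baseline

expected_campaign_value = p_conscious * campaign_multiple_if_conscious
print(expected_campaign_value > ghd_baseline)  # True under these assumptions
```

So for the denial-of-consciousness route to favor global health, the credence in animal consciousness must be driven very low indeed, not merely below 50%.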

I think it's fair to start with a very strong prior that at least chickens and pigs are probably conscious. Pigs have brains with all of the same high-level substructures, which are affected the same way by drugs/painkillers/social interaction as humans' are, and they act in all of the same ways that humans would when confronted with situations of suffering and terror. It would be really surprising a priori if what was going on were merely a simulacrum of suffering with no actual consciousness behind it. Indeed, the vast majority of people seem to agree that these animals are conscious and deserve at least some moral concern. I certainly remember being able to feel pain as a child, when I was probably less intelligent than an adult pig.

Apart from that purely intuitive prior, while I'm not a consciousness expert at all, the New York Declaration on Animal Consciousness says that "there is strong scientific support for attributions of conscious experience to other mammals and to birds". Rethink Priorities' and Luke Muehlhauser's work for Open Phil corroborate that. So Yud's view is also at odds with much of the scientific community and other EAs who have investigated this.

All of this is why Yud's Facebook post needed to meet a very high burden of proof to convince me. Instead, he just kept explaining what his model (a higher-order theory of consciousness) implies without actually justifying the model. He asserted some deeply unorthodox and unintuitive ideas (like pigs not being conscious), admitted no moral uncertainty about them, and made no attempt to justify them. So I didn't find anything about his Facebook post convincing.

Conclusion

To me, the strongest reason to believe that animals don't count at all is because smart and well-meaning people like Jeff, Yud, and Zvi believe it. I haven't read anything remotely convincing that justifies that view on the merits. That's why I didn't even mention these arguments in my follow-up post for Debate Week.

Trying to be charitable, I think the main reasons why nobody defended that view during Debate Week were:

  • They didn't have the mental bandwidth to deal with an audience I suspect would have been hostile. Overwhelming hierarchicalism is very much against the spirit of radical empathy in EA.
  • They may have felt like most EAs don't share the basic intuitions underlying their views, so they'd be talking to a wall. The idea that pigs aren't conscious might seem very intuitive to Eliezer. To me, and I suspect to most people, it seems wild. I could be convinced, but I'd need to see way more justification than I've seen.
  1. ^

    In 2017, Holden's personal reflections "indicate against the idea that e.g. chickens merit moral concern". In 2018, Holden stated that "there is presently no evidence base that could reasonably justify high confidence [that] nonhuman animals are not 'conscious' in a morally relevant way".

@AGB 🔸 would you be willing to provide brief sketches of some of these stronger arguments for global health which weren’t covered during the Debate Week? Like Nathan, I’ve spent a ton of time discussing this issue with other EAs, and I haven’t heard any arguments I’d consider strong for prioritizing global health which weren’t mentioned during Debate Week.

As you wrote, there's no view on this I'm confident in. But speaking from having had certain enduring experiences of suffering, like being very sick for weeks on end, or being bullied at school for years, at times life can just be enduringly awful. Yes, one can develop certain coping mechanisms to make the bad times easier to bear, but if the bad times are bad enough, I think they do just make life consistently far worse. Evidence from an earlier post of mine:

  • Extreme pain or discomfort reduces health-related quality of life by 41%.
  • Nerve damage results in a loss of health-related quality of life between 39% for diabetes-caused nerve damage and 85% for failed back surgery syndrome.
  • Suffering from cluster headaches is associated with greatly increased suicidality.
  • Patients suffering from chronic musculoskeletal pain would rather take a gamble with a ⅕ chance of dying and a ⅘ chance of being cured than continue living with their condition.

I also think that many coping mechanisms (e.g. "I'm suffering for a cause! Or for my children!" etc.) are mostly possible because the suffering being has higher-order brain function which allows those complex ideas to have similar mental sway to the feeling of suffering. So it feels plausible to me that a chicken would have a harder time "coping" with suffering than a human in an equivalent situation.

To quantify my subjective and very uncertain feelings on the matter, I'd put a 40-80% probability that coping mechanisms don't reduce chickens' suffering by more than 50% relative to the undiluted experience. But I think reasonable people can have all sorts of views on this, and would love to see further research.

Well written. I think the point on the badness of excruciating-level pain is really underemphasized, and would like to write a post about that at some point.

I'd love to try surveying the general population with thought experiments to find people's empirical tradeoffs of pain levels. My personal intuitions are definitely closer to your weights than Rethink's. I think a survey would be really valuable since it would provide probability distributions of pain level conversions which could augment a cross-cause model.

As an aside, I don't think someone writing an "activist" comment disqualifies them from being truthseeking.

I used to find it absurd to think one could justify spending on animals when they could be spending on humans. Over years, I changed my mind, between discussing consciousness and moral weights with others, reading many relevant writings, and watching relevant documentaries. I wrote a post explaining why I changed my mind, and engaged extensively with hundreds of comments.

So far, nobody has posed an argument for prioritizing global health over animal welfare which I've found convincing. If the case for animal welfare is indeed correct, then marginal global health funding could be doing orders of magnitude more good if instead allocated to animal welfare. I don't think it means I have bad epistemics, or that my writings aren't worth engaging with, if my actions are following the logical conclusions of my changed beliefs.

If global health is indeed better at the margin than animal welfare, then I would love to know, because that would mean I've been causing enormous harm by allocating my time and donations to preventing us from reducing more suffering. I strive to remain as open-minded as I can to that possibility, but for reasons I and others have written extensively about, I currently think it's very likely indeed that animal welfare is better at the margin.

Hi! As you point out, the 1000x multiplier I quoted comes from Vasco's analysis, which also uses Saulius's numbers and Rethink's moral weights.

The cross cause calculator came out about two weeks before I published my initial post. By then, I'd been working on that post for about seven months. Though it would have been a good idea, given my urge to get the post published, I didn't consider checking the cross cause calculator's implied multiplier before posting.

I've just spent some time trying to figure out where the discrepancy between Vasco's multiplier and the cross cause calculator's multiplier comes from:

  • They roughly agree on the GHD bar of ~20 DALYs per $1000.
  • Fixing a constant welfare range versus a probabilistic range doesn't seem to make a huge difference to the calculator's result.
  • The main difference seems to be that the cross cause calculator assumes corporate campaigns avert between 160 and 3.6k chicken suffering-years per dollar. I don't know the precise definition of that unit, and Vasco's analysis doesn't place intermediate values in terms of that unit, so I don't know exactly where the discrepancy breaks down from there. However, there's probably at least an order of magnitude difference between Vasco's implied chicken suffering-years per dollar and the cross cause calculator's.
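One way to localize the discrepancy is to back out each analysis's implied chicken suffering-years averted per dollar from its headline multiplier and compare against the calculator's assumed range. A rough sketch, where the moral-weight conversion is a placeholder I've made up and the real comparison would also need the pain-intensity weightings discussed below:

```python
# Back out implied suffering-years per dollar from a headline AW-vs-GHD
# multiplier. The moral weight is an assumed placeholder, and this ignores
# pain-intensity weighting, which is likely where much of the gap lives.

ghd_dalys_per_dollar = 20 / 1000  # ~20 DALYs per $1000 (both analyses roughly agree)
chicken_moral_weight = 0.33       # assumed chicken welfare range vs. human (placeholder)

def implied_suffering_years_per_dollar(multiplier):
    """Suffering-years/$ needed for animal welfare to be `multiplier`x GHD."""
    return multiplier * ghd_dalys_per_dollar / chicken_moral_weight

# Vasco's analysis implies roughly a 1000x multiplier:
print(implied_suffering_years_per_dollar(1000))

# Compare against the calculator's assumed range of 160 to 3.6k
# suffering-years per dollar to see how far apart the inputs sit.
```

This is only a skeleton of the reconciliation; without Vasco's intermediate values in the same unit, the placeholder moral weight and the missing pain-intensity adjustment mean the output shouldn't be read as locating the discrepancy definitively.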

My very tentative guess is that the discrepancy may be driven by Vasco's very high weightings of excruciating and disabling-level pain, which some commenters found unintuitive. (I personally found these weightings quite intuitive after thinking about how I'd take time tradeoffs between these types of pain, but reasonable people may disagree.)

It could also be that Rethink is using a lower Saulius number to give a more precise marginal cost-effectiveness estimate, even if the historical cost-effectiveness was much higher. That would be consistent with Open Phil's statement that they think the marginal cost-effectiveness of corporate campaigns is much lower than the historical average.

I think this is a great find, and I'm very open to updating on what I personally think the animal welfare vs GHD multiplier is, depending on how that discrepancy breaks down. I do think it's worth noting that every one of these comparisons still found animal welfare orders of magnitude better than GHD, which is the headline result I think is most important for this debate. But your findings do illustrate that there's still a ton of uncertainty in these numbers.

(@Vasco Grilo🔸 I'd love to hear your perspective on all of this!)

Thanks for the comment!

I've always heard "pinpricks vs torture" or the Omelas story interpreted as an example of the overwhelming badness of extreme suffering, rather than against scope sensitivity. I've heard it cited in favor of animal welfare! As one could see from the Dominion documentary, billions of animals live lives of extreme suffering. Omelas could be interpreted to argue that this suffering is even more important than is otherwise assumed.

I think it's hard to say what the simulation argument implies for this debate one way or the other, since there are many more (super speculative) considerations:

  • If consciousness is an illusion or a byproduct of certain kinds of computations which would arise in any substrate, then we should expect animals to be conscious even in the simulation.
  • I've heard some argue that the simulators would be interested in the life trajectories of particular individuals, which could imply that only a few select humans would be conscious, and nobody else. (In history, we tell the stories of world-changing individuals, neglecting those of every other individual. In video games, often only the player and maybe a select few NPCs are given rich behavior.)
  • The simulators might be interested in seeing what the pre-AGI world may have looked like, and will terminate the simulation once we get AGI. In that case, we'd want to go all-in on suffering reduction, which would probably mean prioritizing animals.

I agree with you that many claim the moral value of animal experiences is incommensurate with that of human experiences, and that categorical responsibilities would generally also favor humans.
