Ariel Simnegar

327 · Joined May 2022

Bio

I'm a managing partner at Enlightenment Ventures, an EA-aligned quantitative crypto hedge fund. I previously earned to give as a Quant Trading Analyst at DRW. In my free time, I enjoy reading, discussing moral philosophy, and exploring Wikipedia rabbit holes.

Posts (1)

Comments (71)

Hi Bob and RP team,

I've been working on a comparative analysis of the knock-on effects of bivalve aquaculture versus crop cultivation, to try to provide a more definitive answer to how eating oysters/mussels compares morally to eating plants. I was hoping I could describe how I'd currently apply the RP team's welfare range estimates, and would welcome your feedback and/or suggestions. Our dialogue could prove useful for others seeking to incorporate these estimates into their own projects.

For bivalve aquaculture, the knock-on moral patients include (but are not limited to) zooplankton, crustaceans, and fish. Crop cultivation affects some small mammals, birds, and amphibians, though its effect on insect suffering is likely to dominate.

RP's invertebrate sentience estimates give a <1% probability of zooplankton or plant sentience, so we can ignore them for simplicity (with apologies to Brian Tomasik). The sea hare is the organism most similar to the bivalve for which sentience estimates are given, and a sea hare is estimated to be less likely sentient than an individual insect. Although the sign of crop cultivation's impact on insect suffering is unclear, its magnitude seems likely to dominate the effect of bivalve aquaculture on the bivalves themselves, so we can ignore the bivalves too for simplicity.

The next steps might be:

  1. Calculate welfare ranges:
    1. For bivalve aquaculture, use carp, salmon, crayfish, shrimp, and crabs to calculate a welfare range for the effect of bivalve aquaculture on marine populations.
    2. Use chickens as a model species to calculate a welfare range for the effect of crop cultivation on vertebrate populations.
    3. For the effect of crop cultivation on insect suffering, I might just toss this problem on to future researchers. I'm only doing this as a side project, and given the sheer complexity of the considerations at play, I'm worried I might publish something which inadvertently increases insect suffering instead of decreasing it.
  2. For several moral views (negative utilitarianism, symmetric utilitarianism) and several perspectives on the value of a typical wild animal's life (net negative, net neutral, net positive), extract relevant conclusions. (E.g., if bivalve aquaculture is robustly shown to increase marine populations, then given Brian's arguments that crop cultivation likely reduces vertebrate populations, a negative utilitarian who views wild animal lives as net negative may want to oppose bivalve consumption.) A minimal sketch of this step appears below.
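
Here's a rough Python sketch of how I'd currently structure that aggregation step. Every number below is a hypothetical placeholder rather than an RP estimate, and the taxon names, population effects, and welfare ranges are stand-ins for the outputs of step 1:

```python
from itertools import product

# Hypothetical inputs (placeholders, not RP figures): the change in
# population per unit of bivalve consumption, and a proxy welfare range
# per affected taxon, both to be replaced by the estimates from step 1.
POPULATION_EFFECT = {"marine_vertebrates": 10.0}  # individuals added (made up)
WELFARE_RANGE = {"marine_vertebrates": 0.05}      # proxy, e.g. from carp/shrimp

MORAL_VIEWS = ("negative_utilitarian", "symmetric_utilitarian")
WILD_LIFE_VALUE = {"net_negative": -1.0, "net_neutral": 0.0, "net_positive": 1.0}

def marginal_welfare(view: str, life_sign: float) -> float:
    """Expected welfare change from one unit of consumption, under a
    moral view and an assumed sign for a typical wild animal's life."""
    total = 0.0
    for taxon, delta_pop in POPULATION_EFFECT.items():
        per_capita = WELFARE_RANGE[taxon] * life_sign
        if view == "negative_utilitarian":
            # Negative utilitarians count only suffering, never positive welfare.
            per_capita = min(per_capita, 0.0)
        total += delta_pop * per_capita
    return total

for view, (label, sign) in product(MORAL_VIEWS, WILD_LIFE_VALUE.items()):
    print(f"{view}, wild lives {label}: {marginal_welfare(view, sign):+.3f}")
```

A negative total under a given (moral view, wild-life-value) pair would count against bivalve consumption on that combination of assumptions; the full analysis would add crop cultivation's population effects as a second scenario and compare the two.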

(Of course, I'd have to mention longtermist considerations. The effect of norms surrounding animal consumption on moral circle expansion could be crucial. So could the effect of these consumption practices on climate change or on food security.)

I agree with you that many of the broad suggestions can be read that way. However, when the post suggests which concrete groups EA should target for the sake of philosophical and political diversity, they all seem to line up on one particular side of the aisle:

EAs should increase their awareness of their own positionality and subjectivity, and pay far more attention to e.g. postcolonial critiques of western academia

What politics are postcolonial critics of Western academia likely to have?

EAs should study other ways of knowing, taking inspiration from a range of academic and professional communities as well as indigenous worldviews

What politics are academics, professional communities, or indigenous Americans likely to have?

EA institutions and community-builders should promote diversity and inclusion more, including funding projects targeted at traditionally underrepresented groups

When the term "traditionally underrepresented groups" is used, does it typically refer to rural conservatives, or to other groups? What politics are these other groups likely to have?

As you pointed out, this post's suggestions could be read as encouraging universal diversity, and I agree that the authors would likely endorse your explanation of the consequences of that. I also don't think it's unreasonable to say that this post is coded with a political lean, and that many of the post's suggestions can be reasonably read as nudging EA towards that lean.

A nativist may believe that the inhabitants of one's own country or region should be prioritized over others when allocating altruistic resources.

A traditionalist may perceive value in maintaining traditional norms and institutions, and seek interventions to effectively strengthen norms which they perceive as being eroded.

Would this include making EA appeal to and include practical advice for views like nativism and traditionalism?

Hi Nathan! If a field includes an EA-relevant concept which could benefit from an explanation in EA language, then I don’t see why we shouldn’t just include an entry for that particular concept.

For concepts which are less directly EA-relevant, the marginal value of including entries for them in the wiki (when they’re already searchable on Wikipedia) is less clear to me. On the contrary, it could plausibly promote the perception that there’s an “authoritative EA interpretation/opinion” of an unrelated field, which could cause needless controversy or division.

I agree with you that EA shouldn't be prevented from adopting effective positions just because of a perception of partisanship. However, there's a nontrivial cost to doing so: the encouragement of political sameness within EA, and the discouragement of individuals or policymakers with political differences from joining EA or supporting EA objectives.

This cost, if realized, could cut against many of this post's objectives:

  • We must temper our knee-jerk reactions against deep critiques, and be curious about our emotional reactions to arguments – “Why does this person disagree with me? Why am I so instinctively dismissive about what they have to say?”
  • We must be willing to accept the possibility that “big” things may need to be fixed and that some of our closely-held beliefs are misguided
  • EAs should make a point of engaging with and listening to EAs from underrepresented disciplines and backgrounds, as well as those with heterodox/“heretical” views
  • EAs should consider how our shared modes of thought may subconsciously affect our views of the world – what blindspots and biases might we have created for ourselves?
  • EA institutions should select for diversity
    • Along lines of:
      • Philosophical and political beliefs

It also plausibly increases x-risk. If EA becomes known as an effectiveness-oriented wing of a particular political party, the perception of EA policies as partisan could embolden strong resistance from the other political party. Imagine how much progress we could have had on climate change if it weren't a partisan issue. Now imagine it's 2040, the political party EA affiliates with is urgently pleading for AI safety legislation and a framework for working with China on reducing x-risk, and the other party stands firmly opposed because "these out-of-touch elitist San Francisco liberals think the world's gonna end, and want to collaborate with the Chinese!"

Well stated. This post's heart is in the right place, and I think some of its proposals are non-accidentally correct. However, many of the post's suggestions seem to boil down to "dilute what it means to be EA to just being part of common left-wing thought". Here's a sampling of the post's recommendations that prompt this reading:

  • EAs should increase their awareness of their own positionality and subjectivity, and pay far more attention to e.g. postcolonial critiques of western academia
  • EAs should study other ways of knowing, taking inspiration from a range of academic and professional communities as well as indigenous worldviews
  • EAs should not assume that we must attach a number to everything, and should be curious about why most academic and professional communities do not
  • EA institutions should select for diversity
  • Previous EA involvement should not be a necessary condition to apply for specific roles, and the job postings should not assume that all applicants will identify with the label “EA”
  • EA institutions should hire more people who have had little to no involvement with the EA community providing that they care about doing the most good
  • EA institutions and community-builders should promote diversity and inclusion more, including funding projects targeted at traditionally underrepresented groups
  • Speaker invitations for EA events should be broadened away from (high-ranking) EA insiders and towards, for instance:
    • Subject-matter experts from outside EA
    • Researchers, practitioners, and stakeholders from outside of our elite communities
      • For instance, we need a far greater input from people from Indigenous communities and the Global South
  • EAs should consider the impact of EA’s cultural, historical, and disciplinary roots on its paradigmatic methods, assumptions, and prioritisations
  • Funding bodies should within 6 months publish lists of sources they will not accept money from, regardless of legality
    • Tobacco?
    • Gambling?
    • Mass surveillance?
    • Arms manufacturing?
    • Cryptocurrency?
    • Fossil fuels?
  • Within 5 years, EA funding decisions should be made collectively
  • EA institutions should be democratised within 3 years, with strategic, funding, and hiring policy decisions being made via democratic processes rather than by the institute director or CEO
  • EAs should make an effort to become more aware of EA’s cultural links to eugenic, reactionary and right-wing accelerationist politics, and take steps to identify areas of overlap or inheritance in order to avoid indirectly supporting such views or inadvertently accepting their framings

Including an explicit checkbox to post/comment anonymously could be useful. This would empower users who would otherwise feel uncomfortable expressing themselves (whistleblowers, users who fear social reprisal, etc.).

However, it’s arguable that this proposal would reduce users’ sense of ownership of their words, and/or disincentivize users from associating their true identities with their stated beliefs.

Set up survey on cognitive/intellectual diversity within EA

For what it's worth, something like this has been done, with relevant sections on veg*nism, religious affiliation, politics, and morality. Would there be any particular questions you'd be interested in including, were this survey to be done again?

Hi Dhruv, thanks for sharing! Thoughtful posts which go against the grain are always great to see here.

Structural note: Perhaps a sequence would have been a better format for this series of posts?

Good points you made:

  • LMICs could benefit from increased animal advocacy, especially given the likely lower scrutiny on their factory farming practices.
  • Focusing animal advocacy on the ethical argument rather than other considerations is important for ensuring the message's fidelity over time. For example, if veganism for health reasons or for "leaving animals alone (including wild animals)" became the dominant message, animal advocacy's longtermist implications could be completely different.
  • Ensuring that animal farmers are taken into consideration is underemphasized in animal advocacy. One additional benefit: abolitionism is very left-coded, and farmers are quite right-coded (at least in the US), so considering farmers helps reduce the movement's partisan coding.

Some questions/comments:

  • Brian Tomasik thinks it's "pretty unclear whether promoting vegetarianism reduces or increases total animal suffering, both when considering short-run effects on wild animals on Earth and when considering long-run effects on society's values". It seems plausible that if wild animals live net negative lives in expectation, then well-meaning memes of environmental conservation accompanying animal advocacy could cause more harm than good. If we seek a robust net positive effect, wouldn't it be easier to argue for evidence-based welfarist interventions like the Humane Slaughter Association than for abolitionist advocacy?
  • Do you think there's a substantial long-term difference between immediate abolitionist advocacy and an incrementalist, welfarist approach which seeks to turn public opinion over the next few decades? Which do you think would more permanently and robustly expand society's moral circle? (By analogy, from the perspective of the temperance movement, it's arguable that the Prohibition would have endured for much longer if the Overton window had been shifted more in favor of temperance before Prohibition was enacted.)
  • You argue that cultured meat could turn meat into a luxury good rather than abolishing it altogether, which could perpetuate a culture where it's still considered "okay" to eat meat. It seems to me that once the price of cultured meat falls below parity with conventional meat, people would be more likely to think: "I could pay X to create a chicken, torture it for its short life, and cruelly slaughter it, or I could pay 0.9 * X to not do that and get the same benefit... maybe torturing chickens is bad." (Similarly, a slave-abolitionist could argue that promoting industrialization over slavery could be bad for slave welfare, because we could just industrialize without ever making the moral leap to "slavery is bad".) Your consideration has merit, but do you think it's more decisive than the points in favor of cultured meat?