I'm a managing partner at Enlightenment Ventures, an EA-aligned quantitative crypto hedge fund. I previously earned to give as a Quant Trading Analyst at DRW. In my free time, I enjoy reading, discussing moral philosophy, and exploring Wikipedia rabbit holes.
I agree with you that many of the broad suggestions can be read that way. However, when the post suggests which concrete groups EA should target for the sake of philosophical and political diversity, they all seem to line up on one particular side of the aisle:
EAs should increase their awareness of their own positionality and subjectivity, and pay far more attention to e.g. postcolonial critiques of western academia
What politics are postcolonial critics of Western academia likely to have?
EAs should study other ways of knowing, taking inspiration from a range of academic and professional communities as well as indigenous worldviews
What politics are academics, professional communities, or indigenous Americans likely to have?
EA institutions and community-builders should promote diversity and inclusion more, including funding projects targeted at traditionally underrepresented groups
When the term "traditionally underrepresented groups" is used, does it typically refer to rural conservatives, or to other groups? What politics are these other groups likely to have?
As you pointed out, this post's suggestions could be read as encouraging universal diversity, and I agree that the authors would likely endorse your explanation of what that would entail. That said, I don't think it's unreasonable to say that this post is coded with a political lean, and that many of its suggestions can reasonably be read as nudging EA toward that lean.
A nativist may believe that the inhabitants of one's own country or region should be prioritized over others when allocating altruistic resources.
A traditionalist may perceive value in maintaining traditional norms and institutions, and seek interventions to effectively strengthen norms which they perceive as being eroded.
Would this include making EA appeal to and include practical advice for views like nativism and traditionalism?
Hi Nathan! If a field includes an EA-relevant concept which could benefit from an explanation in EA language, then I don’t see why we shouldn’t just include an entry for that particular concept.
For concepts which are less directly EA-relevant, the marginal value of including entries for them in the wiki (when they’re already searchable on Wikipedia) is less clear to me. On the contrary, it could plausibly promote the perception that there’s an “authoritative EA interpretation/opinion” of an unrelated field, which could cause needless controversy or division.
I agree with you that EA shouldn't be prevented from adopting effective positions just because of a perception of partisanship. However, there's a nontrivial cost to doing so: it encourages political homogeneity within EA, and discourages individuals and policymakers with differing politics from joining EA or supporting EA objectives.
This cost, if realized, could undermine many of this post's objectives:
It also plausibly increases x-risk. If EA becomes known as an effectiveness-oriented wing of a particular political party, the perception of EA policies as partisan could provoke strong resistance from the other political party. Imagine how much progress we could have made on climate change if it weren't a partisan issue. Now imagine it's 2040, the political party EA affiliates with is urgently pleading for AI safety legislation and a framework for working with China on reducing x-risk, and the other party stands firmly opposed because "these out-of-touch elitist San Francisco liberals think the world's gonna end, and want to collaborate with the Chinese!"
Well stated. This post's heart is in the right place, and I think some of its proposals are non-accidentally correct. However, many of the post's suggestions seem to boil down to "dilute what it means to be EA into just being part of common left-wing thought". Here's a sampling of the post's recommendations which give this impression:
Including an explicit checkbox to post/comment anonymously could be useful. This would empower users who would otherwise feel uncomfortable expressing themselves (whistleblowers, users who fear social reprisal, etc).
However, it’s arguable that this proposal would reduce users’ sense of ownership of their words, and/or disincentivize users from associating their true identities with their stated beliefs.
Set up survey on cognitive/intellectual diversity within EA
For what it's worth, something like this has been done, with relevant sections on veg*nism, religious affiliation, politics, and morality. Would there be any particular questions you'd be interested in including, were this survey to be done again?
Hi Dhruv, thanks for sharing! Thoughtful posts which go against the grain are always great to see here.
Structural note: Perhaps a sequence would have been a better format for this series of posts?
Good points you made:
Some questions/comments:
Hi Bob and RP team,
I've been working on a comparative analysis of the knock-on effects of bivalve aquaculture versus crop cultivation, to try to provide a more definitive answer to how eating oysters/mussels compares morally to eating plants. I was hoping I could describe how I'd currently apply the RP team's welfare range estimates, and would welcome your feedback and/or suggestions. Our dialogue could prove useful for others seeking to incorporate these estimates into their own projects.
For bivalve aquaculture, the knock-on moral patients include (but are not limited to) zooplankton, crustaceans, and fish. Crop cultivation affects some small mammals, birds, and amphibians, though its effect on insect suffering is likely to dominate.
RP's invertebrate sentience estimates give a <1% probability of zooplankton or plant sentience, so we can ignore them for simplicity (with apologies to Brian Tomasik). The sea hare is the organism most similar to the bivalve for which sentience estimates are given, and it is estimated that a sea hare is less likely to be sentient than an individual insect. Although the sign of crop cultivation's impact on insect suffering is unclear, the magnitude seems likely to dominate the effect of bivalve aquaculture on the bivalves themselves, so we can ignore them too for simplicity.
The next steps might be:
(Of course, I'd have to mention longtermist considerations. The effect of norms surrounding animal consumption on moral circle expansion could be crucial. So could the effect of these consumption practices on climate change or on food security.)
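To make the intended application concrete, here's a minimal sketch of how I'd currently combine sentience probabilities, welfare ranges, and counts of affected individuals into an expected-welfare comparison. Every number below is a placeholder I made up for illustration; none are actual RP estimates, and the organism groupings are simplifications of the knock-on effects described above.

```python
# Hedged sketch of an expected-welfare comparison in the style of
# RP's welfare range framework. ALL NUMBERS ARE PLACEHOLDERS.

def expected_welfare_impact(organisms):
    """Sum over affected organism groups:
    P(sentience) * welfare range (relative to humans) * individuals affected.

    Sign convention: positive = expected suffering per unit of food produced.
    """
    return sum(p_sentience * welfare_range * n_affected
               for p_sentience, welfare_range, n_affected in organisms)

# Tuples: (P(sentience), welfare range, individuals affected per serving)
# -- illustrative placeholder values only, not RP estimates
bivalve_knock_on = [
    (0.75, 0.05, 2.0),   # fish affected by aquaculture
    (0.30, 0.02, 10.0),  # crustaceans
]
crop_knock_on = [
    (0.30, 0.01, 50.0),  # insects (sign of the net effect is actually unclear)
    (0.80, 0.10, 0.01),  # small mammals, birds, amphibians
]

print(f"bivalves: {expected_welfare_impact(bivalve_knock_on):.4f}")
print(f"crops:    {expected_welfare_impact(crop_knock_on):.4f}")
```

With placeholder inputs the outputs are meaningless, of course; the point is only the shape of the calculation, and that uncertainty in each factor (especially sentience probabilities) would need to be propagated rather than point-estimated in a serious analysis.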