Recent events seem to have revealed a central divide within Effective Altruism.
On one side, you have the people[1] who want EA to prioritise epistemics, on the basis that if we let this slip, our decisions will eventually be driven by what's popular rather than by what's effective.
On the other side, you have the people who worry that if we are unwilling to trade off[2] epistemics at all, we'll simply sideline ourselves and lose the chance to have any significant impact.
- How should we navigate this divide?
- Do you disagree with this framing? For example, do you think that the core divide is something else?
- How should cause area play into this divide? For example, it appears to me that those who prioritise AI Safety tend to fall into the first camp more often, while those who prioritise global poverty tend to fall into the second. Is this a natural consequence of these prioritisation decisions, or is it a mistake?
Update: A lot of people disliked the framing, which suggests that I haven't found the right framing here. Apologies, I should have spent more time figuring out which framing would have been most conducive to moving the discussion forward. I'd suggest that someone else post a similar question with a framing they think is better (although it might be a good idea to wait a few days or even a week).
In terms of my current thoughts on framing, I wish I had more explicitly worded this as "saving us from losing our ability to navigate" vs. "saving us from losing our ability to act". After reading the comments, I'm tempted to add a third possible highest priority: "preventing us from directly causing harm".
I don't see the two wings being only loosely connected as necessarily a bad thing. Both wings get what they feel they need to achieve impact -- either pure epistemics or social capital -- without having to compromise with what the other wing needs. In particular, the social-capital wing needs lots of money to scale global health interventions, and most funders who are excited about that just aren't going to want to be associated with a movement that is significantly about engaging with taboo topics. I expect that the epistemics wing would, by its nature, focus on things that are less funding-constrained.
If EA was "built as a rejection of social desirability," then it seems that the pure-epistemics branch doesn't need the social-capital branch (since social-desirability thinking was absent in the early days). And I don't think it's likely that social-capital-branch EAs will just start training guide dogs rather than continuing to do things at high multiples of GiveDirectly after the split. If the social-capital branch gets too big and starts to falter on epistemics as a result, it can always split again, so that there will still be a social-capital branch with good epistemics.