Recent events seem to have revealed a central divide within Effective Altruism.
On one side, you have the people[1] who want EA to prioritise epistemics, on the basis that if we let this slip, we'll eventually end up making decisions based on what's popular rather than what's effective.
On the other side, you have the people who worry that if we are unwilling to trade off[2] epistemics at all, we'll simply sideline ourselves and won't be able to have any significant impact at all.
- How should we navigate this divide?
- Do you disagree with this framing? For example, do you think that the core divide is something else?
- How should cause area play into this divide? For example, it appears to me that those who prioritise AI Safety tend to fall into the first camp more often, while those who prioritise global poverty tend to fall into the second camp. Is this a natural consequence of these prioritisation decisions, or is it a mistake?
Update: A lot of people disliked the framing, which suggests that I haven't found the right framing here. Apologies, I should have spent more time figuring out which framing would have been most conducive to moving the discussion forward. I'd suggest that someone else post a similar question with a framing they think is better (although it might be a good idea to wait a few days or even a week).
In terms of my current thoughts on framing, I wish I had more explicitly worded this as "saving us from losing our ability to navigate" vs. "saving us from losing our ability to have an impact". After reading the comments, I'm tempted to add a third possible highest priority: "preventing us from directly causing harm".
Maybe. I am having a hard time imagining how this solution would actually manifest and be materially different from the current arrangement.
The external face of EA, in my experience, has focused on global poverty reduction; everyone I've introduced to the movement has gotten my spiel about the inefficiency of training American guide dogs compared to distributing bednets, for example. Only the consequentialists ever learn more about AGI or shrimp welfare.
If the social capital/external face of EA turned around and endorsed or put funding towards rationalist causes, particularly taboo or unpopular ones, I don’t think there would be sufficient differentiation between the two in the eyes of the public. Further, the social capital branch wouldn’t want to endorse the rationalist causes: that’s what differentiates the two in the first place.
I think the two organizations or movements would have to be unaligned, and I think we are heading that way. When I see some of the recently upvoted posts, including critiques that EA is "too rational" or "doesn't value emotional responses," I hear the death knell of the movement.
Tyler Cowen recently spoke about demographics as the destiny of a movement, arguing that EA is doomed to become the US Democratic Party. I think his critique is largely correct, and EA as I understand it, i.e. the application of reason to the question of how to do the most good, is likely going to end. EA was built as a rejection of social desirability in a dispassionate effort to improve wellbeing, yet as the tent gets bigger, the mission is changing.