Recent events seem to have revealed a central divide within Effective Altruism.
On one side, you have the people[1] who want EA to prioritise epistemics, on the basis that if we let this slip, our decisions will eventually be driven by what's popular rather than what's effective.
On the other side, you have the people who worry that if we are unwilling to trade off[2] epistemics at all, we'll simply sideline ourselves and lose the ability to have any significant impact.
- How should we navigate this divide?
- Do you disagree with this framing? For example, do you think that the core divide is something else?
- How should cause area play into this divide? For example, it appears to me that those who prioritise AI Safety tend to fall into the first camp more often, while those who prioritise global poverty tend to fall into the second. Is this a natural consequence of these prioritisation decisions, or is it a mistake?
Update: A lot of people disliked the framing, which suggests that I haven't found the right one here. Apologies; I should have spent more time figuring out what framing would have been most conducive to moving the discussion forward. I'd suggest that someone else post a similar question with a framing they think is better (although it might be a good idea to wait a few days or even a week).
In terms of my current thoughts on framing, I wish I had more explicitly worded this as "saving us from losing our ability to navigate" vs. "saving us from losing our ability to have an impact". After reading the comments, I'm tempted to add a third possible highest priority: "preventing us from directly causing harm".
Yeah, I'm not saying there is zero divide. I'm not even saying you shouldn't characterize both sides. But if you do, it would be helpful to characterize each side with similarly positively-coded framing. Like, frame this post in a way that would pass an ideological Turing test, i.e. so that people can't tell which "camp" you're in.
The "not racist" vs "happy to compromise on racism" was my way of trying to illustrate how your "good epistemics" vs "happy to compromise on epistemics" wasn't balanced, but I could have been more explicit in this.
Saying one side prioritizes good epistemics and the other side is happy to compromise epistemics seems to clearly favor the first side.
Saying one side prioritizes good epistemics and the other side prioritizes "good optics" or "social capital" similarly favors the first side, though more weakly. For example, I don't think it's a charitable interpretation of the "other side" that they're primarily acting for reasons of good optics.
I also think asking the question more generally is useful.
For example, my sense is also that your "camp" still strongly values social capital, just a different kind of social capital. In your response to CEA's PR statement, you say "It is much more important to maintain trust with your community than to worry about what outsiders think". Presumably trust within the community is also a form of social capital. By your statement alone you could say one camp prioritizes maintaining social capital within the movement, and one camp prioritizes building social capital outside of the movement. I'm not saying this is a fair characterization of the differences in groups, but there might not be a "core divide".
It might instead be a difference in empirical views. For example, both groups might believe that social capital and good epistemics are important, and value both for predominantly consequentialist reasons, with no "core" difference between them. But one group thinks that, empirically, the downstream effects of breaking a truth-optimizing norm lead to the worst outcome. Perhaps this is associated with a theory of change centred on a closely knit, high-trust, high-leverage group of people: it matters less what the rest of the world thinks, because it's more important that this high-leverage group can drown out the noise and optimize for truth, so that they can make the difficult decisions likely to be missed by simply following what's popular and accepted. The other group thinks that the downstream effects of alienating large swaths of an entire race and people who care about racial equality (and of treating this as an acceptable tradeoff) lead to the worst outcome. Perhaps this is associated with a view that buy-in from outsiders is critical to scaling impact.
But people might have different values, different empirical views, different theories of change, or some combination. That's why I am more reluctant to frame this issue in such clearly demarcated ways ("central divide", "on one side", "this camp") when it's not settled that any of these is the cause of the differences in opinion.