Recent events seem to have revealed a central divide within Effective Altruism.
On one side, you have the people[1] who want EA to prioritise epistemics, on the basis that if we let this slip, our decisions will eventually be driven by what's popular rather than what's effective.
On the other side, you have the people who worry that if we are unwilling to trade off[2] epistemics at all, we'll simply sideline ourselves and lose the ability to have any significant impact.
- How should we navigate this divide?
- Do you disagree with this framing? For example, do you think that the core divide is something else?
- How should cause area play into this divide? For example, it appears to me that those who prioritise AI Safety tend to fall into the first camp more often, while those who prioritise global poverty tend to fall into the second. Is this a natural consequence of these prioritisation decisions, or is it a mistake?
Update: A lot of people disliked the framing which seems to suggest that I haven't found the right framing here. Apologies, I should have spent more time figuring out what framing would have been most conducive to moving the discussion forwards. I'd like to suggest that someone else should post a similar question with framing that they think is better (although it might be a good idea to wait a few days or even a week).
In terms of my current thoughts on framing, I wish I had more explicitly worded this as "saving us from losing our ability to navigate" vs. "saving us from losing our influence". After reading the comments, I'm tempted to add a third possible highest priority: "preventing us from directly causing harm".
I really dislike the "2 camps" framing of this.
I believe that this forum should not be a venue for certain debates, such as the one over holocaust denial. I do not see this as a "trade-off" of epistemics, but rather a simple principle of "there's a time and a place".
I am glad that there are other places where holocaust denial claims are discussed in order to comprehensively debunk them. I don't think the EA forum should be one of those places, because it makes the place unpleasant for certain people and is unrelated to EA causes.
In the rare cases where upsetting facts are relevant to an EA cause, all I ask is that they be treated with a layer of compassion and sensitivity, and an awareness of context and potential misuse by bad actors.
If you had a friend who was struggling with depression, health, and obesity, and had difficulty socialising, it would not be epistemically courageous for you to call them a "fat loser", even if the statement is technically true by certain definitions of the words. Instead you could take them aside, talk about your concerns in a sensitive manner, and offer help and support. Your friend might still be upset that you brought the issue up, but you won't have been an asshole.
I think EA has a good norm of politeness, but I am increasingly concerned that it still needs to work on norms of empathy, sensitivity, and kindness, and I think that's the real issue here.