Recent events seem to have revealed a central divide within Effective Altruism.
On one side, you have the people[1] who want EA to prioritise epistemics, on the basis that if we let this slip, we'll eventually end up in a situation where our decisions are driven by what's popular rather than what's effective.
On the other side, you have the people who worry that if we are unwilling to trade off[2] epistemics at all, we'll simply sideline ourselves and lose the ability to have any significant impact.
- How should we navigate this divide?
- Do you disagree with this framing? For example, do you think that the core divide is something else?
- How should cause area play into this divide? For example, it appears to me that those who prioritise AI Safety tend to fall into the first camp more often, while those who prioritise global poverty tend to fall into the second camp. Is this a natural consequence of these prioritisation decisions, or is it a mistake?
Update: A lot of people disliked the framing which seems to suggest that I haven't found the right framing here. Apologies, I should have spent more time figuring out what framing would have been most conducive to moving the discussion forwards. I'd like to suggest that someone else should post a similar question with framing that they think is better (although it might be a good idea to wait a few days or even a week).
In terms of my current thoughts on framing, I wish I had more explicitly worded this as "saving us from losing our ability to navigate" vs. "saving us from losing our influence". After reading the comments, I'm tempted to add a third possible highest priority: "preventing us from directly causing harm".
I think you ask two questions with this framing: (1) a descriptive question about whether or not this divide currently exists in the EA Community, and (2) a normative question about whether or not this divide should exist. I think it is useful to separate the two questions, as some of the comments seem to use responses to (2) as a response to (1). I don't know if (1) is true. I don't think I've noticed it in the EA Community, but I'm willing to have my mind changed on this.
On (2), I think this can be resolved fairly easily. I don't think we should (and I don't think we can) have non-epistemic* reasons for belief. However, we can have non-epistemic reasons for why we would want to act on a certain proposition. I'm not really falling into either "camp" here, and I don't think the question requires us to fall into any "camp". There's a wealth of literature in Epistemology on this.
*I think sometimes EAs use the word "epistemic" differently from how I conventionally see it used in academic philosophy, but this comment is based on the conventional interpretation of "epistemic" in Philosophy.
Yeah, I agree, there is a good reason they exist.
I don't think they are unreasonable either as individuals or in essays and conversations.
Further, they are trying to do things to change the world in ways that we both agree would make it a better place. Possibly the movement is strongly net positive for the world.
But they also make people who are emotionally obsessed with the truth content of the things they say and believe feel excluded and unwelcome.