Recent events seem to have revealed a central divide within Effective Altruism.
On one side, you have the people[1] who want EA to prioritise epistemics, on the basis that if we let this slip, we'll eventually end up making decisions based on what's popular rather than what's effective.
On the other side, you have the people who worry that if we are unwilling to trade off[2] epistemics at all, we'll simply sideline ourselves and lose the ability to have any significant impact.
- How should we navigate this divide?
- Do you disagree with this framing? For example, do you think that the core divide is something else?
- How should cause area play into this divide? For example, it appears to me that those who prioritise AI Safety tend to fall into the first camp more often, while those who prioritise global poverty tend to fall into the second. Is this a natural consequence of these prioritisation decisions, or is it a mistake?
Update: A lot of people disliked the framing, which suggests that I haven't found the right one here. Apologies, I should have spent more time figuring out which framing would have been most conducive to moving the discussion forwards. I'd suggest that someone else post a similar question with a framing they think is better (although it might be a good idea to wait a few days or even a week).
In terms of my current thoughts on framing, I wish I had more explicitly worded this as "saving us from losing our ability to navigate" vs. "saving us from losing our ability to have an impact". After reading the comments, I'm tempted to add a third possible highest priority: "preventing us from directly causing harm".
As you are someone who falls into the "prioritize epistemics" camp, I would have preferred you to steelman the camp you don't fall into, and to frame the "other side" in terms of what they prioritize (as you do in the title), rather than characterizing them as compromising epistemics.
This is not intended as a personal attack; I would make a similar comment to someone who asked a question from "the other side" (e.g.: "On one side, you have the people who prioritize making sure EA is not racist. On the other, you have the people who are worried that if we don't compromise at all, we'll simply end up following what's acceptable instead of what's true".)
In general, I think this kind of framing risks encouraging tribal commentary that assumes the worst about each other, and is counterproductive to our shared goals. Here is how I would have asked a similar question:
"It seems like there is a divide on the forum around whether Nick Bostrom/CEA's statements were appropriate. (Insert some quotes of comments that reflect this divide). What do people think are the cruxes that are driving differences in opinion? How do we navigate these differences and work out when we should prioritize one value (or sets of values) over others?"
Hmm... the valence of the word "compromise" is complex. It's negative in "compromising integrity", but "being unwilling to compromise" is often used to suggest that someone is being unreasonable. Still, I suppose I should have predicted that this wording wouldn't be to people's liking. Hopefully my new wording of "trade-offs" works better for you.