Recent events seem to have revealed a central divide within Effective Altruism.
On one side, you have the people[1] who want EA to prioritise epistemics, on the basis that if we let this slip, our decisions will eventually be driven by what's popular rather than what's effective.
On the other side, you have the people who worry that if we are unwilling to trade off[2] epistemics at all, we'll simply sideline ourselves and lose the ability to have any significant impact.
- How should we navigate this divide?
- Do you disagree with this framing? For example, do you think that the core divide is something else?
- How should cause area play into this divide? For example, it appears to me that those who prioritise AI Safety tend to fall into the first camp more often, while those who prioritise global poverty tend to fall into the second. Is this a natural consequence of these prioritisation decisions, or is it a mistake?
Update: A lot of people disliked the framing, which suggests that I haven't found the right one here. Apologies, I should have spent more time figuring out which framing would have been most conducive to moving the discussion forwards. I'd like to suggest that someone else post a similar question with a framing they think is better (although it might be a good idea to wait a few days or even a week).
In terms of my current thoughts on framing, I wish I had more explicitly worded this as "saving us from losing our ability to navigate" vs. "saving us from losing our ability to act". After reading the comments, I'm tempted to add a third possible highest priority: "preventing us from directly causing harm".
I think it is trivially true that we sometimes face a tradeoff between epistemic integrity and the utilitarian concerns arising from social capital costs (see this comment).
But I don't think the Bostrom situation boils down to this tradeoff. People like me believe that Bostrom's statement and its defenders don't stand on solid epistemic ground. But the argument for bad epistemics has a lot of moving parts, including (1) recognizing that the statement and its defenses should be interpreted to include more than their most limited possible meanings, and that its omissions are significant, (2) recognizing the broader implausibility of a genetic basis for the racial IQ gap, and (3) recognizing the epistemic virtue, in some situations, of not speculating about empirical facts without strong evidence.
All of this is really just too much trouble to walk through for most of us. Maybe that's a failing on our part! But I think it's understandable. To convincingly argue points (1) through (3) above, I would need to walk through all the subpoints made at each link. That's one heck of a comment.
So instead I find myself leaving the epistemic issues to the side, and trying to convince people that voicing support for Bostrom's statement is bad on consequentialist social capital grounds alone. This is understandably less convincing, but I think the case for it is still strong in this particular situation (I argue it here and here).