Recent events seem to have revealed a central divide within Effective Altruism.
On one side, you have the people[1] who want EA to prioritise epistemics, on the basis that if we let this slip, we'll eventually end up in a situation where our decisions are driven by what's popular rather than by what's effective.
On the other side, you have the people who worry that if we are unwilling to trade off[2] epistemics at all, we'll simply sideline ourselves and lose the ability to have any significant impact.
- How should we navigate this divide?
- Do you disagree with this framing? For example, do you think that the core divide is something else?
- How should cause area play into this divide? For example, it appears to me that those who prioritise AI Safety tend to fall into the first camp more often, while those who prioritise global poverty tend to fall into the second camp. Is this a natural consequence of those prioritisation decisions, or is it a mistake?
Update: A lot of people disliked the framing which seems to suggest that I haven't found the right framing here. Apologies, I should have spent more time figuring out what framing would have been most conducive to moving the discussion forwards. I'd like to suggest that someone else should post a similar question with framing that they think is better (although it might be a good idea to wait a few days or even a week).
In terms of my current thoughts on framing, I wish I had more explicitly worded this as "saving us from losing our ability to navigate" vs. "saving us from losing our ability to have an impact". After reading the comments, I'm tempted to add a third possible highest priority: "preventing us from directly causing harm".
Sure! My post definitely refers to Bostrom, and I think your original question does as well, if I'm not mistaken.
Which part of his statement do you think he disliked? If he disliked the whole thing and was embarrassed by it, why include a paragraph making sure everyone understands that you are uncertain about the scientific state of whether black people have a genetic disposition to be less intelligent than white people? Why raise that question at all, in any circumstances, let alone in an apology where you appear to be apologising for saying that black people are less intelligent than white people?
If he truly believes that was just the epistemically right thing to do, then he needs to check his privilege, reflect on whether that was the appropriate place to have the debate, and also consider what I write below.
I would suggest looking at his statement as:
1. I regret what I said.
2. I actually care a lot for the group that I wrote offensive things about.
3. But was I right in the first place? I don't know, I am not an expert.
This is exactly the type of "apology" that Donald Trump or any other variety of "anti-authority" sceptic provides when making a pseudo-scientific claim. There is no epistemic integrity here; there is an attempt to create ambiguity to deflect criticism, to blow a dog whistle, or to make sure the question remains in the public debate.
Posing the question is not an intellectual triumph, it is a rhetorical tool.
This is all true even if he does not do so with overt intent. You can be racist even if you do not intend to be racist or see yourself as racist.
Does Donald Trump have epistemic integrity because he doesn't back down when presented with facts or arguments that show his beliefs to be incorrect? No. He typically retreats into a position where he and his supporters claim that the science is more complicated than it really is, that he is being silenced by some mysterious authority (greater than POTUS, somehow), and that they need to hold fast in the face of adversity so that the truth can prevail.
That is doubling down on pseudoscience, not epistemic integrity. Bostrom is not Galileo here; he is not being imprisoned for his science, he is being criticised for defending racism, a pointedly pseudo-scientific concept.
There is no room for racism in EA.