In this new podcast episode, I discuss with Will MacAskill what the Effective Altruism community can learn from the FTX / SBF debacle, why Will has been limited in what he could say about this topic in the past, and which future directions, for the Effective Altruism community and for his own research, Will is most enthusiastic about:
I feel like what a person actually believes is more relevant than whether they think of themselves as uncertain. Moral certainty seems directly problematic (in terms of risks of recklessness and unilateral action) only when it comes together with moral realism: If you think you know the single correct moral theory, you'll consider yourself justified in overriding other people's moral beliefs and thwarting the goals they've been working towards.
By contrast, there seems to me to be no clear link from "anti-realist moral certainty in some subjectivist axiology" to "considers themselves justified in overriding other people's life goals." On the contrary, unless someone has an anti-social personality to begin with, it seems only intuitive/natural to me to go from "anti-realism about morality is true" to "we should probably treat moral disagreements between morally certain individuals more like we'd ideally treat political disagreements." How would we want to ideally treat political disagreements? I'd say we want to keep political polarization low, accept that there'll be differences of opinion, and agree to play fair and find positive-sum compromises. If some political faction goes around thinking it's okay to sabotage others or use their power unfairly (e.g., restricting the free expression of everyone who opposes their talking points), the problem is not that they're "too politically certain in what they believe." The problem is that they're too politically certain that what they believe is what everyone ought to believe. This seems like an important difference!
There's also something else I find weird about highlighting uncertainty as a solution to recklessness/fanaticism: uncertainty can transition into increased certainty later on, as people do more thinking. So it doesn't feel like a stable solution. (Not to mention that when EAs tell themselves it's virtuous to remain uncertain, this impedes philosophical progress at the level of individuals.)
So, while I'm on board with cautioning against overconfidence, and would probably concede that there's often a link between overconfidence and unjustified moral or metaethical confidence, I feel like it's misguided in more than one way to highlight "moral certainty" as the thing that's directly bad here.
(You're of course free to disagree.)