In this new podcast episode, I discuss with Will MacAskill what the Effective Altruism community can learn from the FTX / SBF debacle, why Will has been limited in what he could say about this topic in the past, and what future directions for the Effective Altruism community and his own research Will is most enthusiastic about:
The 3% figure for utilitarianism strikes me as a bit misleading on its own, given what else Will said. (I'm not accusing Will of any intent to mislead here: he said something very precise that I, as a philosopher, could follow entirely; it was just a bit complicated for lay people.) Firstly, he said a lot of the probability space was taken up by error theory, the view that there is no true morality. So to get what Will himself endorses, whether or not there is a true morality, you basically have to subtract his (unknown but large) credence in error theory from 1, and then renormalize his other credences so that they add up to 1 on their own.

Secondly, there's the difference between utilitarianism, on which only the consequences of your actions matter morally, and only consequences for (total or average) pain and pleasure and/or fulfilled preferences count as morally relevant consequences, and consequentialism, on which only the consequences of your actions matter morally, but it's left open what those consequences are. My memory of the podcast (could be wrong, I only listened once!) is that Will said that, conditional on error theory being false, his credence in consequentialism is about 0.5. This really matters in the current context, because many non-utilitarian forms of consequentialism can also promote maximizing in a dangerous way; they just disagree with utilitarianism about exactly what you are maximizing. So really, Will's credence in a view that, interpreted naively, recommends dangerous maximizing is functionally (i.e. ignoring error theory in practice) more like 0.5 than 0.03, as I understood him in the podcast. Of course, he isn't actually recommending dangerous maximizing regardless of his credence in consequentialism (at least in most contexts*), because he warns against naivety.
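To make the renormalization concrete, here is a minimal sketch with purely made-up numbers (none of these specific credences are Will's; only the arithmetic is the point):

```python
# Hypothetical, illustrative credences only -- not Will's actual numbers.
credences = {
    "error theory": 0.50,           # no true morality at all
    "utilitarianism": 0.03,
    "other consequentialism": 0.22,
    "non-consequentialist views": 0.25,
}
assert abs(sum(credences.values()) - 1.0) < 1e-9

# Conditioning on error theory being false: drop it and renormalize the
# remaining credences so they sum to 1 on their own.
p_true_morality = 1 - credences["error theory"]
conditional = {
    view: p / p_true_morality
    for view, p in credences.items()
    if view != "error theory"
}

for view, p in conditional.items():
    print(f"P({view} | some true morality) = {p:.2f}")
# With these made-up numbers: utilitarianism goes from 0.03 to 0.06, other
# consequentialism from 0.22 to 0.44, so consequentialism as a whole comes
# to about 0.5 conditional on there being a true morality, even though the
# unconditional credence in utilitarianism alone looks tiny.
```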
(Actually, my personal suspicion is that 'consequentialism' on its own is basically vacuous, because any view gives a moral preferability ordering over the choices available in a situation, and all the numbers in consequentialism really do is help us represent such orderings in a quick and easily manipulable way. But that's a separate debate.)
*Presumably, dangerous, unethical-looking maximizing sometimes really is best from a consequentialist point of view: the dangers of not doing it, or the upside of doing it if you are right about the consequences of your options, outweigh the risk that you are wrong about those consequences, even when you take into account the higher-order evidence that people who think intuitively bad actions maximize utility are nearly always wrong.