In this new podcast episode, I discuss with Will MacAskill what the Effective Altruism community can learn from the FTX / SBF debacle, why Will has been limited in what he could say about this topic in the past, and what future directions for the Effective Altruism community and his own research Will is most enthusiastic about:
I don't think the "3% credence in utilitarianism" point is particularly meaningful; doubting the merits of a particular philosophical framework someone uses isn't an obvious reason to be suspicious of them. Particularly not when Sam ostensibly reached similar conclusions to Will about global priorities, and when MacAskill himself has obviously been profoundly influenced by utilitarian philosophers in his own goals.
But I do think there's one specific area where SBF's public philosophical statements were extremely alarming even at the time, and he made them whilst in "explain EA" mode. That's when Sam made it quite clear that if he had a 51% chance of doubling world happiness versus a 49% chance of ending it, he'd accept the bet: a train to crazytown that not many utilitarians would jump on, and one which sounds a lot like how he actually approached everything.
Then again, SBF isn't a professional philosopher and never claimed to be; other people have said equally dumb things without gambling away billions of other people's money; and I'm not sure MacAskill himself ever read or heard Sam utter those words.