A lot of commentators on the SBF saga seem to have jumped at the opportunity to stick their knives into effective altruism and longtermism. The surge of interest in What We Owe the Future earlier this year perhaps primed a lot of people to look for a chance to discredit this fairly new philosophy.
For them, the SBF situation seemingly shows that the very core of effective altruism is wrong, i.e. that it is wrong to apply any moral calculus at all to one's decisions. They almost seem to argue that because SBF acted the way he did, effective altruism is doomed and we must go back to gut feelings about morality, barely evidenced heuristics, and a time horizon that does not extend past our noses.
And clearly, what SBF did was completely wrong by common-sense, here-and-now morality. He gambled with the deposits of ordinary people and threatened their livelihoods, and whatever upside he may have banked on pales in comparison. Our current governance and judicial systems work perfectly well for cases like these, and SBF will hopefully face the appropriate consequences.
However, even if you somehow try to put that to the side and evaluate SBF purely as a longtermist, he is still just wrong and in fact as guilty of short-termism as any other Ponzi schemer. If we look at the end result of his actions, he basically threw a bunch of money at effective altruism for a very short period of time and then blew up. His donations to current global health and well-being will have had some short-term benefits, but those are tiny compared to existing donations in those areas.
His donations to longtermist causes will have led to the founding of some new organizations, which may have attracted some smart people to the EA and longtermist cause. However, far more people in EA will now lose their faith in it because of his actions. And even more people who might have been positively inclined toward EA and longtermism will now first hear about it in the SBF context, and be put off for good. Further, any projects that SBF-funded organizations would have kicked off will barely have had time to yield any benefits.
The long term is by definition a long game. An infinite game, we might even say, using Carse's terminology. Therefore, St Petersburging oneself over and over is actually counter-productive for long-term benefit. We can see this across the variants of longtermism. Under Bostrom's maxipok rule, SBF constantly St Petersburging his bets does not maximize the likelihood that humanity's future will be ok. And under the variant of longtermism I'm currently developing, Epistemism, where I try to make the decisions even simpler, SBF's actions are also a fail.
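The arithmetic behind the St Petersburg point can be made concrete. Here is a minimal sketch, with illustrative numbers of my own choosing (not SBF's actual bets): a repeated all-or-nothing wager can have a positive expected value on every single round, while the probability of surviving to collect anything still shrinks toward zero.

```python
# Illustrative assumption: an all-in bet that with probability 0.5
# triples the bankroll and with probability 0.5 wipes it out.
# Each round has expected value 1.5x, yet repeating it guarantees ruin.

def repeated_bet_stats(p_win: float, win_multiple: float, n_bets: int):
    """Expected bankroll multiple and survival probability after
    n successive all-in bets (losing once means losing everything)."""
    expected_multiple = (p_win * win_multiple) ** n_bets
    survival_prob = p_win ** n_bets
    return expected_multiple, survival_prob

for n in (1, 10, 20):
    ev, surv = repeated_bet_stats(p_win=0.5, win_multiple=3.0, n_bets=n)
    print(f"after {n:>2} bets: expected multiple = {ev:10.1f}, "
          f"P(still solvent) = {surv:.6f}")
```

The expected multiple explodes while the survival probability collapses: the expectation is carried almost entirely by branches that, in the long run, almost never happen. That is exactly why maximizing naive expected value round after round is a losing strategy for an infinite game.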
Epistemism is a philosophy whose decision rule is to always choose the option that maximizes humanity's knowledge (more precisely, to maximize the optionality of knowledge that will help sentience persevere in the universe over the long term, but for this purpose it can be simplified to just maximizing knowledge). The small amount of relevant knowledge created during SBF's short period of largesse toward longtermist causes is completely dwarfed by the loss of opportunity his implosion imposes on the wider community whose work could have created that much-needed knowledge. (For more details on Epistemism, the full framework can be found here.)
So even assuming that SBF was perfectly moral, we can see that his actions were not rational. And of course, it's a big if to assume that he was perfectly moral. It's too early to tell, but at this point it seems more likely that he was not, given both the Kelsey Piper texts and the sheer scale of the debacle.
Therefore, this is not an occasion to throw out all attempts at moral calculus and return to living in the moral dark ages, where we operate on folk wisdom about actions being good or evil, or declare there is too much uncertainty to even try to choose. Rather, it's a good reminder to always apply the moral-parliament view and not lean too heavily on one specific moral goal.
And, most importantly, to realize that we had probably put SBF in the wrong reference class: the very small group of EA billionaires whose actions are necessarily positive for the world. In hindsight, the much more likely reference class was that of young people who become billionaires too quickly, acquire palaces in the Bahamas, and are sufficiently seduced by the feeling of being larger than life that they quickly give up on any desire to stay within legal and moral bounds. The base rate for disasters involving the latter is, of course, vastly higher.