Cross-posted from Epistemism.

A lot of commentators on the SBF saga seem to have jumped on the opportunity to stick their knives into effective altruism and longtermism. The surge of interest in What We Owe the Future earlier this year perhaps primed a lot of people to look for an opportunity to discredit this fairly new philosophy.

For them, seemingly, the SBF situation means that the core of effective altruism is wrong, i.e. that it would be wrong to apply a moral calculus at all to one’s decisions. They almost seem to argue that because SBF acted the way he did, effective altruism is doomed and we must go back to gut feelings about morals, barely-evidenced heuristics and a time horizon not extending past our nose.

And clearly, what SBF did was completely wrong according to common-sense and here-and-now morality. He gambled with the deposits of regular people and threatened their livelihoods, and whatever upside he may have banked on pales in comparison. Our current governance and judicial systems work perfectly well for such cases, and SBF will hopefully face appropriate consequences.

However, even if you somehow try to put that to the side and evaluate SBF purely as a longtermist, he is still just wrong and in fact as guilty of short-termism as any other Ponzi schemer. If we look at the end result of his actions, he basically threw a bunch of money at effective altruism for a very short period of time and then blew up. His donations to current global health and well-being will have had some short-term benefits, but those are tiny compared to existing donations in those areas.

His donations to longtermist causes will have led to the founding of some new organizations which may have attracted some smart people to the EA and longtermist cause. However, far more people in EA will now be losing their faith in it because of his actions. And even more people who might have been positively inclined to EA and longtermism will now hear about it in the SBF context first, and therefore be forever put off. Further, any projects that SBF-funded organizations would have kicked off would barely have had time to yield any benefits.

The long term is by definition a long game. An infinite game, we might even say, using Carse’s terminology. Therefore, St Petersburging oneself over and over is actually counter-productive to long-term benefits. We can see this by looking at various variants of longtermism. Under Bostrom’s maxipok rule, SBF constantly St Petersburging his bets does not maximize the likelihood that humanity’s future will be ok. And under the variant of longtermism that I’m currently developing – Epistemism – where I’m trying to make the decision rule even simpler, SBF’s actions are also a fail.
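The St Petersburg point can be made concrete with a toy simulation (the bet and its odds are illustrative assumptions of mine, not anything SBF actually wagered): each agent repeatedly takes a triple-or-nothing coin flip. Every individual bet has a positive expected value of 1.5x, yet an agent who replays it indefinitely goes bust with probability approaching one – one-shot expected-value reasoning is catastrophic as a long-run strategy.

```python
import random

def simulate(agents=100_000, rounds=10, seed=0):
    """Each agent repeatedly takes a triple-or-nothing bet at 50/50 odds.
    Per-round expected value is 1.5x, but survival probability is 0.5**rounds,
    so almost everyone ends up ruined while the mean wealth still grows."""
    rng = random.Random(seed)
    survivors = 0
    total_wealth = 0.0
    for _ in range(agents):
        wealth = 1.0
        for _ in range(rounds):
            if rng.random() < 0.5:
                wealth *= 3.0       # win: triple the stake
            else:
                wealth = 0.0        # lose: total ruin, stop betting
                break
        if wealth > 0:
            survivors += 1
            total_wealth += wealth
    return survivors / agents, total_wealth / agents

frac_surviving, mean_wealth = simulate()
# Analytically: P(survive 10 rounds) = 0.5**10 ≈ 0.1%, E[wealth] = 1.5**10 ≈ 57.7
print(f"surviving: {frac_surviving:.4%}, mean wealth: {mean_wealth:.1f}")
```

The mean outcome looks great (roughly a 58x return on paper) precisely because a vanishing sliver of lucky agents holds astronomical wealth; maximizing expected value and maximizing the probability of an ok outcome come apart completely here, which is the maxipok objection in miniature.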

Epistemism is a philosophy where the decision rule is to always choose the option that maximizes humanity’s knowledge (specifically, to maximize the optionality of knowledge that will help sentience persevere in the universe for the long term, but for this purpose, this can be simplified to just maximizing knowledge). The small amount of relevant knowledge that was created during the short period of SBF largesse to longtermist causes is completely dwarfed by the loss of opportunity that his implosion imposes on the overall community of those whose actions may have led to the creation of that much-needed knowledge. (For more details on Epistemism, the full framework can be found here.)

So even assuming that SBF was perfectly moral, we can see that his actions were not rational. And of course, it’s a big if to assume that he was perfectly moral. It’s too early to tell, but at this point, given both the Kelsey Piper texts and the sheer scale of the debacle, it seems more likely that he was not.

Therefore, this is not an occasion to throw out any attempts at doing moral calculus and go back to living in the moral dark ages, where we operate with folk wisdom about actions being good or evil, or declare that there is too much uncertainty to even try to choose. Rather, it’s a good reminder to always apply the moral parliament view and not lean too heavily on one specific moral goal.

And, most importantly, to realize that we had probably put SBF in the wrong reference class – the very small group of EA billionaires whose actions are necessarily positive for the world – rather than the, in hindsight, much more likely reference class: young people who become billionaires too quickly, acquire palaces in the Bahamas, and are sufficiently seduced by the feeling of being larger than life that they quickly abandon any desire to stay within legal and moral bounds. The base rate for disasters involving the latter is, of course, vastly higher.



I agree that calls to "throw out [all] attempts at doing moral calculus" are overreactions. The badness of fraud (etc.) is not a reason to reject longtermism, or earning to give, or other ambitious attempts to promote the good that are perfectly compatible with integrity and good citizenship.  It merely suggests that these good goals should be pursued with integrity, rather than fraudulently.

But I do think it would be a mistake for people to only act with integrity conditional on explicit calculations yielding that result on each occasion.  Rather, I think we really should throw out some attempts at doing moral calculus. (It's never been a part of mainstream EA that you should try to violate rights for the greater good, for example.  So I think the present discussions are really just making more explicit constraints on utilitarian reasoning that were really there all along.)

When folks wonder whether EA has the philosophical resources to condemn fraud etc., I think it's worth flagging three (mutually compatible) options:

(1) Some EAs accept (non-instrumental) deontic constraints, compatibly with a utilitarian-flavoured conception of beneficence that applies within these constraints.

(2) Others may at least give sufficient credence to deontological views that they end up approximating such acceptance when taking account of moral uncertainty (as you suggest).

(3) But in any case, we shouldn't believe that fraud and bad citizenship are effective means to promoting altruistic goals, so even pure utilitarianism can straightforwardly condemn such behaviour.

I don't think that (2) alone will necessarily do much, because I don't think we really ought to give much credence to deontology.  I give deontology negligible – much less than 1% – credence myself (for many reasons, including this argument).  Since my own uncertainty effectively only ranges over different variants of consequentialism, this uncertainty doesn't seem sufficient to make much practical difference in this respect.

So I'm personally inclined to put more weight on heuristics that firmly warn against fraud and dishonesty on instrumental grounds.  I'm not sure whether this is what you're dismissing as "barely-evidenced heuristics", as I think our common-sense grasp of social life and agential unreliability actually provides very strong grounds (albeit not explicit quantitative evidence) for believing that these particular heuristics are more reliable than first-pass calculations favouring fraud.  And the FTX disaster (at least if truly motivated by naive utilitarianism, which I agree is unclear) constitutes yet further evidence in support of this, and so ought to prompt rethinking from those who disagree on this point.

As you say, we can see that any recent fraudulent actions were not truly rational (or prudent) given utilitarian goals. But that's a point in support of explanation (3), not (2).

I think it is more interesting to think of other people as rational agents. If bitcoin had grown to 100K, as was widely expected in 2021, SBF's bets would have paid off and he would have become the first trillionaire. He would also have been able to return all the money he took from creditors.

He may have understood that there was only something like a 10 per cent chance of becoming a trillionaire, but if he thought that a trillion dollars for preventing x-risks was the only chance to save humanity, then he knew he should bet on this opportunity.

Now we live in a timeline where he lost, and it is tempting to say that he was irrational or mistaken. But maybe he was not.