I care deeply about the future of humanity—more so than I care about anything else in the world. And I believe that Sam and others at FTX shared that care for the world.
Nevertheless, if some hypothetical person had come to me several years ago and asked “Is it worth it to engage in fraud to send billions of dollars to effective causes?”, I would have said unequivocally no.
At this stage, it is quite unclear just from public information exactly what happened to FTX, and I don't want to accuse anyone of anything they didn't do. However, it is starting to look increasingly likely that, even if FTX's handling of its customers' money was not technically legally fraudulent, it was fraudulent in spirit.
And regardless of whether FTX's business was in fact fraudulent, it is clear that many people—customers and employees—have been deeply hurt by FTX's collapse. People's life savings and careers were very rapidly wiped out. I think that compassion and support for those people are very important. In addition, I think there's another thing that we as a community have an obligation to do right now.
Assuming FTX's business was in fact fraudulent, I think that we—as people who unknowingly benefitted from it and whose work for the world was potentially used to whitewash it—have an obligation to condemn it in no uncertain terms. This is especially true for public figures who supported or were associated with FTX or its endeavors.
I don't want a witch hunt, and I don't think anyone should start pulling out pitchforks, so I think we should avoid focusing on any individual people here. We likely won't know for a long time exactly who was responsible for what, nor do I think it really matters—what's done is done, and what's important now is making very clear where EA stands with regard to fraudulent activity, not throwing any individual people under the bus.
Right now, I think the best course of action is for us—and I mean all of us, anyone who has any sort of a public platform—to make clear that we don't support fraud done in the service of effective altruism. Regardless of what FTX did or did not do, I think that is a statement that should be clearly and unambiguously defensible and that we should be happy to stand by regardless of what comes out. And I think it is an important statement for us to make: outside observers will be looking to see what EA has to say about all of this, and I think we need to be very clear that fraud is not something that we ever support.
In that spirit, I think it's worth us carefully confronting the moral question here: is fraud in the service of raising money for effective causes wrong? This is a thorny moral question that is worth nuanced discussion, and I don't claim to have all the answers.
Nevertheless, I think fraud in the service of effective altruism is basically unacceptable—and that's coming from someone who is about as hardcore a total utilitarian as it is possible to be.
When we, as humans, consider whether or not it makes sense to break the rules for our own benefit, we are running on corrupted hardware: we are very good at justifying to ourselves that seizing money and power for our own benefit is really for the good of everyone. If I found myself in a situation where it seemed to me like seizing power for myself was net good, I would worry that in fact I was fooling myself—and even if I was pretty sure I wasn't fooling myself, I would still worry that I was falling prey to the unilateralist's curse if it wasn't very clearly a good idea to others as well.
Additionally, if you're familiar with decision theory, you'll know that credibly pre-committing to follow certain principles—such as never engaging in fraud—is extremely advantageous, as it makes clear to other agents that you are a trustworthy actor who can be relied upon. In my opinion, such strategies of credible pre-commitment are extremely important for cooperation and coordination.
Furthermore, I will point out that if FTX did engage in fraud here, it was clearly not in fact a good idea: I think the lasting consequences to EA—and the damage caused by FTX to all of its customers and employees—will likely outweigh the altruistic funding FTX has already provided to effective causes.
Even if such precommitments weren't infinitely advantageous, it seems like you'd have to be unrealistically sure that you could get away with shadiness with no bad consequences before risking it. If the downsides of getting caught are bad enough, then you can never be sufficiently confident in practice. And if the downside risk of some action isn't quite as devastating as "maybe the entire EA movement has its reputation ruined," then it might anyway be the better move to come clean right away. For instance, if you're only 0.5 billion in the hole out of 30 billion total assets (say), and you've conducted your business with integrity up to that point, why not admit that you fucked up and ask for a bailout? The fact that you come clean should lend you credibility and goodwill, which would mitigate the damage. Doubling down, on the other hand, makes things a lot worse. Gambling to get back multiple billions really doesn't seem wise, because if it were risk-free to make billions then a lot more people would be billionaires...
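To make that arithmetic concrete, here's a minimal toy sketch of the expected-value comparison. Every number in it (the reputational cost of exposed fraud, the cost of admitting the shortfall, the win probabilities) is an illustrative assumption, not an estimate of FTX's actual situation:

```python
# Toy expected-value comparison: come clean about a $0.5B shortfall
# vs. quietly gamble to trade your way back to solvency.
# All figures are illustrative assumptions, in $ billions.

ASSETS = 30.0            # total assets
HOLE = 0.5               # shortfall
ADMISSION_COST = 1.0     # assumed reputational hit for admitting the hole
COLLAPSE_COST = 50.0     # assumed extra damage (to customers, employees,
                         # and the movement) if the fraud is exposed

def come_clean() -> float:
    """Admit the shortfall and seek a bailout: eat the hole plus some
    embarrassment, but the business and its goodwill survive."""
    return ASSETS - HOLE - ADMISSION_COST

def gamble(p_win: float) -> float:
    """Bet customer funds to fill the hole: with probability p_win it
    works and nobody knows; otherwise everything collapses and the
    wider reputational damage lands on top."""
    return p_win * ASSETS + (1 - p_win) * (-COLLAPSE_COST)

if __name__ == "__main__":
    print(f"come clean:            {come_clean():5.1f}")
    for p in (0.50, 0.90, 0.99):
        print(f"gamble (p_win={p:.2f}): {gamble(p):5.1f}")
    # Break-even: p * ASSETS + (1 - p) * (-COLLAPSE_COST) = come_clean()
    p_star = (come_clean() + COLLAPSE_COST) / (ASSETS + COLLAPSE_COST)
    print(f"break-even p_win:      {p_star:.3f}")
```

Under these made-up numbers, the gamble only beats coming clean once you're more than about 98% sure it works, which is exactly the kind of "unrealistically sure" confidence the argument says you can never actually have in practice.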
In any case, faced with the choice of whether to precommit to always act with integrity, it's not necessary for the pro-integrity arguments to be "infinitely strong." The relevant question is "is the precommitment better in expected value or not?" (given the range of circumstances you expect in your future). And the answer here seems to be "yes." (Somewhat separately, I think people tend to underestimate how powerful and motivating it can be to have leadership with high integrity – it opens doors that would otherwise stay closed.)
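One stylized way to write that comparison down (the symbols here are mine, introduced purely for illustration): precommitting beats keeping a deceitful option open whenever

$$\mathbb{E}[V \mid \text{precommit}] = V_0 + T \;>\; V_0 + p\,G - C = \mathbb{E}[V \mid \text{keep the option}],$$

i.e. whenever $T + C > p\,G$, where $V_0$ is the common baseline, $T$ is the trust and coordination value of a credible commitment, $p$ is the probability that a "get away with it" opportunity ever actually arises, $G$ is its one-off gain, and $C$ is the standing cost of maintaining the option (exactly the cost the next paragraph argues is not zero).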
You might say: "That's a false dilemma; that choice sounds artificially narrow. What if I can make a sophisticated precommitment that says I'll act with integrity under almost all circumstances, except if the value at stake is (e.g.) 100 billion and I'm ultra-sure I can get away with it?" Okay, decent argument. But I don't think it goes through.

If you were a perfect utilitarian robot with infinitely malleable psychology and perfect rationality, maybe then it would go through. Maybe you'd have some kind of psychological "backdoor" programmed in where you activate "deceitful mode" if you ever find yourself in a situation where you can get away with >100 billion in profits. The problem in practice, though, is: when do you notice whether it's a good time to activate "deceitful mode"? To know when to activate it, you have to think hypothetically-deceitful thoughts even earlier than the point of actually triggering the backdoor. Moreover, you have to take actions to preserve your ability to be a successful deceiver later on. (E.g., people who deceive others tend to have a habit of generally not proactively sharing a lot of information about their motives and "reasons for acting," while high-integrity people do the opposite. This is a real tradeoff – so which side do you pick?) These things aren't cost-free! (Not even for perfect utilitarian robots, and certainly not for humans, where parts of our cognition cannot be shut off at will.)

In reality, the situation is like this: you either train your psychology, your "inner elephant in the brain," to have integrity to the very best of your abilities (it's already hard enough!), or you do not. Retaining the ability to turn into a liar and deceitful manipulator "later on" doesn't come cost-free; it changes you. If you're planning to do it when 100 billion is at stake, that'll reflect in how you approach other issues, too. (See also my comment in this comment section for more reasons why I don't think it's psychologically plausible for people to simultaneously be great liars and deceivers but also act perfectly as though they have high integrity.)