This is a repost from a Twitter thread I made last night. It reads a little oddly when presented as a Forum post, but I wanted to have the content shared here for those not on Twitter.
This is a thread of my thoughts and feelings about the actions that led to FTX’s bankruptcy, and the enormous harm that was caused as a result, involving the likely loss of many thousands of innocent people’s savings.
Based on publicly available information, it seems to me more likely than not that senior leadership at FTX used customer deposits to bail out Alameda, despite terms of service prohibiting this, and a (later deleted) tweet from Sam claiming customer deposits are never invested.
Some places making the case for this view include this article from the Wall Street Journal, this tweet from jonwu.eth, and this article from Bloomberg (and follow-on articles).
I am not certain that this is what happened. I haven’t been in contact with anyone at FTX (other than those at Future Fund), except a short email to resign from my unpaid advisor role at Future Fund. If new information vindicates FTX, I will change my view and offer an apology.
But if there was deception and misuse of funds, I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception.
I want to make it utterly clear: if those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community.
If this is what happened, then I cannot in words convey how strongly I condemn what they did. I had put my trust in Sam, and if he lied and misused customer funds he betrayed me, just as he betrayed his customers, his employees, his investors, & the communities he was a part of.
For years, the EA community has emphasised the importance of integrity, honesty, and respect for common-sense moral constraints. If customer funds were misused, then Sam did not listen; he must have thought he was above such considerations.
A clear-thinking EA should strongly oppose “ends justify the means” reasoning. I hope to write more about this soon. In the meantime, here are some links to writings produced over the years.
These are some relevant sections from What We Owe The Future:
Here is Toby Ord in The Precipice:
Here is Holden Karnofsky: https://forum.effectivealtruism.org/posts/T975ydo3mx8onH3iS/ea-is-about-maximization-and-maximization-is-perilous
Here are the Centre for Effective Altruism’s Guiding Principles: https://forum.effectivealtruism.org/posts/Zxuksovf23qWgs37J/introducing-cea-s-guiding-principles
If FTX misused customer funds, then I personally will have much to reflect on. Sam and FTX had a lot of goodwill – and some of that goodwill was the result of association with ideas I have spent my career promoting. If that goodwill laundered fraud, I am ashamed.
As a community, too, we will need to reflect on what has happened, and on how we could reduce the chance of anything like this happening again. Yes, we want to make the world better, and yes, we should be ambitious in the pursuit of that.
But that in no way justifies fraud. If you think that you’re the exception, you’re fooling yourself.
We must make clear that we do not see ourselves as above common-sense ethical norms, and must engage criticism with humility.
I know that others from inside and outside of the community have worried about the misuse of EA ideas in ways that could cause harm. I used to think that these worries, though worth taking seriously, were speculative and unlikely.
I was probably wrong. I will be reflecting on this in the days and months to come, and thinking through what should change.
Coming back to this, I don't think my point is straightforwardly fair. My post above marshals a lot of evidence in a way that makes the point seem more obvious than it really is.
I think that bars like “does the person have public writing showing they deeply understand EA principles” are generally reasonable and have often worked decently well.
The case of SBF does seem extremely unusual to me. Protecting against it isn't just a matter of applying some “obvious set of regular measures”; it might take a fair amount of thought and effort.
I think we should be figuring out how to put in that thought and effort. We should be working to find and adopt forms of verification that would have at least caught a lite-SBF.
So, while the example of SBF seemed too good not to share, it is extreme, and shouldn't be taken as typical of the cases to be worried about.
I still think we should set the bar higher than a few blog posts in situations like this, though, and I assume Will would agree. (I take it he meant this much more as a quick public statement than as real evidence of innocence offered to EAs.)