I care deeply about the future of humanity—more so than I care about anything else in the world. And I believe that Sam and others at FTX shared that care for the world.
Nevertheless, if some hypothetical person had come to me several years ago and asked “Is it worth it to engage in fraud to send billions of dollars to effective causes?”, I would have said unequivocally no.
At this stage, it is quite unclear from public information alone exactly what happened at FTX, and I don't want to accuse anyone of anything they didn't do. However, it is starting to look increasingly likely that, even if FTX's handling of its customers' money was not technically fraudulent in a legal sense, it was fraudulent in spirit.
And regardless of whether FTX's business was in fact fraudulent, it is clear that many people—customers and employees—have been deeply hurt by FTX's collapse. People's life savings and careers were very rapidly wiped out. I think that compassion and support for those people is very important. In addition, I think there's another thing that we as a community have an obligation to do right now as well.
Assuming FTX's business was in fact fraudulent, I think that we—as people who unknowingly benefitted from it and whose work for the world was potentially used to whitewash it—have an obligation to condemn it in no uncertain terms. This is especially true for public figures who supported or were associated with FTX or its endeavors.
I don't want a witch hunt, and I don't think anyone should start pulling out pitchforks, so I think we should avoid focusing on any individual people here. We likely won't know for a long time exactly who was responsible for what, nor do I think it really matters—what's done is done, and what's important now is making very clear where EA stands with regard to fraudulent activity, not throwing any individual people under the bus.
Right now, I think the best course of action is for us—and I mean all of us, anyone who has any sort of a public platform—to make clear that we don't support fraud done in the service of effective altruism. Regardless of what FTX did or did not do, I think that is a statement that should be clearly and unambiguously defensible and that we should be happy to stand by regardless of what comes out. And I think it is an important statement for us to make: outside observers will be looking to see what EA has to say about all of this, and I think we need to be very clear that fraud is not something that we ever support.
In that spirit, I think it's worth us carefully confronting the moral question here: is fraud in the service of raising money for effective causes wrong? This is a thorny moral question that is worth nuanced discussion, and I don't claim to have all the answers.
Nevertheless, I think fraud in the service of effective altruism is basically unacceptable—and that's as someone who is about as hardcore of a total utilitarian as it is possible to be.
When we, as humans, consider whether or not it makes sense to break the rules for our own benefit, we are running on corrupted hardware: we are very good at justifying to ourselves that seizing money and power for our own benefit is really for the good of everyone. If I found myself in a situation where it seemed to me like seizing power for myself was net good, I would worry that in fact I was fooling myself—and even if I was pretty sure I wasn't fooling myself, I would still worry that I was falling prey to the unilateralist's curse if it wasn't very clearly a good idea to others as well.
Additionally, if you're familiar with decision theory, you'll know that credibly pre-committing to follow certain principles—such as never engaging in fraud—is extremely advantageous, as it makes clear to other agents that you are a trustworthy actor who can be relied upon. In my opinion, such strategies of credible pre-commitment are extremely important for cooperation and coordination.
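To make the decision-theoretic point concrete, here is a toy model of my own (not from the original post, and with made-up payoff numbers): a repeated trust game in which a partner keeps cooperating only with agents whose record is clean. An agent who defects once grabs a one-time windfall but forfeits all future cooperation, so the credibly pre-committed agent comes out ahead over time.

```python
def simulate(agent_defects_at, rounds=20, coop_payoff=1, defect_payoff=5):
    """Total payoff for an agent over `rounds` interactions.

    agent_defects_at: round index at which the agent defects (None = never).
    Defecting yields a one-time windfall, but the partner permanently
    withdraws cooperation afterwards.
    """
    total = 0
    trusted = True
    for t in range(rounds):
        if not trusted:
            break  # no one will deal with a known defector
        if agent_defects_at == t:
            total += defect_payoff  # one-time gain from breaking the rules
            trusted = False         # reputation is destroyed
        else:
            total += coop_payoff    # steady gains from honest cooperation
    return total

committed = simulate(agent_defects_at=None)  # credibly pre-committed to honesty
opportunist = simulate(agent_defects_at=3)   # defects when the windfall tempts
print(committed, opportunist)                # 20 vs. 8
```

Obviously the real world is far messier than this sketch, but the qualitative point stands: with enough repeated interaction, the long-run value of being reliably trustworthy dominates any one-shot gain from defection.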
Furthermore, I will point out that if FTX did engage in fraud here, it was in fact not a good idea even in consequentialist terms: I think the lasting damage to EA—and the harm done by FTX to its customers and employees—will likely outweigh the altruistic funding FTX already provided to effective causes.
TLDR because I got long-winded: If you ever find yourself planning to commit some morally horrible act in the name of a good outcome, stop. Those kinds of choices aren't made in the real world; they are a thought exercise (and usually a really stupid one).
Long version:
Sorry that you got downvoted hard; keep in mind that knee-jerk reactions are probably pretty strong right now. While the disagrees are justified, the downvotes probably are not (I'm assuming this is a legit question).
I'm constantly looking to learn more about ethics, philosophy, etc., and I recently got introduced to this website: What is Utilitarianism? | Utilitarianism.net, which I really liked. There are a few things I disagree with or feel could have been explored more, but I think it's good overall.
To restate and make sure I understand where you're coming from: I think you're framing the current objections like a trolley problem, or its more advanced version, the transplant case. (Addressed in 8. Objections to Utilitarianism and Responses – Utilitarianism.net, second paragraph under "General Ways of Responding to Objections to Utilitarianism".) If I were going to reword it, I would put it something like this:
"When considered in large enough situations, the ideal of precommitment would be swamped by the potential utility gains for defecting."
This is the second response commonly used in defense of the utilitarian framework: "debunk the moral intuition" (paragraph 5 in the same chapter and section).
I believe, and I think most of us believe, that this isn't the appropriate response to this situation, because in this case the moral intuition is correct. Any misbehavior on this scale weakens the economic system, harms thousands if not millions of people, and erodes trust in society itself.
A response you might offer is something like, "But what if the stakes were even higher?"
And I agree: it would be pretty ridiculous if, after the Avengers saved NYC from a Chitauri invasion, someone tried to sue the Hulk for using their car to crush an alien or something. We would all agree with you there; the illegal action (crushing a car) is justified by the alternative (aliens killing us all).
The problem with that kind of scale, however, is that if you ever find yourself in a situation where you think, "I'm the only one who can save everyone; all it takes is [insert thing that no one else wants me to do]," stop what you're doing and do what the people around you tell you to do.
If you think you're Jesus, you're probably not Jesus (or, in this case, the Hulk).
That's why the discussions of corrupted hardware and the unilateralist's curse (links provided by OP) are so important.
For more discussion of this, see Elements and Types of Utilitarianism – Utilitarianism.net, under "Multi-level Utilitarianism Versus Single-level Utilitarianism."
One must-read passage says: "In contrast, to our knowledge no one has ever defended single-level utilitarianism, including the classical utilitarians. Deliberately calculating the expected consequences of our actions is error-prone and risks falling into decision paralysis."
I would encourage you to read that whole section (and the one that follows it, if you think much of rule utilitarianism), as I think one of the most common problems with people's understanding of utilitarianism is missing the single-level vs. multi-level distinction.