We were shocked and immensely saddened to learn of the recent events at FTX. Our hearts go out to the thousands of FTX customers whose finances may have been jeopardized or destroyed.
We are now unable to perform our work or process grants, and we have fundamental questions about the legitimacy and integrity of the business operations that were funding the FTX Foundation and the Future Fund. As a result, we resigned earlier today.
We don’t yet have a full picture of what went wrong, and we are following the news online as it unfolds. But to the extent that the leadership of FTX may have engaged in deception or dishonesty, we condemn that behavior in the strongest possible terms. We believe that being a good actor in the world means striving to act with honesty and integrity.
We are devastated to say that it looks likely that there are many committed grants that the Future Fund will be unable to honor. We are so sorry that it has come to this. We are no longer employed by the Future Fund, but, in our personal capacities, we are exploring ways to help with this awful situation. We joined the Future Fund to support incredible people and projects, and this outcome is heartbreaking to us.
We appreciate the grantees' work to help build a better future, and we have been honored to support it. We're sorry that we won't be able to continue to do so going forward, and we deeply regret the difficult, painful, and stressful position that many of you are now in.
To reach us, grantees may email grantee-reachout@googlegroups.com. We know grantees must have many questions, and in our personal capacities we will try to answer them as best we can given the circumstances.
Nick Beckstead
Leopold Aschenbrenner
Avital Balwit
Ketan Ramakrishnan
Will MacAskill
Thanks for the reply!
In terms of public interviews, I think the most interesting/relevant parts are where he expresses willingness to bite consequentialist/utilitarian bullets in a way that's a bit on the edge of the mainstream Overton window, but that I believe would've been within the EA Overton window prior to recent events (unsure about now). BTW, I got these examples from Marginal Revolution comments/Twitter.
This one seems most relevant -- the first question Patrick asks Sam is whether the ends justify the means.
In this interview, search for "So why then should we ever spend a whole lot of money on life extension since we can just replace people pretty cheaply?" and "Should a Benthamite be risk-neutral with regard to social welfare?"
In any case, given that you think people should put hardly any weight on your assessment, it seems to me that as a community we should be doing a fair amount of introspection. Here are some things I've been thinking about:
We should update away from "EA exceptionalism" and towards self-doubt. (EDIT: I like this thread about "EA exceptionalism", though I don't agree with all the claims.) It sounds like you think more self-doubt would've been really helpful for Sam. IMO, self-doubt should increase in proportion to one's power. (Trying to "more than cancel out" the normal human tendency towards decreased self-doubt as power increases.) This one is tricky, because it seems bad to tell people who already experience Chidi Anagonye-style crippling self-doubt that they should self-doubt even more. But it certainly seems good for our average level of self-doubt to increase, even if self-doubt need not increase in every individual EA. Related: Having the self-awareness to know where you are on the self-doubt spectrum seems like an important and unsolved problem.
I'm also wondering if I should think of "morality" as being two different things: A descriptive account of what I value, and (separately) a prescriptive code of behavior. And then, beyond just endorsing the abstract concept of ethical injunctions, maybe it would be good to take a stab at codifying exactly what they should be. The idea seems a bit under-operationalized, although it's likely there are relevant blog posts that aren't coming to my mind. Like, I notice that the EA who's most associated with the phrase "ethical injunctions" is also the biggest advocate of drastic unilateral action, and I'm not sure how to reconcile that (not trying to throw shade -- genuinely unsure). EDIT: This is a great tweet; related.
Institutional safeguards are also looking better, but I was already very much in favor of those and puzzled by the lack of EA interest, so I can't say it was a huge update for me personally.