There remains a large amount of uncertainty about what exactly happened inside FTX and Alameda. I do not offer any new takes on what occurred. However, there are the inklings of some "lessons" from the situation that are incorrect regardless of how the details shake out. Pointing these out now may seem in poor taste while many are still coming to terms with what happened, but it is important to do so before they become the canonical lessons.
1. Ambition was a mistake
There have been a number of calls for EA to "go back to bed nets." It is notable that this refrain conflates the alleged illegal and unethical behavior from FTX/Alameda with the philosophical position of longtermism. Rather than evaluating the two issues as distinct, the call seems to assume both were born out of the same runaway ambition.
Logically, this is obviously not the case. Longtermism grew in popularity during SBF's rise, and the Future Fund did focus on longtermist projects, but the FTX/Alameda situation has no bearing on the truth of longtermism or on how associated projects should be prioritized.
To the extent that both becoming incredibly rich and affecting the longterm trajectory of humanity are "ambitious" goals, ambition is not the problem. Committing financial crimes is a problem. Longtermism has problems, like knowing how to act given uncertainty about the future. But an enlightened understanding of ambition accommodates these problems: We should be ambitious in our goals while understanding our limitations in solving them.
There is an uncomfortable reality that SBF symbolized a new level of ambition for EA. That sense of ambition should be retained. His malfeasance should not be.
This is not to say that there are no lessons to learn about transparency, overconfidence, centralized power, trusting leaders, etc. from these events. But all of these are distinct from a lesson about ambition, which depends more on vague allusions to Icarus than on argument.
2. No more earning to give
I am not sure whether this is being learned as a "lesson" or whether the situation simply "leaves a bad taste" in EAs' mouths about earning to give. Either way, the alleged actions of the FTX/Alameda team in no way suggest that earning money to donate to effective causes is a poor career path.
Certain employees of FTX/Alameda seem to have been doing distinctly unethical work, such as grossly mismanaging client funds. One of the only arguments for why one might be open to working a possibly unethical job, like trading cryptocurrency, is that an ethically motivated actor would do that job more ethically than the next person would (from Todd and MacAskill). Earning to give never asked EAs to pursue unethical work; it encouraged them to pursue any line of work in an ethically upstanding way.
(I emphasize Todd and MacAskill conclude in their 2017 post: "We believe that in the vast majority of cases, it’s a mistake to pursue a career in which the direct effects of the work are seriously harmful").
Earning to give in the way it has always been advised, by doing work that is basically ethical, continues to be a highly promising route to impact. This is especially true now that total EA assets have significantly depreciated.
There is a risk that EAs do not continue to pursue earning to give, thinking either that it is icky post-FTX or that someone else has it covered. This is a poor strategy. It is imperative that some EAs who are well suited for founding companies shift their careers into entrepreneurship as soon as possible.
3. General FUD from nefarious actors
As the EA community reels from the FTX/Alameda blow-up, a number of actors with histories of hating EA have chimed in with threads about how this catastrophe is in line with X thing they already believed or knew was going on within EA.
I believe, at least in many cases, this is being done in straightforward bad faith. In some cases, it seems like a deliberate effort to sow division within the EA movement. This is the sort of speculative and poorly founded claim that is very unpopular on the Forums, but the possibility this may be happening should be taken seriously. When the same accounts are liking threads taking glee in this disaster or mocking the appearances and personal writing of FTX/Alameda employees, it seems unlikely that they are simply interested in helping EA.
EAs, in a reasonable aim to avoid tribalism, have taken these additional allegations very seriously. After all, one could, in theory, learn productive lessons from someone trying to antagonize them. Yet this is largely a mistake. Tribalism makes for poor reasoning, but so does naively engaging with a dishonest interlocutor who is trying to manipulate you.
There are legitimate questions here, like the nature of the early Alameda split. I encourage EAs to have these conversations internally or with sympathetic counterparties who are actually working on solving the newly revealed problems within EA, rather than with those celebrating EA's alleged downfall.
4. Most hot takes on ethics
Events are not evidence for the truth of philosophical positions. There was already a critique that expected value reasoning and utilitarianism could drive people to commit immoral actions or take excessive risks. The FTX/Alameda catastrophe gives no more evidence against utilitarianism than the famous utilitarian hospital thought experiment does, and no more than lying to the hypothetical murderer at the door gives against deontology.
Those frustrated or appalled by the situation should reflect on their ethical framework, but they should not make hasty and poorly founded jumps to "rule utilitarianism" or "virtue ethics" or newly invented blends of ethical systems. I appreciate the work of Richard Chappell and others in tempering these impulses. As Richard reminds us: "Remember, folks: If your conception of utilitarianism renders it *predictably* harmful, then you're thinking about it wrong." Deep ethical reflection is worth doing, but it tends to be better done in a reflective place rather than a reactionary one.
There is a separate set of critiques, such as how closely one should try to adhere to an ethical system. For example, Eliezer Yudkowsky recommends we "Go three-quarters of the way from deontology to utilitarianism and then stop . . . Stay there at least until you have become a god." There are also outstanding questions as to whether EA or utilitarianism could have encouraged SBF's actions. These lines of inquiry seem sensible in response to recent events, but they are distinct from analyzing the ground-level truth of ethical systems. And while they should be investigated, they should be investigated thoughtfully, as they are large and important questions, not hastily, before the facts are even properly established.
Finally, it's worth pointing out that moral uncertainty, the position Will MacAskill has prominently developed and advocated for, looks reasonably good.
In general, I hope that any lessons taken from recent events come after due reflection and many deep breaths. I hope everyone is doing OK, and I wish everyone negatively affected, inside and outside the EA community, my best.
Downvoted. I disagree quite strongly on points one and four, but that's a discussion for another day; I downvoted because point three is harmful.
If people with a long history of criticising EA have indeed claimed X for a long time, while EA-at-large has said not-X; and X is compatible with the events of the past week, while not-X is not (or is less obviously compatible, or renders those events more unexpected); then rational Bayesians should update towards the people with the long history of criticising EA. Just apply Bayes' rule: if P(events of the last week | X) > P(events of the last week | not-X), then you should increase your credence in X upon observing the events of the last week.
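To make the size of such an update concrete, here is a minimal worked example; the numbers are purely hypothetical and chosen only for illustration. Suppose a prior credence $P(X) = 0.2$ in the critics' claim, with $P(E \mid X) = 0.3$ and $P(E \mid \neg X) = 0.05$, where $E$ is the events of the last week. Then Bayes' rule gives

$$P(X \mid E) = \frac{P(E \mid X)\,P(X)}{P(E \mid X)\,P(X) + P(E \mid \neg X)\,P(\neg X)} = \frac{0.3 \times 0.2}{0.06 + 0.04} = 0.6,$$

so credence in $X$ rises from 0.2 to 0.6. The specific values don't matter; the direction of the update depends only on the likelihood ratio being greater than one.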
This reasoning holds whether or not these critics are speaking in bad faith, have personal issues with EA, or are acting irrationally. If being a bad-faith critic of EA provides you with better predictive power than being a relatively-uncritical member of the movement, then you should update so that you are closer to being a bad-faith critic of EA than to being a relatively-uncritical member of the movement. You probably shouldn't go all the way there (better to stop in the middle, somewhere around 'good-faith critic' or 'EA adjacent' or 'EA but quite suspicious of the movement's leadership'), but updating in that direction is the rational Bayesian thing to do.
To be sure, there's always a worry that the critics have fudged or falsified their predictions, saying something vaguely critical in the past which has since been sharpened into 'Several months ago, I predicted that this exact thing would happen!' This is the 'predicting the next recession' effect, and we should be vigilant about it. But while this is definitely happening in a lot of cases, in some of the most high-profile ones I don't think it applies: I think there were relatively concrete predictions made that a crisis of power and leadership of pretty much this kind was enabled by EA structures, and these predictions seem to have been closer to the mark than anything EA-at-large thought might happen.
I think there is a further sense in which many EAs seem to feel that their error was less one of prediction than of creativity: it's not so much that they made the wrong call on a variety of questions as that they simply didn't ask those questions. This is obviously not true of all EAs, but it is definitely true of some. In cases like this, listening more closely to critics - even bad-faith ones! - can open your mind up to a variety of different positions and reasoning styles that previously were not even present in your mind. This is not always inherently good, of course, but if an EA has reason to think that they have made a failure of creativity then it seems like a very positive way to go.
For more context about my worries: I think that it is possible that OP might be including me, and some things I have tweeted, in point three. I have quite a small follower count and nothing I wrote 'blew up' or anything, so it's definitely very unlikely; but I did tweet out several things pretty heavily critical of the movement in recent days which very strongly pattern-match the description given above, including pointing out that prior criticisms predicted these events pretty well, and relatively well-known EAs have reached out to me about what I wrote. Certainly, I 'felt seen' (as it were) while reading this post.
I don't think I am a 'nefarious actor', or have a history of 'hating EA', but I worry that in some segments of EA (not the whole of EA - some people have gone full self-flagellation, but in some segments) these kinds of terms are getting slung around far too liberally as part of a more general circling-the-wagons trend. And I worry that posts like this one legitimise slinging these terms around in this manner, by encouraging the thought that EA critics who are engaging in some (sometimes fully-justified) 'told you so' are just bad actors trying to destroy their tribe. EA needs to be more, not less, open to listening to critics - even bad-faith critics - after a disaster like this one. This is good Bayesianism, but it's also just proper humility.
I'm not sure what you mean by saying that my Bayesian argument fails in some cases? 'P(X|E)>P(X) if and only if P(E|X)>P(E|not-X)' is a theorem in the probability calculus (assuming no probabilities with value zero or one). If the likelihood ratio of X given E is greater than one, then upon observing E you should rationally update towards X.
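For completeness, here is a short sketch of why the equivalence holds, assuming $0 < P(X) < 1$ and $P(E) > 0$ (with $E$ the observed events):

$$\begin{aligned}
P(X \mid E) > P(X) &\iff \frac{P(E \mid X)}{P(E)} > 1 \\
&\iff P(E \mid X) > P(E \mid X)\,P(X) + P(E \mid \neg X)\,(1 - P(X)) \\
&\iff P(E \mid X)\,(1 - P(X)) > P(E \mid \neg X)\,(1 - P(X)) \\
&\iff P(E \mid X) > P(E \mid \neg X),
\end{aligned}$$

where the first step uses Bayes' rule, the second expands $P(E)$ by the law of total probability, and the last divides both sides by $1 - P(X) > 0$.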
If you just mean that there are some values of X which do not explain the events of the last week, such that P(events of the last week | X) ≤ P(events of the last week | not-X), this is true but trivial. Your post was about cases where 'this catastrophe is in line with X thing [critics] already believed'. In these cases, the rational thing to do is to update toward critics.