
There remains a large amount of uncertainty about what exactly happened inside FTX and Alameda. I do not offer any new takes on what occurred. However, there are inklings of some "lessons" from the situation that are incorrect regardless of how the details shake out. Pointing these out now may seem in poor taste while many are still coming to terms with what happened, but it is important to do so before they become the canonical lessons.

1. Ambition was a mistake

There have been a number of calls for EA to "go back to bed nets." It is notable that this refrain conflates the alleged illegal and unethical behavior from FTX/Alameda with the philosophical position of longtermism. Rather than evaluating the two issues as distinct, the call seems to assume both were born out of the same runaway ambition. 

Logically, this is obviously not the case. Longtermism grew in popularity during SBF's rise, and the Future Fund did focus on longtermist projects, but the FTX/Alameda situation has no bearing on the truth of longtermism or the prioritization of its associated projects.

To the extent that both becoming incredibly rich and affecting the longterm trajectory of humanity are "ambitious" goals, ambition is not the problem. Committing financial crimes is a problem. Longtermism has problems, like knowing how to act given uncertainty about the future. But an enlightened understanding of ambition accommodates these problems: We should be ambitious in our goals while understanding our limitations in solving them. 

There is an uncomfortable reality that SBF symbolized a new level of ambition for EA. That sense of ambition should be retained. His malfeasance should not be. 

This is not to deny that there may be lessons to learn about transparency, overconfidence, centralized power, trusting leaders, etc. from these events. But all of these are distinct from a lesson about ambition, which depends more on vague allusions to Icarus than on argument.

2. No more earning to give

I am not sure whether this is being learned as a "lesson" or if this situation simply "leaves a bad taste" in EAs' mouths about earning to give. The alleged actions of the FTX/Alameda team in no way suggest that earning money to donate to effective causes is a poor career path.

Certain employees of FTX/Alameda seem to have been doing distinctly unethical work, such as grossly mismanaging client funds. One of the only arguments for why one might be open to working a possibly unethical job like trading cryptocurrency is that an ethically motivated actor would do that job in a more ethical way than the next person would (from Todd and MacAskill). Earning to give never asked EAs to pursue unethical work, and encouraged them to pursue any line of work in an ethically upstanding way.

(I emphasize that Todd and MacAskill conclude in their 2017 post: "We believe that in the vast majority of cases, it’s a mistake to pursue a career in which the direct effects of the work are seriously harmful.")

Earning to give in the way it has always been advised - by doing work that is basically ethical - continues to be a highly promising route to impact. This is especially true now that total EA assets have significantly depreciated.

There is a risk that EAs do not continue to pursue earning to give, thinking either that it is icky post-FTX or that someone else has it covered. This is a poor strategy. It is imperative that some EAs who are well suited for founding companies shift their careers into entrepreneurship as soon as possible. 

3. General FUD from nefarious actors

As the EA community reels from the FTX/Alameda blow-up, a number of actors with histories of hating EA have chimed in with threads about how this catastrophe is in line with X thing they already believed or knew was going on within EA.

I believe, at least in many cases, this is being done in straightforward bad faith. In some cases, it seems like a deliberate effort to sow division within the EA movement. This is the sort of speculative and poorly founded claim that is very unpopular on the Forum, but the possibility that this is happening should be taken seriously. When the same accounts are liking threads taking glee in this disaster or mocking the appearances and personal writing of FTX/Alameda employees, it seems unlikely that they are simply interested in helping EA.

EAs, in a reasonable aim to avoid tribalism, have taken these additional allegations very seriously. After all, one could, in theory, learn productive lessons from someone trying to antagonize them. Yet this is largely a mistake. Tribalism makes for poor reasoning, but so does naively engaging with a dishonest interlocutor who is trying to manipulate you.

There are legitimate questions here, like the nature of the early Alameda split. I encourage EAs to have these conversations internally or with sympathetic counterparties who are actually working on solving the newly revealed problems with EA, rather than with those celebrating EA's alleged downfall.

4. Most hot takes on ethics 

Events are not evidence for the truth of philosophical positions. There was already a critique that expected value reasoning and utilitarianism could drive people to commit immoral actions or take excessively large risks. The FTX/Alameda catastrophe gives no more evidence against utilitarianism than the famous utilitarian hospital thought experiment does, and no more than lying to the hypothetical murderer at the door gives against deontology.

Those frustrated or appalled by the situation should reflect on their ethical framework, but they should not make hasty and poorly founded jumps to "rule utilitarianism" or "virtue ethics" or newly invented blends of ethical systems. I appreciate the work of Richard Chappell and others in tempering these impulses. As Richard reminds us: "Remember, folks: If your conception of utilitarianism renders it *predictably* harmful, then you're thinking about it wrong." Deep ethical reflection is worth doing, but it tends to be better done in a reflective place rather than a reactionary one. 

There is a separate set of critiques, such as how closely one should try to adhere to an ethical system. For example, Eliezer Yudkowsky recommends we "Go three-quarters of the way from deontology to utilitarianism and then stop . . . Stay there at least until you have become a god." There are also outstanding questions as to whether EA or utilitarianism could have encouraged SBF's actions. These lines of inquiry seem sensible in response to recent events, but they are distinct from analyzing the ground-level truth of ethical systems. And while they should be investigated, they should be investigated thoughtfully, as they are large and important questions, not hastily and without even the facts properly established.

Finally, it's worth pointing out that moral uncertainty - the position that Will MacAskill has developed and advocates for - looks reasonably good.

In general, I hope that any lessons taken from recent events come after due reflection and many deep breaths. I hope everyone is doing OK, and I wish everyone who was negatively affected, inside and outside the EA community, my best.
 

Comments

Excellent post. I hope everybody reads it and takes it on board.

One failure mode for EA will be over-reacting to black swan events like this that might not carry as much information about our organizations and our culture as we think they do. 

Sometimes a bad actor who fools people is just a bad actor who fools people, and they're not necessarily diagnostic of a more systemic organizational problem. They might be, but they might not be. 

We should be open to all possibilities at this point, and if EA decides it needs to tweak, nudge, update, or overhaul its culture and ethos, we should do so intelligently, carefully, strategically, and wisely -- rather than in a reactive, guilty, depressed, or self-flagellating panic.

I do think EA should be above treating this as a black swan event. Fraud in unregulated finance (crypto even more so), even if at least initially guided by good (not to speak of naively utilitarian) intentions, is to be expected. Most people did not expect this to happen with SBF/FTX, but some did. There's a lot of potential to learn from this and make the movement more resilient against future cases of funder fraud via guidelines and practices, e.g. clarifying that dirty money won't work towards achieving EA aims, and that EA credibility should not be lent to dubious practices.

Other than that, I agree with the gist of this post & comment, but it's also important to gradually update views. Upvoted the comment of John_Maxwel.

Sabs

This is laughably disingenuous. An unregulated offshore crypto exchange that had hired the Ultimate Bet lawyer and publicly ran a hedge fund on the side collapsing is like the exact opposite of a black swan; it's not even a grey swan, it's that swan that's definitely no longer a cygnet and is almost grown up but still has a few bits of darker fluff. Maybe no one thought it was likely, and I think almost no one thought FTX was $8bn in the hole or had quite such an atrocious balance sheet (IMO not even CZ thought this), but come on!

Sabs -- your tone here isn't really in the spirit of EA Forum norms, IMHO.

I've followed crypto news pretty closely for the last couple of years, and the consensus in crypto and finance generally was that FTX was a big, serious, secure, respectable operation, vetted and backed by many of the most prominent VCs and investors in the industry, and doing great work lobbying for crypto acceptance in Washington DC. 

That's my honest assessment of what most crypto insiders and investors thought, up until last week. If there had been big red flags around FTX that were commonly discussed in the crypto industry, I think I probably would have known about it. (I'm about 70% confident in this; but I could be wrong.)

Sure, there were some skeptics who pointed out potential problems with FTX. There are always skeptics and FUD-promoters regarding any crypto protocol, exchange, or business. If they happen to be right, they pop up later and say 'See, I told you so!'

Hindsight is always easy in these cases. But it's important to be empirically correct about whether there were, in fact, big warning signs about FTX that were being widely discussed in crypto and finance news and social media. There were a few, but not nearly as many red flags as there were around Luna, or Tether.

I agree with your critique that there were important red flags and am glad you pointed them out, but I think it's inappropriate to call the comment "laughably disingenuous", since that's a claim that the author is not being sincere. Most of us were blindsided by this, even if there were red flags we should have been paying attention to and worrying about. I think the definition of black swan could still apply to the EA community based on our subjective (but badly calibrated or poorly informed) beliefs about FTX:

The black swan theory or theory of black swan events is a metaphor that describes an event that comes as a surprise, has a major effect, and is often inappropriately rationalized after the fact with the benefit of hindsight.

(...)

Based on the author's criteria:

  1. The event is a surprise (to the observer).
  2. The event has a major effect.
  3. After the first recorded instance of the event, it is rationalized by hindsight, as if it could have been expected; that is, the relevant data were available but unaccounted for in risk mitigation programs. The same is true for the personal perception by individuals.

It was a black swan to many of us, but should not have been, and may not have been a black swan to outsiders who were paying more attention and giving more weight to the red flags.

And even if it isn't definitionally a black swan, that doesn't call for "laughably disingenuous", and you should assume good faith.

Maybe not everyone deserves the assumption of good faith? I feel like this forum perhaps just got a reminder about the value of not always thinking the best of people, but no, everyone just wants to forget all over again, maybe.

Lizka (moderator)

The moderation team has noticed a trend of comments from Sabs that break Forum norms. Specifically, instances of rude, hostile, or harsh language are strongly discouraged (and may be deleted) and do not adhere to our norm of keeping a generous and collaborative mindset.

This is a warning; please do better in the future. Continued violation of Forum norms may result in a temporary ban.

I want to be clear that this warning is in response to the tone and approach of the comments, not the stances taken by the commenter. We believe it’s really important to be able to discuss all perspectives on the situation with an open mind and without censoring any perspectives. We would like Sabs to continue contributing to these discussions in a respectful way.

I don't see how the third comment is objectionably 'harsh'? It is a straightforward description of how many conventional financial firms operate, relevant to the topic at hand, combined with (accurately) calling the parent comment nonsense. Is the objection that it contains a swear word? If that is the rule, it should probably be made explicit. (Also, 'harsh' does not appear in the Guide To Norms, with good reason, as the truth can be harsh!)

I don't think everyone deserves the assumption of good faith at all times, but you haven't given enough reason to believe Geoffrey Miller doesn't, and I'm pretty sure you can't. If you're going to make accusations, you should have good reasons to do so and explain them.  Merely contradicting something someone said is not nearly enough; people can be wrong without being disingenuous. Accusations make productive conversation more difficult, can be hurtful, can push people away from the community and may have other risks, so we shouldn't have a low bar for making them.

You also don't even have to privately assume the best of someone or good faith; just keep conversations civil and charitable, and don't make unsubstantiated accusations. If you want to argue that we should be more skeptical of people's motives, that's plausible and that can be a valuable discussion, but shouldn't be started by attacking another user without good reason.

With FTX, there were important red flags, including the ones you pointed out.

Downvoted. I disagree quite strongly on points one and four, but that's a discussion for another day; I downvoted because point three is harmful.

If people with a long history of criticising EA have indeed claimed X for a long time, while EA-at-large has said not-X; and X is compatible with the events of the past week, while not-X is not (or is less obviously compatible, or renders those events more unexpected); then rational Bayesians should update towards the people with the long history of criticising EA. Just apply Bayes' rule: if P(events of the last week | X) > P(events of the last week | not-X), then you should increase your credence in X upon observing the events of the last week.

This reasoning holds whether or not these critics are speaking in bad faith, have personal issues with EA, or are acting irrationally. If being a bad-faith critic of EA provides you with better predictive power than being a relatively-uncritical member of the movement, then you should update so that you are closer to being a bad-faith critic of EA than to being a relatively-uncritical member of the movement. You probably shouldn't go all the way there (better to stop in the middle, somewhere around 'good-faith critic' or 'EA adjacent' or 'EA but quite suspicious of the movement's leadership'), but updating in that direction is the rational Bayesian thing to do.
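To make the direction and size of such an update concrete, here is a minimal numeric sketch; the prior and likelihoods below are made up purely for illustration, not estimates of anything:

```python
# Minimal sketch of the Bayesian update described above.
# All numbers are hypothetical, chosen only to illustrate the mechanics.

def posterior(prior: float, p_e_given_x: float, p_e_given_not_x: float) -> float:
    """Return P(X | E) from P(X), P(E | X), and P(E | not-X) via Bayes' rule."""
    p_e = p_e_given_x * prior + p_e_given_not_x * (1 - prior)  # law of total probability
    return p_e_given_x * prior / p_e

# Suppose you gave the critics' claim X a prior of 10%, and you think the
# events of the last week were twice as likely under X as under not-X.
print(posterior(prior=0.10, p_e_given_x=0.20, p_e_given_not_x=0.10))  # ~0.18
```

The update is real but bounded: a 2:1 likelihood ratio moves a 10% prior to roughly 18%, which matches the advice above to move toward the critics without going all the way there.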

To be sure, there's always a worry that the critics have fudged or falsified their predictions, saying something vaguely critical in the past which has since been sharpened into 'Several months ago, I predicted that this exact thing would happen!' This is the 'predicting the next recession' effect, and we should be vigilant about it. But while this is definitely happening in a lot of cases, in some of the most high-profile ones I don't think it applies: I think there were relatively concrete predictions made that a crisis of power and leadership of pretty much this kind was enabled by EA structures, and these predictions seem to have been closer to the mark than anything EA-at-large thought might happen.

I think there is a further sense in which many EAs seem to feel that their error was less one of prediction than of creativity: it's less that they made the wrong call on a variety of questions and more that they simply didn't ask those questions. This is obviously not true of all EAs, but it is definitely true of some. In cases like this, listening more closely to critics - even bad-faith ones! - can open your mind up to a variety of different positions and reasoning styles that previously were not even present in your mind. This is not always inherently good, of course, but if an EA has reason to think that they have made a failure of creativity then it seems like a very positive way to go.

For more context about my worries: I think that it is possible that OP might be including me, and some things I have tweeted, in point three. I have quite a small follower count and nothing I wrote 'blew up' or anything, so it's definitely very unlikely; but I did tweet out several things pretty heavily critical of the movement in recent days which very strongly pattern-match the description given above, including pointing out that prior criticisms predicted these events pretty well, and having relatively well-known EAs reaching out to me about what I had written. Certainly, I 'felt seen' (as it were) while reading this post.

I don't think I am a 'nefarious actor', or have a history of 'hating EA', but I worry that in some segments of EA (not the whole of EA - some people have gone full self-flagellation, but in some segments) these kinds of terms are getting slung around far too liberally as part of a more general circling-the-wagons trend. And I worry that posts like this one legitimise slinging these terms around in this manner, by encouraging the thought that EA critics who are engaging in some (sometimes fully-justified) 'told you so' are just bad actors trying to destroy their tribe. EA needs to be more, not less, open to listening to critics - even bad-faith critics - after a disaster like this one. This is good Bayesianism, but it's also just proper humility.

I think it is very difficult to litigate point three further without putting certain people on trial and getting into their personal details, which I am not interested in doing and don't think is a good use of the Forum. For what it's worth, I haven't seen your Twitter or anything from you.

I should have emphasized more that there are consistent critics of EA who I don't think are acting in bad faith at all. Stuart Buck seems to have been right early on a number of things, for example. 

Your Bayesian argument may apply in some cases but it fails in others (for instance, when X = EAs are eugenicists).

Just apply Bayes' rule: if P(events of the last week | X) > P(events of the last week | not-X), then you should increase your credence in X upon observing the events of the last week.

I also emphasize that there are a few people who I have strong reason to believe are engaged in a "deliberate effort to sow division within the EA movement," and this was the focus of my comment, publicly evidenced (NB: this is a very small part of my overall evidence) by them "taking glee in this disaster or mocking the appearances and personal writing of FTX/Alameda employees." I do not think a productive conversation is possible in these cases.

I'm not sure what you mean by saying that my Bayesian argument fails in some cases? 'P(X|E)>P(X) if and only if P(E|X)>P(E|not-X)' is a theorem in the probability calculus (assuming no probabilities with value zero or one). If the likelihood ratio of X given E is greater than one, then upon observing E you should rationally update towards X.

If you just mean that there are some values of X which do not explain the events of the last week, such that P(events of the last week | X) ≤ P(events of the last week | not-X), this is true but trivial. Your post was about cases where 'this catastrophe is in line with X thing [critics] already believed'. In these cases, the rational thing to do is to update toward critics.
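For completeness, here is a short derivation of that biconditional, written out under the stated assumption that no probability has value zero or one:

```latex
% P(X|E) > P(X)  iff  P(E|X) > P(E|not-X), assuming no zero-one probabilities.
\begin{align*}
  P(E) &= P(E \mid X)\,P(X) + P(E \mid \neg X)\,P(\neg X)
    && \text{(law of total probability)} \\
  &\Rightarrow\; P(E \mid X) > P(E \mid \neg X) \iff P(E \mid X) > P(E)
    && \text{($P(E)$ is a strict mixture of the two)} \\
  P(X \mid E) &= P(X)\cdot\frac{P(E \mid X)}{P(E)}
    && \text{(Bayes' rule)} \\
  &\Rightarrow\; P(X \mid E) > P(X) \iff P(E \mid X) > P(E).
\end{align*}
```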

Strongly agree with all of these points.

On point 2: The EA movement urgently needs more earners-to-give, especially now. One lesson that I think is correct, however, is that we should be wary of making any one billionaire donor the face of the EA movement. The downside risk—a loss of credibility for the whole movement due to unknown information about the billionaire donor—is generally too high.

(Upvoted)

Events are not evidence for the truth of philosophical positions.

Are you sure? How about this position from Richard Chappell's post?

(3) Self-effacing utilitarian: Ex-utilitarian, gave up the view on the grounds that doing so would be for the best.

Psychological effects of espousing a moral theory are empirical in nature. Observations about the world could cause a consequentialist to switch to some other theory on consequentialist grounds, no?

Not sure there's a clean division between moral philosophy and moral psychology.

I agree hastily jumping to a different theory while experiencing distress seems bad, but it seems reasonable to update a bit on the margin.

I agree investigation should be thoughtful, but now seems as good an opportunity as any to discuss. You say we should wait until the facts are properly established, but I think discussion now can help establish facts, the same way a detective would want to visit the scene of a crime soon after it was committed.

My understanding is that the self-effacing utilitarian is not strictly an 'ex-utilitarian', in that they are still using the same types of rightness criteria as a utilitarian (at least with respect to world-states). Although they may try to deceive themselves into actually believing another theory, since this would better achieve their rightness criterion, that is not the same as abandoning utilitarianism on the basis that it was somehow refuted by certain events. In other words, as you say, they're switching theories "on consequentialist grounds". Hence they're still a consequentialist in the sense that is philosophically important here.

Yitz

+1 from me.

 I was talking about the whole situation with my parents, and they mentioned that their local synagogue experienced a very similar catastrophe, with the community's largest funder turning out to be a con-man. Everybody impacted had a lot of soul-searching to do, but ultimately in retrospect, there was really nothing they could or should have done differently—it was a black-swan event that hasn't repeated in the quarter of a century or so since it happened, and there were no obvious red flags until it was too late. Yes, we can always find details to agonize over, but ultimately, I doubt it will be very productive to change our whole modus operandi to prevent this particular black swan event from repeating (with a few notable exceptions).

I think many of these lessons have more merit to them than you assume. To speak specifically about the 'earning to give' one: yes, EA has pointed out that you should not do harm with your job to give the money away. However, I also think it is a bit psychologically naïve to think that what happened with FTX is the last time that the advice to earn to give will lead to people doing harm to make money.

Trade-offs between ethical principles and monetary gain are not rare, and once we have established making as much money as possible (to give it away) as a goal in itself and something that confers status, it can be hard to make these trade-offs the way you are supposed to. It is not easy to accept a setback in wealth, power, and (moral) status, so lying to yourself or others to convince yourself that what you are doing is ethical becomes easy. It is also generally risky for individuals to become incredibly rich or powerful, especially if that depends on a misguided belief that some group membership (EA) makes you inherently ethical and therefore more trustworthy, since power tends to corrupt.

At the minimum, I would like EA to talk more about how to jointly maximize the ethics of how you earn and spend your money, making sure that we encourage people to gain their wealth in ways that add value to the world.

I disagree that events can't be evidence for or against philosophical positions. If empirical claims about human behaviour or the real-world operation of ethical principles are relevant to the plausibility of competing ethical theories, then I think events can provide evidential value for philosophical positions. Of course that raises a much broader set of issues and doesn't really detract from the main point of this post, but I thought I would push back on that specific aspect.

Brilliant post. Thanks for writing it. I just want to add to what you said about ethics. It seems that evaluating whether an action / event is good or bad itself presupposes an ethical theory.[1] Hence I think a lot of the claims that are being made can be described as either (a) this event shows vividly how strongly utilitarianism can conflict with 'common-sense morality' (or our intuitions)[2] or (b) trying to follow[3] utilitarianism tends to lead to outcomes which are bad by the lights of utilitarianism (or perhaps some other theory). The first of these seems not particularly interesting to me, as suggested in your post, and the second is a separate point entirely - but is nonetheless often being presented as a criticism of utilitarianism.

  1. ^

    Someone else made this point before me in another post but I can’t find their comment.

  2. ^

    But note that this applies mostly to naive act utilitarianism.

  3. ^

    By which I mean 'act in accordance' with, but it's worth noting that this is pretty underdetermined. For instance, doing EV calculations is not the only way to act in accordance with utilitarianism.

I have to disagree with point 3. I think, due to imbalances in incentive structures, you sometimes have to take the "haters'" criticisms into account.

Take cryptocurrency and NFTs, for example. Proponents of NFTs, who were financially invested, had a huge financial incentive to talk up their usefulness. They can make gargantuan sums of money from hype, and hence can employ very smart people full-time, fund their own media ecosystem, etc. You can find any number of highly researched and polished pieces talking up the technology.

But what if you (correctly, imo) thought that NFTs were a useless bubble? There were no big venture capitalists throwing out money for criticising NFTs, and shorting them was not really a viable financial path for a variety of reasons. Debunking NFT proponents is a lot of work, which doesn't make a lot of sense to do if you have a day job.

But there is one incentive to critique NFTs that is powerful enough to motivate that kind of work and effort: hatred of NFTs. This comes from someone who's been scammed, knows someone who's been scammed, or just has pure "someone is wrong on the internet" energy. So of course they come off as "FUD" and haters with a dose of schadenfreude. The non-haters didn't bother writing lengthy critiques; they just shrugged, said it looked kinda dumb, and went on with their lives.

I hope the analogies to FTX and EA are clear here. EA is highly obscure, with a similar financial incentive imbalance in place. If you want people to put actual effort into critiquing it, you either need to pay out more to skeptics (I fully support the critique competitions) or put up with some haters who may still have legitimate points.
