Edit: Eli has a great comment on this which I suggest everyone read. He corrects me on a few things, and gives his far more informed takes.
I'm slightly scared that EA will overcorrect to the FTX situation in an irrelevant direction, in a way I think is net harmful, and a major reason for this fear is seeing lots of people espousing conclusions about solutions to problems without us actually knowing what the problems are yet.
Some examples of this I've seen recently on the forum follow.
It is uncertain whether SBF intentionally committed fraud, or just made a mistake, but people seem to be reacting as if the takeaway from this is that fraud is bad.
These articles are mostly saying things of the form 'if FTX engaged in fraud, then EA needs to make sure people don't do more fraud in the service of utilitarianism.' From a worrying-about-group-think perspective, this is only a little less concerning than directly saying 'FTX engaged in fraud, so EA should make sure people don't do more fraud'.
Even though these articles aren't literally saying that FTX engaged in fraud in the service of utilitarianism, I worry these articles will shift the narrative EA tells itself towards up-weighting hypotheses which say FTX engaged in fraud in the service of utilitarianism, especially in worlds where it turned out that FTX did commit fraud, but it was motivated by pride, or other selfish desires.
Some have claimed FTX's downfall happened as a result of everyone sleeping with each other, and this interpretation is not obviously unpopular on the forum. This seems quite unlikely compared to alternative explanations, and the post Women and Effective Altruism has a tone and content I find toxic to community epistemics, and which I anticipate would not have flown on the forum a week ago.
I worry the reason we see this post now is that EA is confused, wants to do something, and is really searching for anything to blame for the FTX situation. If you are confused about what your problems are, you should not go searching for solutions! You should ask questions, make predictions, and try to understand what's going on. Then you should ask how you could have prevented or mitigated the bad events, and ask whether those prevention and mitigation efforts would be worth their costs.
I think this problem is important to address, and am uncertain about whether this post is good or bad on net. The point is that I'm seeing a bunch of heated emotions on the forum right now, this is not like the forum I'm used to, and lots of these heated discussions seem to be directed towards pushing new EA policy proposals rather than trying to figure out what's going on.
We could immediately launch a costly investigation to see who had knowledge of fraud that occurred before we actually know if fraud occurred or why. In worlds where we're wrong about whether or why fraud occurred, this would be very costly. My suggestion: wait for information to costlessly come out, discuss what happened when not in the midst of the fog and emotions of current events, and then decide whether we should launch this costly investigation.
Adjacently, some are arguing EA could have vetted FTX and Sam better, and averted this situation. This reeks of hindsight bias! Probably EA could not have done better than all the investors who originally vetted FTX before giving them a buttload of money!
Maybe EA should investigate funders more, but arguments for this are orthogonal to recent events, unless CEA believes their comparative advantage in the wider market is high-quality vetting of corporations. If so, they could stand to make quite a bit of money selling this service, and should possibly form a spinoff org.
EA is not thinking straight right now. Everyone should stop posting their ill-informed conclusions about the takeaways from recent events on the forum, and instead discuss the object-level events more, in the hope that the community can actually update on information once it comes in, rather than getting stuck in an incorrect and unhelpful narrative about what happened.
In particular, it ties together observations and policy proposals so that in order to disagree with the policy proposals, you have to trip over your words in order to also not call the poster a liar.
I thought I would like this post based on the title (I also recently decided to hold off for more information before seriously proposing solutions), but I disagree with much of the content.
A few examples:
I think we can at this point safely say with >95% confidence that SBF basically committed fraud, even if not technically in the legal sense (edit: but it also seems likely to be fraud in the legal sense), and it's natural to start thinking about the implications of this, and in particular to be very clear about our attitude toward the situation if fraud indeed occurred, as looks very likely. Waiting too long has serious costs.
If we were to wait until we were close to fully knowing "whether or why fraud occurred", this might take years as the court case plays out. I think we should get on with it reasonably quickly given that we are pretty confident some really bad stuff went down. Delaying the investigation seems generally more costly to me than the costs of conducting it, e.g. people's memories decay over time and people have more time to get alternative stories straight.
This seems wrong, e.g. EA leadership had more personal context on Sam than investors. See e.g. Oli here with a personal account and my more abstract argument here.
I don't say this often, but thanks for your comment!
Interesting! You have changed my mind on this. You clearly know more about this than I do. Since you agree with the title, I'd want you to write a better post arguing for the same overall point, hopefully with more context than mine.
I think the fact that we have such different pictures may be an effect of what I'm seeing on the forum. So many top-level posts not talking about object-level data made it difficult for me to know what object-level data we had.
My main thought is that I don't know why he committed fraud. Was it actually to utility maximize, or because he was just seeking status, or got too prideful or what?
I'd agree with this if I thought EA right now had a cool head. Maybe I should have said we should wait until EA has a cooler head before launching investigations.
Appreciate the quick, cooperative response.
Not feeling up to it right now and not sure it needs a whole top-level post. My current take is something like (very roughly/quickly written):
I think either way most of the articles you point to do more good than harm. Being more silent on the matter would be worse.
I'd hope that the investigation would be conducted mostly by an independent, reputable entity even if commissioned by EA organizations. Also, "EA" isn't a fully homogeneous entity and I'd hope that the people commissioning the investigation might be more cool-headed than the average Forum poster.
Just for some perspective here, the DOJ could be pursuing SBF for wire fraud, which comes with a maximum sentence of twenty years. FTX's bankruptcy couldn't be construed as a mistake past the first day or so of last week, and this is still very generous. I find that this forum has consistently underestimated how grievous the actions taken by SBF, Alameda and FTX have been compared to the individuals I know who work in finance or crypto.
As far as I was concerned, by 24-36 hours (D+1 to D+1.5) after the liquidity crisis turned into apparent fraud, I had already raised the hypothesis of outright stealing and fraud to serious attention, and by 48-72 hours after the crisis first broke (D+2 to D+3) I assumed fraud and stealing had happened there. I was also right to distrust crypto several months before, because its very structure lends itself to fraud, with no other legal use.
That's a max of 20 years per count, by the way. If there was massive fraud, hard to accomplish that in only a single count of wire fraud...
Which part of my comment did you find as underestimating how grievous SBF/Alameda/FTX's actions were? (I'm genuinely unsure)
Sorry for the confusion, I was adding on to your comment. I agree with you obviously. It was more a statement on the forum over the past five-six days.
Just because you feel uncertain doesn't mean the appropriate epistemic state for everyone to be in is to be uncertain too. You have to be pretty confident that you've researched all the available information if you're going to tell others that they're prematurely coming to conclusions. Sometimes it really is possible to tell with high confidence whether fraud was committed or not (in some way or another), even if the details aren't out. I think this is one such instance.
For instance, were you aware of this WSJ article linked in this tweet: https://twitter.com/AutismCapital/status/1591454455203262467
(I wouldn't immediately trust anything I read on the topic since things are evolving quickly, but there's also a complete lack of exonerating information or theories about how it wasn't fraud. Also some insiders have appeared on streams by now and I haven't seen anyone defend FTX leadership so far.)
I agree that we should be cautious about over-correcting too quickly based on incomplete, biased, and dubious information.
There are huge numbers of unverified rumors flying around, especially on Twitter and social media. The exact nature and scope of the FTX/Alameda problems remain unclear. It's hard to know what the moral and organizational lessons are when many of the key facts of the matter aren't yet publicly known.
There's a temptation in any sudden, surprising crisis to try to react on the same time scale, with sudden, dramatic action. In response to any catastrophe, there's a bias to 'do something, anything, take action now!'
Partly this is an evolutionary mismatch grounded in human psychology, because prehistoric crises typically involved direct, immediate, physical threats (predators, warfare, natural disasters, plagues, starvation) that required direct, immediate, physical action.
Partly it's a public relations issue, where organizations are expected to respond within a few 24-hour news cycles of the crisis arising.
But I think, given the low proportion of information we have to information we'd need to have to make informed course-corrections, we should have the epistemic and moral humility to go a little slower in trying to figure out what lessons we should learn from all this.
This is only very loosely related to your post, but the current situation somewhat reminds me of Scott's Cyclic Theory Of Subcultures. In that model, the sudden loss of resources available to EA might push the movement from the Growth phase into the Involution phase, so there are suddenly new incentives to criticize one another and vie for the remaining resources and status:
I think we should just stop overreacting, period. This guy's money doesn't mean he is EA. No one person is EA.
If we spent as much time figuring out how to better be more effective as we do on self-loathing and self-over-analysis, we'd be further along.
IMHO. Of course, I could be wrong.
I've seen this mentioned quite a few times, most prominently by Eliezer Yudkowsky. I take the point that there were sophisticated investors, such as Sequoia and BlackRock, who researched the company and could not detect FTX's possible self-dealing with Alameda. I think it's fair to say that EA probably could not have detected that this FTX situation would blow up in exactly the way it did, even with more due diligence.
I also think that it's rational to apply more due diligence to where you are investing your money than to where you are receiving it from - and my understanding is that EA (on the whole) was not actually investing in FTX.
However, what I think should not be lost sight of is that FTX funding made up a very significant amount of EA's funding as a whole: in 2021, it was estimated FTX team's funding made up $16.5 billion of $46.1 billion (roughly 36%). (Moskovitz's funding was even larger - roughly 49%.)
This is incredibly undiversified, especially given how volatile SBF's and Moskovitz's wealth is. I am sure that this is far more undiversified than any large investor who actually put money in FTX. EA therefore stands to lose a lot more if the funding from FTX (or Moskovitz) fell away. I don't want this point to get lost in the debate.
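As a back-of-the-envelope check on the concentration claim, here is a minimal Python sketch using only the 2021 estimates cited above (FTX team ~$16.5B of ~$46.1B total, Moskovitz ~49%). Lumping "everyone else" into a single block is a simplification that overstates concentration, so treat the index as an upper bound:

```python
# Funding-concentration sketch from the 2021 estimates cited above.
# All figures are rough, illustrative billions of USD.

total = 46.1          # estimated total committed EA-aligned funding
ftx = 16.5            # estimated FTX team funding

ftx_share = ftx / total          # ~36%, matching the figure in the text
moskovitz_share = 0.49           # cited share for Moskovitz
other_share = 1 - ftx_share - moskovitz_share  # ~15% for everyone else

print(f"FTX share:       {ftx_share:.0%}")
print(f"Moskovitz share: {moskovitz_share:.0%}")
print(f"Everyone else:   {other_share:.0%}")

# Herfindahl-style index: sum of squared shares. 1.0 = single funder;
# values above ~0.25 are usually considered highly concentrated.
hhi = ftx_share**2 + moskovitz_share**2 + other_share**2
print(f"Concentration index (upper bound): {hhi:.2f}")
```

Even with the crude "everyone else" bucket, two funders accounting for roughly 85% of the total is far outside what a diversified portfolio would tolerate, which is the core of the point above.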
Now, I'm not sure that the answer was that EA should have vetted its funding more. When people are offering you "free" money, I don't think there is too much obligation to vet it (above any legal obligations that might exist). I think the answer is probably that EA should have thought about its risk exposure more, given how undiversified and volatile its funding is. In particular:
For example, I did see a statement somewhere suggesting that EAs personally should diversify away from crypto, given how exposed EA is to it generally, but that did not seem to be a very prominent, widely-advertised piece of advice.
I know also that there is some general career advice for people to build up a decent financial runway for themselves. Perhaps there should have been greater emphasis to the community that if they rely on funding (grants, salaries) that is undiversified, they should factor that risk in and weigh it with their personal risk appetite.
I leave aside the question of whether SBF was using EA to "launder" his reputation and therefore arguably the money was not entirely "free". I don't have an informed view on that.
EA should absolutely be vetting its funding more. You already gave three reasons: risk of the funding drying up (diversification being a possible solution), legal obligations (as evidenced by the possibility of clawbacks), and the reputational effects of EA laundering its funders' reputations. There are also significant reputational effects going the other way, as evidenced by the costs of SBF's fall to EA's reputation.
I guess it depends on what we mean by "vetting funding". EA should definitely do more to understand and manage the nature and extent of the risks it is exposed to - i.e. general risk management. I don't think we need to wait for much more information to make such an assessment - the way this has unfolded with so many grant recipients, etc seeming to have been caught completely unprepared is enough evidence that the EA community's risk management and communication were lacking.
But some people also seem to suggest "vetting funding" means EA should be trying to find fraud or other malfeasance in its donors. (That is suggested by the OP's post and is what I meant by "vetting funding" in my previous comment). I'm less sure about this claim. It's not clear how much due diligence is required in these cases vs how much due diligence EA actually did. So this is something that, as the OP suggests, would benefit from more information before coming to a conclusion.
Edit: Put another way, I think there are two questions: (1) should EA improve its general risk management, given how concentrated and volatile its funding is, and (2) could or should EA have detected fraud or malfeasance at FTX through more vetting of its donors?
I've seen some discussions conflating the two. Even if the answer to the second is "no" or "unclear", EA's risk management practices could still be improved. That's all I'm saying.
Risk management as a field (or as a component of internal auditing) has ground rules for determining the key areas of how a given organization should function and where it may fail. At its most basic, risk managers assess inflows and outflows of cash and the policies behind those functions. They also examine how the management process is being performed and look for potential conflicts of interest.
Setting up an internal audit function that regularly assesses the risk landscape of any EA org, as soon as possible, can help avert future fraud. As I have mentioned in my other posts, fraud is very hard to detect, especially when collusion is in play - yet I again strongly point out that this is the best practice in traditional systems for conducting transactions when large sums of money are involved. Not following best practices will always leave gaps that fraudulent actors may exploit.
All the best,