A brief and belated update: When I resigned from the board of EV US last year, I was planning on writing about that decision. But I ultimately decided against doing that for a variety of reasons, including that it was very costly to me, and I believed it wouldn’t make a difference. However, I want to make it clear that I resigned last year due to significant disagreements with the board of EV and EA leadership, particularly concerning their actions leading up to and after the FTX crisis.

While I certainly support the boards’ decision to pay back the FTX estate, spin out the projects as separate organizations, and essentially disband EV, I continue to be worried that the EA community is not on track to learn the relevant lessons from its relationship with FTX. Two things that I think would help (though I am not planning to work on either myself):

  1. EA needs an investigation, done externally and shared publicly, on mistakes made in the EA community’s relationship with FTX.[1] I believe there were extensive and significant mistakes made which have not been addressed. (In particular, some EA leaders had warning signs about SBF that they ignored, and instead promoted him as a good person, tied the EA community to FTX, and then were uninterested in reforms or investigations after the fraud was revealed). These mistakes make me very concerned about the amount of harm EA might do in the future.
  2. EA also needs significantly more clarity on who, if anyone, “leads” EA and what they are responsible for. I agree with many of Will MacAskill’s points here and think confusion on this issue has indirectly resulted in a lot of harm.

CEA is a logical place to house both of these projects, though I also think leaders of other EA-affiliated orgs, attendees of the Meta Coordination Forum, and some people at Open Philanthropy would also be well-suited to do this work. I continue to be available to discuss my thoughts on why I left the board, or on EA’s response to FTX, individually as needed.

  1. ^

     Although EV conducted a narrow investigation, the scope was far more limited than what I’m describing here, primarily pertaining to EV’s legal exposure, and most results were not shared publicly.

Comments (112)
Some comments are truncated due to high volume.

Here's a post with me asking the question flat out: Why hasn't EA done an SBF investigation and postmortem?

This seems like an incredibly obvious first step from my perspective, not something I'd have expected a community like EA to be dragging its heels on years after the fact.

We're happy to sink hundreds of hours into fun "criticism of EA" contests, but when the biggest disaster in EA's history manifests, we aren't willing to pay even one investigator to review what happened so we can get the facts straight, begin to rebuild trust, and see if there's anything we should change in response? I feel like I'm in crazytown; what the heck is going on?

Update Apr. 4: I’ve now spoken with another EA who was involved in EA’s response to the FTX implosion. To summarize what they said to me:

  • They thought that the lack of an investigation was primarily due to general time constraints and various exogenous logistical difficulties. At the time, they thought that setting up a team who could overcome the various difficulties would be extremely hard for mundane reasons such as:
    • thorough, even-handed investigations into sensitive topics are very hard to do (especially if you start out low-context);
    • this is especially true when they are vaguely scoped and potentially involve a large number of people across a number of different organizations;
    • “professional investigators” (like law firms) aren’t very well-suited to do the kind of investigation that would actually be helpful;
    • legal counsels were generally strongly advising people against talking about FTX stuff in general;
    • various old confidentiality agreements would make it difficult to discuss what happened in some relevant instances (e.g. meetings that had Chatham House Rules);
    • it would be hard to guarantee confidentiality in the investigation when info might be subpoenaed or something like that;
    • a... (read more)
JWS · 25d

(I'm going to wrap up a few disparate threads together here, and this will probably be my last comment on this post, ~modulo a reply for clarification's sake. Happy to discuss further with you Rob, or anyone, via DMs/Forum Dialogue/whatever)

(to Rob & Oli - there is a lot of inferential distance between us and that's ok, the world is wide enough to handle that! I don't mean to come off as rude/hostile and apologies if I did get the tone wrong)

Thanks for the update Rob, I appreciate you tying this information together in a single place. And yet... I can't help but still feel some of the frustrations of my original comment. Why does this person not want to share their thoughts publicly? Is it because they don't like the EA Forum? Because they're scared of retaliation? It feels like this would be useful and important information for the community to know.

I'm also not sure what to make of Habryka's response here and elsewhere. I think there is a lot of inferential distance between myself and Oli, but it does seem to me to come off as a "social experiment in radical honesty and perfect transparency", which is a vibe I often get from the Lightcone-adjacent world. And like, with all due respect... (read more)

the choice is like "should I pour in a ton of energy to try to set up this investigation that will struggle to get off the ground to learn kinda boring stuff I already know?"

I'm not the person quoted, but I agree with this part, and some of the reasons for why I expect the results of an investigation like this to be boring aren’t based on any private or confidential information, so perhaps worth sharing.

One key reason: I think rumor mills are not very effective fraud detection mechanisms.

(This seems almost definitionally true: if something was clear evidence of fraud then it would just be described as "clear evidence of fraud"; describing something as a "rumor" seems to almost definitionally imply a substantial probability that the rumor is false or at least unclear or hard to update on.[1])

E.g. If I imagine a bank whose primary fraud detection mechanism was "hope the executives hear rumors of malfeasance," I would not feel very satisfied with their risk management. If fraud did occur, I wouldn’t expect that their primary process improvement to be "see if the executives could have updated from rumors better." I am therefore somewhat confused by how much interest ther... (read more)

I feel like "people who worked with Sam told people about specific instances of quite serious dishonesty they had personally observed" is being classed as "rumour" here, which, whilst not strictly inaccurate, is misleading, because it is a very atypical case relative to the image the word "rumour" conjures. Also, even if people only did receive stuff that was more centrally rumour, I feel like we still want to know if anyone in leadership argued "oh, yeah, Sam might well be dodgy, but the expected value of publicly backing him is high because of the upside". That's a signal someone is a bad leader in my view, which is useful knowledge going forward. (I'm not saying it is instant proof they should never hold leadership positions ever again: I think quite a lot of people might have said something like that in similar circumstances. But it is a bad sign.)

I feel like "people who worked with Sam told people about specific instances of quite serious dishonesty they had personally observed" is being classed as "rumour" here, which whilst not strictly inaccurate, is misleading, because it is a very atypical case relative to the image the word "rumour" conjures.

I agree with this.

[...] I feel like we still want to know if anyone in leadership argued "oh, yeah, Sam might well be dodgy, but the expected value of publicly backing him is high because of the upside". That's a signal someone is a bad leader in my view, which is useful knowledge going forward.

I don't really agree with this. Everyone has some probability of turning out to be dodgy; it matters exactly how strong the available evidence was. "This EA leader writes people off immediately when they have even a tiny probability of being untrustworthy" would be a negative update about the person's decision-making too!

Rebecca · 19d
I took that second quote to mean ‘even if Sam is dodgy it’s still good to publicly back him’
David Mathers · 19d
I meant something in between "is" and "has a non-zero chance of being", like assigning significant probability to it (obviously I didn't have an exact number in mind), and not just for base rate reasons about believing all rich people to be dodgy. 
[comment deleted] · 20d

I'm not the person quoted, but I agree with this part, and some of the reasons for why I expect the results of an investigation like this to be boring aren’t based on any private or confidential information, so perhaps worth sharing.

One key reason: I think rumor mills are not very effective fraud detection mechanisms.

Huh, the same reason you cite for why you are not interested in doing an investigation is one of the key reasons why I want an investigation. 

It seems to me that current EA leadership is basically planning to continue a "our primary defense against bad actors is the rumor mill" strategy. Having an analysis of how that strategy did not work, and in some sense can't work for things like this seems like it's one of the things that would have the most potential to give rise to something better here.

Interesting! I'm glad I wrote this then.

Do you think "[doing an investigation is] one of the things that would have the most potential to give rise to something better here" because you believe it is very hard to find alternatives to the rumor mill strategy? Or because you expect alternatives to not be adopted, even if found?

Habryka · 19d
My current sense is that there is no motivation to find an alternative because people mistakenly think it works fine enough, and so there is no need to try to find something better (and also, in the absence of an investigation and clear arguments about why the rumor mill doesn't work, people probably think they can't really be blamed if the strategy fails again).

Suppose I want to devote some amount of resources towards finding alternatives to a rumor mill. I had been interpreting you as claiming that, instead of directly investing these resources towards finding an alternative, I should invest these resources towards an investigation (which will then in turn motivate other people to find alternatives).

Is that correct? If so, I'm interested in understanding why – usually if you want to do a thing, the best approach is to just do that thing.

It seems to me that a case study of how exactly FTX occurred, and where things failed, would be among one of the best things to use to figure out what thing to do instead. 

Currently the majority of people who have an interest in this are blocked by not really knowing what worked and didn't work in the FTX case, and so probably will have trouble arguing compellingly for any alternative, and also lack some of the most crucial data. My guess is you might have the relevant information from informal conversations, but most don't. 

I do think also just directly looking for an alternative seems good. I am not saying that doing an FTX investigation is literally the very best thing to do in the world, it just seems better than what I see EA leadership spending their time on instead. If you had the choice between "figure out a mechanism detecting and propagating information about future adversarial behavior" and "do an FTX investigation", I would feel pretty great about both, and honestly don't really know which one I would prefer. As far as I can tell neither of these things is seeing much effort invested into it.

Okay, that seems reasonable. But I want to repeat my claim[1] that people are not blocked by "not really knowing what worked and didn't work in the FTX case" – even if e.g. there was some type of rumor which was effective in the FTX case, I still think we shouldn't rely on that type of rumor being effective in the future, so knowing whether or not this type of rumor was effective in the FTX case is largely irrelevant.[2]

I think the blockers are more like: fraud management is a complex and niche area that very few people in EA have experience with, and getting up to speed with it is time-consuming, and also ~all of the practices are based under assumptions like "the risk manager has some amount of formal authority" which aren't true in EA.

(And to be clear: I think these are very big blockers! They just aren't resolved by doing an investigation.)

  1. ^

    Or maybe more specifically: would like people to explicitly refute my claim. If someone does think that rumor mills are a robust defense against fraud but were just implemented poorly last time, I would love to hear that!

  2. ^

    Again, under the assumption that your goal is fraud detection. Investigations may be more or less useful for other g

... (read more)

Again, under the assumption that your goal is fraud detection. 

It seems like a goal of ~"fraud detection" not further specified may be near the nadir of utility for an investigation.

  • If you go significantly narrower, then how EA managed (or didn't manage) SBF fraud seems rather important to figuring out how to deal with the risk of similar fraudulent schemes in the future.[1]
  • If you go significantly broader (cf. Oli's reference to "detecting and propagating information about future adversarial behavior"), the blockers you identify seem significantly less relevant, which may increase the expected value of an investigation.

My tentative guess is that it would be best to analyze potential courses of action in terms of their effects on the "EA immune system" at multiple points of specificity, not just close relations of a specific known pathogen (e.g., SBF-like schemes), a class of pathogens (e.g., "fraud"), or pathogens writ large (e.g., "future adversarial behavior").

  1. ^

    Given past EA involvement with crypto, and the base rate of not-too-subtle fraud in crypto, the risk of similar fraudulent schemes seems more than theoretical to me.

I have played around with the idea of some voluntary pledge for earning to give companies where they could opt into additional risk management and transparency policies (e.g. selecting some processes from Sarbanes-Oxley). My sense is that these policies do actually substantially reduce the risk of fraud (albeit at great expense), and might be worth doing.

I think that would be worth exploring. I suspect you are correct that full Sarbanes-Oxley treatment would be onerous. 

On the other hand, I don't see how a reasonably competent forensic accountant or auditor could have spent more than a few days at FTX (or at Madoff) without having a stroke. Seeing the commingled bank accounts would have set alarm bells ringing in my head, at least. (One of the core rules of legal ethics is that you do not commingle your money with that of your clients, because experience teaches that all sorts of horrible things can and often do happen.)

I certainly don't mean to imply that fraud against sophisticated investors and lenders is okay, but there is something particularly bad about straight-up conversion of client funds like at FTX/Madoff. At least where hedge funds and big banks are concerned, they ... (read more)

I guess the question I have is, if the fraud wasn't noticed by SBF's investors, who had much better access to information and incentives to find fraud, why would anyone expect the recipients of his charitable donations to notice it? If it was a failure of the EA movement not to know that FTX was fraudulent, isn't it many times more of a failure that the fraud was unnoticed by the major sophisticated investment firms that were large FTX shareholders?

Habryka · 20d

I think investing in FTX was genuinely a good idea, if you were a profit maximizer, even if you strongly suspected the fraud. As Jason says, as an investor, losing money due to fraud isn't any worse than losing money because a company fails to otherwise be profitable, so even assigning 20%-30% probability to fraud for a high-risk investment like FTX, where you are expecting >2x returns in a short number of years, will not make a huge difference to your bottom line.

In many ways you should expect being the kind of person who is willing to commit fraud to be positively associated with returns, because doing illegal and fraudulent things means that the people who run the organization take on massive risk where you are not exposed to the downside, but you are exposed to the upside. It's not worth it to literally invest in fraud, but it is worth it to invest in the kind of company where the CEO is willing to go to prison, since you don't really have any risk of going to prison yourself, but you get the upside of the legal risk they take on (think of Uber blatantly violating laws until it established a new market, which probably exposed leadership to substantial legal risk, but investors just got to reap the profits).

I wasn't suggesting we should expect this fraud to have been found in this case with the access that was available to EA sources. (Perhaps the FTXFF folks might have caught the scent if they were forensic accountants -- but they weren't. And I'm not at all confident on that in any event.) I'm suggesting that, in response to this scandal, EA organizations could insist on certain third-party assurances in the future before taking significant amounts of money from certain sources. 

Why the big money was willing to fork over nine figures each to FTX without those assurances is unclear to me. But one observation: as far as a hedge fund or lender is concerned, a loss due to fraud is no worse than a loss due to the invested-in firm being outcompeted, making bad business decisions, experiencing a general crypto collapse, getting shut down for regulatory issues, or any number of scenarios that were probably more likely ex ante than a massive conversion scheme. In fact, such a scheme might even be less bad to the extent that the firm thought it might get more money back in a fraud loss than from some ordinary-business failure modes. Given my understanding that these deals often move ve... (read more)

Rebecca · 20d
Re your footnote 4, CE/AIM are starting an earning-to-give incubation program, so that is likely to change pretty soon
Ben_West · 20d
Oh good point! That does seem to increase the urgency of this. I'd be interested to hear if CE/AIM had any thoughts on the subject.

Update Apr. 15: I talked to a CEA employee and got some more context on why CEA hasn't done an SBF investigation and postmortem. In addition to the 'this might be really difficult and it might not be very useful' concern, they mentioned that the Charity Commission investigation into EV UK is still ongoing a year and a half later. (Google suggests that statutory inquiries by the Charity Commission take an average of 1.2 years to complete, so the super long wait here is sadly normal.)

Although the Commission has said "there is no indication of wrongdoing by the trustees at this time", and the risk of anything crazy happening is lower now than it was a year and a half ago, I gather that it's still at least possible that the Commission could take some drastic action like "we think EV did bad stuff, so we're going to take over the legal entity that includes the UK components of CEA, 80K, GWWC, GovAI, etc.", which may make it harder for CEA to usefully hold the steering wheel on an SBF investigation at this stage.

Example scenario: CEA tries to write up some lessons learned from the SBF thing, with an EA audience in mind; EAs tend to have unusually high standards, and a CEA... (read more)

The pendency of the CC statutory inquiry would explain hesitancy on the part of EVF UK or its projects to conduct or cooperate with an "EA" inquiry. A third-party inquiry is unlikely to be protected by any sort of privilege, and the CC may have means to require or persuade EVF UK to turn over anything it produced in connection with a third-party "EA" inquiry. However, it doesn't seem that this should be an impediment to proceeding with other parts of an "EA inquiry," especially to the extent this would be done outside the UK.

However, in the abstract -- if any charity's rationale for not being at least moderately open and transparent with relevant constituencies and the public is "we are afraid the CC will shut us down," that is a charity most people would run away from fast, and for good reason. If the choice is between having a less-than "soul-searching postmortem" or none at all, I'll take the former. Also, I strongly suspect everything EVF has said about the whole FTX situation has been vetted by lawyers, so the idea that someone is going to write an "official" postmortem without legal vetting is doubtful. Finally, I worry the can is going to continue being kicked down the road until EVF is far into the process of being dismantled, at which time the rationale may evolve into "we're disbanding anyway, what's the point?"

Michael_PJ · 11d
I do think a subtext of the reported discussion above is that the CC is not considered to be a necessarily trustworthy or fair arbiter here. "If we do this investigation then the CC may see things and take them the wrong way" means you don't trust the CC to take them the right way. Now, I have no idea whether that is justified in this case, but it's pretty consistent with my impression of government bureaucracies in general. So it perhaps comes down to whether you previously considered the charity or the CC more trustworthy. In this case I think I trust EVF more.

I trust EV more than the charity commission about many things, but whether EV behaved badly over SBF is definitely not one of them. One judgment here is incredibly liable to distortion through self-interest and ego preservation, and it's not the charity commission's. (That's not a prediction that the charity commission will in fact harshly criticize EV. I wouldn't be surprised either way on that.) 

When I looked at past CC actions, I didn't get the impression that they were in the habit of blowing things out of proportion. But of course I didn't have the full facts of each investigation.

One reason I don't put much stock in the CC may not "necessarily [be a] trustworthy or fair arbiter" possibility is that it has to act with reasoning transparency because it is accountable to a public process. Its actions with substance (as opposed to issuing warnings) are reviewable in the UK courts, in proceedings where the charity -- a party with the right knowledge and incentives -- can call them out on dubious findings. The CC may not fear litigation in the same sense that a private entity might, but an agency's budget/resources don't generally go up because it is sued and agencies tend not to seek to create extra work for themselves for the thrill of it.

Moreover, the rationale of non-disclosure due to CC concerns operates at the margin. "There are particular things we shouldn't disclose in public because the CC might badly misinterpret those statements" is one thing. "There is nothing else useful we can disclose, because all of those statements pose an unacceptable risk of the CC badly misinterpreting any further detail" is another.

I haven't heard any arguments against doing an investigation yet, and I could imagine folks might be nervous about speaking up here. So I'll try to break the ice by writing an imaginary dialogue between myself and someone who disagrees with me.

Obviously this argument may not be compelling compared to what an actual proponent would say, and I'd guess I'm missing at least one key consideration here, so treat this as a mere conversation-starter.


Hypothetical EA: Why isn't EV's 2023 investigation enough? You want us to investigate; well, we investigated.

Rob: That investigation was only investigating legal risk to EV. Everything I've read (and everything I've heard privately) suggests that it wasn't at all trying to answer the question of whether the EA community made any moral or prudential errors in how we handled SBF over the years. Nor was it trying to produce common-knowledge documents (either private or public) to help any subset of EA understand what happened. Nor was it trying to come up with any proposal for what we should do differently (if anything) in the future.

I take it as fairly obvious that those are all useful activities to carry out after a crisis, especially when there... (read more)

I think I agree with Hypothetical EA that we basically know the broad picture.

  • Probably nobody was actually complicit or knew there was fraud; and
  • Various people made bad judgement calls and/or didn't listen to useful rumours about Sam

I guess I'm just... satisfied with that? You say:

But there are plenty of people, both within EA and outside of it, who legitimately just want to know what happened, and will be very reassured to have a clearer picture of the basic sequence of events, which orgs did a better or worse job, which processes failed or succeeded.

...why? None of this seems that important to me? Most of it seems like a matter for the person/org in question to reflect/improve on. Why is it important for "plenty of people" to learn this stuff, given we already know the broad picture above?

I would sum up my personal position as: 

  • We got taken for a ride, so we should take the general lesson to be more cautious of charismatic people with low scruples, especially bearing large sums of money.
  • If you or your org were specifically taken for a ride you should reflect on why that happened to you and why you didn't listen to the people who did spot what was going on.
George Noether · 26d
EA is compelling insofar as it is about genuinely making the world a better place, ie we care about the actual consequences. Just because there are probably no specific people/processes to blame, doesn't mean we should be satisfied with how things are. There is now decent evidence that EA might cause considerable harm in the world, so we should be strongly motivated to figure out how to change that. Maybe EA's failures are just the cost of ambition and agency, and come along with the good it does, but I think that's both untrue and worryingly defeatist. I care about the end result of all of this, and the fact that we're okay with some serious Ls happening (and not being willing to fix the root cause of those errors) is concerning.
SiebeRozendal · 26d
Random idea: maybe we should, after this question of investigation-or-not has been discussed in more detail, organize a community-wide vote on whether there should be an investigation?
Manuel Allgaier · 21d
It's easy to vote for something you don't have to pay for. If we do anything like this, an additional fundraiser to pay for it might be appropriate.
RobBensinger · 21d
Knowing what people think is useful, especially if it's a non-anonymous poll aimed at sparking conversations, questions, etc. (One thing that might help here is to include a field for people to leave a brief explanation of their vote, if the polling software allows for it.) Anonymous polls are a bit trickier, since random people on the Internet can easily brigade such a poll. And I wouldn't want to assume that something's a good idea just because most EAs agree with it; I'd rather focus on the arguments for and against.
RobBensinger · 21d
"Just focus on the arguments" isn't a decision-making algorithm, but I think informal processes like "just talk about it and individually do what makes sense" perform better than rigid algorithms in cases like this. If we want something more formal, I tend to prefer approaches like "delegate the question to someone trustworthy who can spend a bunch of time carefully weighing the arguments" or "subsidize a prediction market to resolve the question" over "just run an opinion poll and do whatever the majority of people-who-see-the-poll vote for, without checking how informed or wise the respondents are".
JWS · 25d
People, the downvote button is not a disagree button. That's not really what it should be used for.
Guy Raveh · 25d
I disagree, and in this case I don't think the forum team should have a say in the matter. Each user has their own interpretation of the upvote/downvote button and that's ok. Personally I don't use it as "I disagree" but rather as "this comment shouldn't have been written", but there's certainly a correlation. For instance, I both disagree-voted and downvoted your comment (since I dislike the attempt to police this).
SiebeRozendal · 24d
Thanks. Maybe quite a few people don't like random ideas being shared on the Forum?

I'm against doing further investigation. I expressed why I think we have already spent too much time on this here.

I also think your comments are falling into the trap of referring to "EA" like it was an entity. Who specifically should do an investigation, and who specifically should they be investigating? (This less monolithic view of EA is also part of why I don't feel as bothered by the whole thing: so maybe some people in "senior" positions made some bad judgement calls about Sam. They should maybe feel bad. I'm not sure we should feel much collective guilt about that.)

While recognizing the benefits of the anti-"EA should" taboo, I also think it has some substantial downsides and needs to be invoked after consideration of the specific circumstances at hand.

One downside is that the taboo can impose significant additional burdens on a would-be poster, discouraging them from posting in the first place. If it takes significant time investment to write "X should be done," it is far from certain others will agree, and then additional significant time to figure out/write "and it should be done by Y," then the taboo would require someone who wants to write the former to invest in writing the latter before knowing if the former will get any traction. Being okay with the would-be poster deferring certain subquestions (like "who") means that effort can be saved if there's not enough traction on the basic merits.

Another downside is that a would-be poster may have expertise, knowledge, or resources relevant to part of a complex question. If we taboo efforts by those who can only answer some of the issues effectively, we will lose the benefit of their insight.

I also think your comments are falling into the trap of referring to "EA" like it was an entity. Who s

... (read more)

Thanks, I think this is all right. I think I didn't write what I meant. I want more specificity, but I do agree with you that it's wrong to expect full specificity (and that's what I sounded like I was asking for).

What I want is something more like "CEA should investigate the staff of EVF for whether they knew about X and Y", not "Alice should investigate Bob and Carol for whether they knew about X and Y".

I do think that specificity raises questions, and that this can be a good thing. I agree that it's not reasonable for someone to work out e.g. exactly where the funding comes from, but I do think it's reasonable for them to think in enough detail about what they are proposing to realise that a) it will need funding, b) possibly quite a lot of funding, c) this trades off against other uses of the money, so d) what does that mean for whether this is a good idea. Whereas if "EA" is going to do it, then we don't need to worry about any of those things. I'm sure someone can just do it, right?

David M · 24d
I get a ‘comment not found’ response to your link.

Not to state the obvious, but the 'criticism of EA' posts didn't pose a real risk to the power structure. It is, uhhhh, quite common for 'criticism' to be a lot more encouraged/tolerated when it isn't threatening.

I mostly agree with this, and upvoted strongly, but I don't think the scare quotes around "criticism" are warranted. Improving ideas and projects through constructive criticism is not the same thing as speaking truth to power, but it is still good and useful; it's just a different good and useful thing.

Overall I feel relatively supportive of more investigation and (especially) postmortem work. I also don't fully understand why more wasn't shared from the EV investigation[1].

However, I think it's all a bit more fraught and less obvious than you imply. The main reasons are:

  • Professional external investigations are expensive
    • Especially if they're meaningfully fact-finding and not just interviewing a few people, I think this could easily run into hundreds of thousands of dollars
    • Who is to pay for this? If a charity is doing it, I think it's important that their donors are on board with that use of funds
      • I kind of think someone should fundraise for this specifically; I'm genuinely unsure about donor appetite to support it
  • I'm somewhat worried about the "re-victimizing" effect you allude to of just sharing everything transparently
    • Worry that it would cause in-my-view-unjust headaches for people is perhaps the main inhibitory force on my just publicly sharing the pieces of what I know (there's also sometimes feeling like something isn't mine to share)
    • If there were an investigation which was going to make all its factual findings public, I'd expect this to be an inhibitory force on people choo
... (read more)

I've made a first attempt at this here: To what extent & how did EA indirectly contribute to financial crime - and what can be done now? One attempt at a review

I'd highlight that I found taking quite a structured approach helpful: breaking things down chronologically, and trying to answer specific questions like what's the mechanism, how much did this contribute, and what's a concrete recommendation?

"I’ll suggest a framework for how that broader review might be conducted: for each topic the review could:

  • Establish the details of EA involvement, 
  • Indicate a mechanism for how this could have indirectly contributed to the eventual financial crime, 
  • Provide some assessment of to what extent that mechanism may have indirectly contributed, and 
  • Provide a concrete recommendation for what the EA community could do differently to prevent any recurrence.

I’ll also provide some preliminary thoughts below, as an indication of what could be done in the full review. One way to approach a review is chronological, covering eight touchstone or 'gate' moments:

  1. Bankman-Fried starts earning to give
  2. Alameda founding 
  3. FTX founding
  4. Early political donations through Bankman-Fried’s family
  5. FT
... (read more)

We're happy to sink hundreds of hours into fun "criticism of EA" contests, but when the biggest disaster in EA's history manifests, we aren't willing to pay even one investigator to review what happened so we can get the facts straight, begin to rebuild trust, and see if there's anything we should change in response? 

I disagree with this framing.

Something that I believe I got wrong pre-FTX was base rates/priors: I had assumed that if a company was making billions of dollars, had received investment from top-tier firms, complied with a bunch of regulations, etc. then the chance of serious misconduct was fairly low.

I have now spent a fair amount of time documenting that this is not true, in data sets of YCombinator companies and major philanthropists.

It's hard to measure this, but at least anecdotally some other people (including in "EA leadership" positions) tell me that they were updated by this work and think that they similarly had incorrect priors.

I think what you are calling an "investigation" is fine/good, but it is not the only way to "get the facts straight" or "see if there's anything we should change in response".

RobBensinger (26d):
Fair! I definitely don't want to imply that there's been zero reflection or inquiry in the wake of FTX. I just think "what actually happened within EA networks, and could we have done better with different processes or norms?" is a really large and central piece of the puzzle.

To be fair, this could trigger lawsuits. I hope someone is reflecting on FTX, but I wouldn't expect anyone to be keen on discussing their own involvement with FTX publicly and in great detail.

I think that's right, although I would distinguish between corporate and personal exposure here to some extent:

  • I'm most hesitant to criticize people for not personally taking actions that could increase their personal legal exposure.  
  • I'm most willing to criticize people and organizations for not taking actions that could increase organizational legal exposure. Non-profit organizations are supposed to exist in the public interest, while individuals do not carry any above-average obligations in that way. Organizations are not moral persons whose welfare is important to me. Moreover, organizations are better able to manage risk than individuals. For purposes of the norm that s/he who benefits from an action should also generally expect to bear the attendant costs, I am more willing to ascribe the benefits of action to an organization than to an individual doing their job.[1]
  • Organizational decisions to remain silent to avoid risk to individuals pose thornier questions for me. I'd have to think more about that intuition after my lunch break, but some of it relates to reasonable expectations of privacy. For example, disclosure of the contents of an organizational e-mail account (whe
... (read more)

Who would be able to sue? Would it really be possible for FTX customers/investors to sue someone for not making public "I heard Sam lies a lot, once misplaced money at Alameda early on and didn't seem too concerned, and reneged on a verbal agreement to share ownership"? Just because someone worked at the Future Fund? Or even someone who worked at EV?

I'd note that Nick Beckstead was in active litigation with the Alameda bankruptcy estate until that was dismissed last month (Docket No. 93). I think it would be very reasonable for anyone who worked at FTXFF to be concerned about their personal legal exposure here. (I am not opining as to whether exposure exists, only that I would find it extremely hard to fault anyone who worked at FTXFF for believing that they were at risk. After all, Nick already got sued!)

It's harder to assess exposure for other groups of people. To your question, there may be a difference between mere silence in the face of knowledge/suspicion and somewhat supportive statements/actions in the face of the same knowledge. As a reference point, there was that suit against Tom Brady et al. (haven't seen a recent status update). Obviously, the promotional activity is more explicit there than anything I expect an EA-associated person did. However, the theory against Brady et al. may rely more on generic failure to investigate, while one could perhaps dig for a stronger case against certain EA-related persons on actual knowledge of suspicious facts. I can only encourage people with concerns to consult their own pers... (read more)

As an aside, this isn't really action-relevant, but insofar as being involved with the legal system is a massive punishment even when the legal system itself is very likely going to eventually conclude you've done nothing legally wrong, that seems bad? Here it also seems to be having the knock-on effect of making it harder to find out what actually happened, rather than being painful but producing useful information.

The suit against Brady also sounds like a complete waste of society's time and money to me.

Jason (1mo):
The legal system doesn't know ex ante whether you've done anything wrong, though. It's really hard to set up a system that balances out all the different ways a legal system can be imbalanced. If you don't give plaintiffs enough leeway to discover evidence for their claims, then tortfeasors will be insufficiently deterred from committing torts. If you go too far (the current U.S. system), you incentivize lawfare, harassment, and legalized extortion of some defendants. Imposing litigation costs / attorney fees on the losers often harms the little guy due to lower ability to shoulder risk and the marginal utility of money. Having parties bear their own costs / fees (generally, the U.S. system) encourages tactics that run up the bill for the other guy. And defendants are more vulnerable to that than plaintiffs as a general rule.

Maybe. Maybe people would talk but for litigation exposure. Or maybe people are using litigation exposure as a convenient excuse to cover the fact that they don't want to (and wouldn't) talk anyway. I will generally take individuals at face value given the difficulty of discerning between the two, though.

Would it be possible to set up a fund that pays people for the damages they incurred for a lawsuit where they end up being innocent? That way the EA community could make it less risky for those who haven’t spoken up, and also signal how valuable their information is to them.

Yes, although it is likely cheaper (in expected costs) and otherwise superior to make an ~unconditional offer to cover at least the legal fees for would-be speakers. The reason is that an externally legible, credible guarantee of legal-expense coverage ordinarily acts as a strong deterrent to bringing a weak lawsuit in the first place. As implied by my prior comment, one of the main tools in the plaintiff's arsenal is to bully a defendant in a weak case into settling by threatening them with liability for massive legal bills. If you take that tactic away by making the defendant ~insensitive to the size of their legal bills, you should stop a lot of suits from ever being brought in the first place. Rather, one would expect would-be plaintiffs to sue only if the expected value of their suit (e.g., the odds of winning and collecting on a judgment multiplied by judgment size) exceeds the expected costs of litigating to trial (or to a point at which the defendant decides to settle without factoring in legal bills). If you think the odds of plaintiff success at trial are low and/or that the would-be individual defendant doesn't have a ton of assets to collect from, then the most likely number o... (read more)

Elizabeth (1mo):
How big is the legal risk for a high-profile EA person who, say:

  • knew SBF was an asshole, incautious, and lived in a luxury villa, but had no knowledge of any specific fraud
  • publicly promoted him as a moral and frugal person?

Is this automatically tort-worthy, but hard to prove? Laughed out of court no matter what? Does speaking about it publicly extend the court case, so it's more expensive even if the promoter will ultimately win?

If I am betting $5 of play money on Manifold (meaning off-the-cuff gut check with no research) I would generally bet low as long as the person did not ~specifically promote FTX. If there was specific promotion of FTX, you could see claims like these which would be beyond my willingness to speculate $5 of play money at this time.

Here are some off-the-cuff questions I might want to ask (again, no research) if I were thinking about a specific case:

  1. Could anyone potentially show that they actually and reasonably relied on the statements that were made to transact business with FTX? 
  2. How relevant were the statements to a reasonable person who might be considering transacting business with FTX? For example, one might think "Joe told me SBF was frugal despite knowing that was a quarter-truth at best, I wouldn't have opened an FTX account had he not said that, and it was reasonable for me to rely on SBF's frugality to decide whether to open an account" sounds like a stretch. On the other hand, reliance on "Jane had very good reason to believe SBF had done shady and illegal stuff, yet forcefully presented him as a trustworthy paragon of moral virtue on her podcast" starts feeling a littl
... (read more)
Elizabeth (18d):
(Understanding you are a guy betting $5 on Manifold.) Re: #3: does this get blurred if the company made an explicit marketing push about what a great guy their CEO was? I imagine that still wouldn't affect statements on him as a role model[1], but might matter if they said many positive statements about him on a platform aimed at the general public.

  1. ^

     legally
Jason (18d):
Not a crypto-focused platform (e.g., Joe's Crypto Podcast)? No particular reason to know or believe that the company had used / was going to use something Person said as part of their marketing campaign? If both answers are negative, it doesn't affect my $5 Manifold bet.
Elizabeth (17d):
thanks, I appreciate all this info. 
JWS (1mo):
I guess I kinda want to say fiat justitia ruat caelum here 🤷
David Thorstad (1mo):
You folks impress me! But seriously, that's a big ask.
RobBensinger (1mo):
I'm a pretty big fan of Nate's public write-up on his relationship to Sam and FTX. Though, sure, this is going to be scarier for people who were way more involved and who did stuff that twitter mobs can more easily get mad about. This is part of why the main thing I'm asking for is a professional investigation, not a tell-all blog post by every person involved in this mess (though the latter are great too). An investigation can discover useful facts and share them privately, and its public write-up can accurately convey the broad strokes of what happened, and a large number of the details, while taking basic steps to protect the innocent.

I want to flag for Forum readers that I am aware of this post and the associated issues about FTX, EV/CEA, and EA. I have also reached out to Becca directly. 

I started in my new role as CEA’s CEO about six weeks ago, and as of the start of this week I’m taking a pre-planned six-week break after a year sprinting in my role as EV US’s CEO[1]. These unusual circumstances mean our plans and timelines are a work in progress (although CEA’s work continues and I continue to be involved in a reduced capacity).

Serious engagement with and communication about questions and concerns related to these issues is (and was already) something I want to prioritize, but I want to wait to publicly discuss my thoughts on these issues until I have the capacity to do so thoroughly and thoughtfully, rather than attempt to respond on the fly. I appreciate people may want more specific details, but I felt that I’d at least respond to let people know I’ve acknowledged the concerns rather than not responding at all in the short-term.

  1. ^

     It’s unusual to take significant time off like this immediately after starting a new role, but this is functionally a substitute for me not taking an extended break bet

... (read more)

but I want to wait to publicly discuss my thoughts on these issues until I have the capacity to do so thoroughly and thoughtfully, rather than attempt to respond on the fly

You did speak publicly about them, in a large newspaper no less: https://www.washingtonpost.com/opinions/2024/03/28/sam-bankman-fried-effective-truism-fraud

To be clear, I think it's still fine to take some time. But it does seem like you made claims there that the EA community has engaged in successful investigation and reflection, so saying that you want to hold off on engaging until you can do so "thoroughly and thoughtfully" rings a bit hollow, and sounds a bit like avoiding critical conversation while actively trying to spread beliefs this post calls into question. (Again, I recognize you have a hard job and I don't want to be too nitpicky about this, but the confluence of publishing an article in a major newspaper while saying you want to hold off on publicly discussing these issues feels off.)

My guess is the timing of Becca's post is related to your Washington Post article, though that's really just a random guess.

Jason (1mo):

You don't deserve negative karma for this comment (it was at -1 when I corrected that), but I think it's fair to recognize that the timing of the op-ed was indirectly dictated by the date Judge Kaplan set for sentencing. Publishing it probably wouldn't make sense at any other time, so Zach may have been stuck between being rushed into publishing it too early and not responding to the public-interest event at all. Also, it seems unlikely he booked six weeks off right after SBF's sentencing for that reason.

I'm not opining that I would have published all of the language in the op-ed if I didn't think I had done enough work to be able to communicate "thoroughly and thoughtfully" to the EA community. But I do feel some sympathy for the position Zach found himself in with respect to a hard external deadline.

Also, it seems unlikely he booked six weeks off right after SBF's sentencing for that reason.

Totally. To be clear, I think it's totally fine for Zach to take time off, and I wasn't intending to comment on that at all. I was just responding to what I perceived to be a separate thread: wanting to hold off on engaging until he had formed considered opinions.

Habryka (1mo):
Yeah, agree, that makes sense. I do think it was the wrong call, but I can understand the perceived urgency.
Jason (1mo):

But I ultimately decided against doing that for a variety of reasons, including that it was very costly to me,

Epistemic status: not fleshed out

(This comment is not specifically directed to Rebecca's situation, although it does allude to her situation in one point as an example.)

I observe that the powers-that-be could make it less costly for knowledgeable people to come forward and speak out. For example, some people may have legal obligations, such as the duties a board member owes a corporation (extending in some cases to former board members).[1] Organizations may be able to waive those duties by granting consent. Likewise, people may have concerns[2] about libel-law exposure (especially to the extent they have exposure to the world's libel-tourism capital, the UK). Individuals and organizations can mitigate these concerns by, for instance, agreeing not to sue any community member for libel or any similar tort for FTX/SBF-related speech. (One could imagine an exception for suits brought in the United States in which the individual or organization concedes their status as a public figure, and does not present any other claims that would allow a finding of liability witho... (read more)

JWS (1mo):

edit: As always, disagree-/downvoters, it would be good to hear why you disagree, as I'm not sure what I've written below merits a disagree-vote, and especially not a downvote.

Thanks for sharing your thoughts Rebecca.

I do find myself wishing that some of these discussions from the core/leadership of EA[1] were less vague. I noticed this with Habryka's reaction to the recent EA column in the Washington Post, where he mentions 'people he's talked to at CEA'. It would be good to know who those people at CEA are.

I accept some people are told things informally, and in confidence etc., but it would seem to be useful to have as much as is possible/reasonable in the public domain, especially since these discussions/decisions seem to have such a large impact on the rest of the community in terms of reputational impact, organisational structure and hiring, grantmaking priorities and decisions etc.

For example, I again respect you said that your full thoughts would be 'highly costly' to share, but it'd be enlightening to know which members of the EV board you disagreed with so much that you felt you had to resign. If you can't share that, knowing why you can't share that. Or if not that, knowi... (read more)

For example, I again respect you said that your full thoughts would be 'highly costly' to share, but it'd be enlightening to know which members of the EV board you disagreed with so much that you felt you had to resign. 

At the time of Rebecca's resignation, how many members did the EVF USA board have? As of January 2023, the board was [Beckstead, Kagan, and Ross] with Beckstead recused from FTX-related matters for obvious reasons. In April, her resignation and the addition of Eli Rose & Zach Robinson were concurrently announced (although it is not clear if she decided to resign prior to these appointments to the board).

My sense is the EV UK board mattered a good amount as well during this period, and Claire Zabel was also on the board during the relevant period (I do not know which board members Becca was thinking about in the above post, if any).

Arepo (1mo):
Rebecca's comments seem consistent with Beckstead being part of her concern, though.

Also, I feel mean for pressing the point against someone who is clearly finding this stressful and is no more responsible for it than anyone else in the know, but I really want someone to properly explain what the warning signs the leadership saw were, who saw them, and what was said internally in response to them. I don't even know how much that will help with anything, to be honest, so much as I just want to know. But at least in theory, anyone who behaved really badly should be removed from positions of power. (And I do mean just that: positions where they run big orgs: I'm not saying they should be shunned or they can't be allowed to contribute to the community intellectually any more.) If Rebecca won't do this, someone else should. But also, depending on how bad the behavior of leaders actually was, in NOT saying more people with inside knowledge are probably either a) helping people escape responsibility for really bad behavior or b) making what were reasonably sympathetic mistakes that many people might have made in the same position sound much worse than they were through vagueness, leading to unfair reputational damage. (EDIT: I should say that sadly, I think a) is much th... (read more)

ICYMI: I wrote this in response to a previous "EA leaders knew stuff" story. [Although I'm not sure if I'm one of the "leaders" Becca is referring to, or if the signs I mentioned are what she's concerned about.]

Ulrik Horn (26d):
Am I correct in interpreting your comment as something like "Rebecca says it's costly to say more which might imply she is sitting on not yet disclosed information that might put powerful EAs in a bad light"? I did not really pick up on this when reading the OP but your comment got me worried that maybe there is some information that should be made public?
David Mathers (26d):
'Am I correct in interpreting your comment as something like "Rebecca says it's costly to say more which might imply she is sitting on not yet disclosed information that might put powerful EAs in a bad light"?' Yes, that's what I meant. Maybe not "not yet disclosed", though: it might just be confirmation that the portrait painted here is indeed fair and accurate: https://time.com/6262810/sam-bankman-fried-effective-altruism-alameda-ftx/

EDIT: I don't doubt that the article is broadly literally accurate, but there's always a big gap between the claims a piece of journalism like this makes if you take it absolutely 100% literally, line by line, and the general impression you'd get about what happened if you fill in the blanks from those facts in the way the piece encourages you to. It's the latter whose accuracy is currently unclear, though after Rebecca's post I am leaning heavily towards the view that the broad impression painted by the article is indeed accurate.

Also, I don't know if Spencer Greenberg's podcast with Will has been recorded yet, but if it hasn't, I think he absolutely should ask Will what the phrase about "extensive and significant mistakes" here actually refers to. EDIT: Having listened (vaguely, while working) to most of the Sam Harris interview with Will, as far as I can tell Harris entirely failed to ask anything about this, which is a huge omission. Another question Spencer could ask Will is: did you specify to Harris that this topic was off-limits?

I felt the Sam Harris interview was disappointingly soft and superficial. To be fair to MacAskill, Harris did an unusually bad job of pushing back and taking a harder line, and so MacAskill wasn't forced to get deeper into it.

And basically nothing about how to avoid a similar situation happening again? Except for a few lines about decentralisation. Quite uninspiring.

David Mathers (26d):
Yes, Harris should have asked Will about this: https://time.com/6262810/sam-bankman-fried-effective-altruism-alameda-ftx/ 

I have not been very closely connected to the EA community the last couple of years, but based on communications, I was expecting:

  • an independent and broad investigation
  • reflections by key players who "approved" of and collaborated with SBF on EA endeavors, such as Will MacAskill, Nick Beckstead, and 80K.

For example, Will posted in his Quick Takes 9 months ago:

I had originally planned to get out a backwards-looking post early in the year, and I had been holding off on talking about other things until that was published. That post has been repeatedly delayed, and I’m not sure when it’ll be able to come out. https://forum.effectivealtruism.org/posts/TeBBvwQH7KFwLT7w5/william_macaskill-s-shortform?commentId=yxK8NCxrZQBjAxpCL

It now turns out that this has changed into podcasts, which is better than nothing, but doesn't give room to conversation or accountability.

I think 80K has been most open in reflecting on their mistakes and taking responsibility.

I was also implicitly expecting:

  • a broader conversation in the community (on the Forum and/or at conferences) where everyone could ask questions and some kind of plan of improvement would be made

It is disappointing that too little h... (read more)

It now turns out that this has changed into podcasts, which is better than nothing, but doesn't give room to conversation or accountability.

Formatting error; this is something Siebe is saying, not part of the Will quotation.

SiebeRozendal (1mo):
Thanks Rob! Fixed it.

the EA community is not on track to learn the relevant lessons from its relationship with FTX. 

In case it helps, here’s some data from Meta Coordination Forum attendees on how much they think the FTX collapse should influence their work-related actions and how much it has influenced their work-related actions:

  • On average, attendees thought the average MCF attendee should moderately change their work-related actions because of the FTX collapse (Mean of 4.0 where 1 = no change and 7 = very significant change; n = 39 and SD = 1.5)
  • On average, attendees reported that the FTX collapse had moderately influenced their work-related actions (Mean of 4.2 where 1 = no change and 7 = very significant change; n = 39 and SD = 1.7)


My interpretation of this is that MCF attendees have changed their professional behavior a reasonable amount (according to MCF attendees), although maybe this doesn't address broader questions of reform (e.g., ecosystem-wide work that requires substantial coordination).

And here’s a summary of responses to the question “what lessons do we need to learn from the past year”, asked directly after the above question:  

  1. Improve govern
... (read more)

I think in any world, including ones where EA leadership is dropping the ball or is likely to cause more future harm like FTX, it would be very surprising if they individually had not updated substantially. 

As an extreme illustrative example, really just intended to get the intuition across, imagine that some substantial fraction of EA leaders are involved in large scale fraud and continue to plan to do so (which to be clear, I don't have any evidence of), then of course the individuals would update a lot on FTX, but probably on the dimensions of "here are the ways Sam got caught, here is what I really need to avoid doing to not get caught myself". 

It would be very surprising if a crisis like FTX would not cause at least moderately high scores on a question like the one you chart above. The key thing that I would want to see is evidence that the leadership has updated in a direction that will likely prevent future harm, and does not push people further into deceptive relationships with the world. 

The concrete list of changes below helps, though as far as I can tell practically none of them have actually been implemented (and the concrete numbers you cite for people w... (read more)

There are no whistleblower systems in place at any major EA orgs as far as I know

I’ve heard this claim repeatedly, but it’s not true that EA orgs have no whistleblower systems. 

I looked into this as part of this project on reforms at EA organizations: Resource on whistleblowing and other ways of escalating concerns

  • Many organizations in EA have whistleblower policies, some of which are public in their bylaws (for example, GiveWell and ACE publish their whistleblower policies among other policies). EV US and EV UK have whistleblower policies that apply to all the projects under their umbrella (CEA, 80,000 Hours, etc.) This is just a normal thing for nonprofits; the IRS asks whether you have one even though they don't strictly require it, and you can look up on a nonprofit’s 990 whether they have such a policy. 
  • Additionally, UK law, state law in many US states, and lots of other countries provide some legal protections for whistleblowers. Legal protection varies by state in the US, but is relatively strong in California.
  • Neither government protections nor organizational policies cover all the scenarios where someone might reasonably want protection from ne
... (read more)

Neither government protections nor organizational policies cover all the scenarios where someone might reasonably want protection from negative effects of bringing a problem to light. But that seems to be the case in all industries, including in the nonprofit field in general, not something unusual about EA.

I think that is correct as far as it goes, but I suspect that the list of things you generally won't get protection from (from your linked post) is significantly more painful in practice in EA than in most industries. 

For example, although individuals dependent on small grants are probably particularly vulnerable to retaliation in ~all industries, that's practically a much bigger hole in EA than elsewhere. The general unavailability of protection for disclosures about entities you don't work for is more stifling in fields with a patchwork of mostly small-to-midsize orgs than in (say) the aerospace industry. Funding centralization could make retaliation easier to pull off. 

So while the scope of coverage might be similar on paper in EA, it seems reasonably possible that the extent of protection as applied is unusually weak in EA.

I’m not aware of any EA organizations that

... (read more)
Habryka (1mo):
My understanding is that UK-law and state-law whistleblower protections are extremely weak and only cover knowledge of literal and usually substantial crimes (including in California). I don't think any legally-mandated whistleblower protections make much of a difference for the kind of thing that EAs are likely to encounter. I checked the state of the law in the FTX case, and unless someone knew specifically of clear fraud going on, they would not have been protected, which seems to make these protections mostly useless for things we care about. They also wouldn't cover e.g. capabilities companies being reckless or violating commitments they made, unless they break some clear law, and even then protections are pretty limited. So I can't really think of any case, except the most extreme, in which at least the US state protections come into play.

I was not aware of any CEA or 80k whistleblower systems. If they have some, that seems good! Is there any place that has more details on them? (You also didn't mention them in the article you linked, which I had read recently, so I wasn't aware of them.)

Also, for the record, organizational whistleblower protections seem not that important to me. I e.g. care more about having norms against libel suits and other litigious behavior, though the norms for that seem mostly gone, so I expect substantially less whistleblowing of that type in the future. I mostly covered them because I was comprehensively covering the list of things people submitted to the Coordination Forum.
Josh Jacobson
25d
An alternative take on this (I haven’t researched this topic myself): https://forum.effectivealtruism.org/posts/LttenWwmRn8LHoDgL/josh-jacobson-s-quick-takes?commentId=ZA2N2LNqQteD5dE4g

"Better vet risks from funders/leaders, have lower tolerance for bad behavior, and remove people responsible for the crisis from leadership roles."

I don't think any such removals have happened, and my sense is that tolerance of bad behavior of the type that seems to me most responsible for FTX has gone up (in particular, heavy optimization for optics and a large tolerance for divergences between public narratives and what is actually going on behind the scenes).

I'd like to single out this part of your comment for extra discussion. On the Sam Harris podcast, Will MacAskill named leadership turnover as his main example of post-FTX systemic change; I'd love to know why you and Will seem to be saying opposite things here.

I'd also love to hear from more people whether they agree or disagree with Oliver on these two points:

  • Was "heavy optimization for optics and large tolerance for divergences between public narratives and what is actually going on behind the scenes" one of the EA behaviors that was most responsible for FTX?
  • Has this behavior increased in EA post-FTX?

So, I think it's clear that a lot of leadership turnover has happened. However, my sense is that the kind of leadership turnover that has occurred is anti-correlated with what I would consider good. Most importantly, it seems to me that the people in EA leadership who I felt were often the most thoughtful about these issues took a step back from EA, often because EA didn't live up to their ethical standards, or because they burned out trying to effect change during what has been a very stressful period (or burned out for other reasons, unrelated to trying to effect change).

Below I'll make a concrete list of leadership transitions I know have occurred and judge specific individuals. To be clear, these are my personal judgements, and I expect lots of people will disagree with me here:

Max Dalton left CEA. Despite my many disagreements with him, he still seemed to me the best CEO that CEA has had historically, and he seemed to have a genuine, strong interest in acting in high-integrity ways. My understanding is that the FTX stuff burned him out (as well as some of the Owen stuff, though the FTX stuff seemed more important).

He was replaced by Zac…

"left the EV board"

Given that it appears EVF will soon be sent off to the scrapping yards for disassembly, changes in EVF board composition -- for better or worse -- may be less salient than they would have been in 2022 or even much of 2023.

So "a lot of leadership turnover has happened" may carry less weight than it would have had those changes occurred in years past. Furthermore, some of these changes seem less connected to FTX than others, so it's not clear to me how much turnover has happened as a fairly direct result of FTX. The most related change was Will & Nick leaving the EVF board, but I strongly suspect there was little practical choice there, so it is weak evidence of any internal change in direction.

All that is to say that I am not sure how much the nominal extent of leadership turnover suggests EA is turning over a new leadership leaf or something.

Buck
24d
Who on your list matches this description? Maybe Becca if you think she's thoughtful on these issues? But isn't that one at most?
Habryka
24d
Becca, Nicole, and Max all stand out as people who I think burned out trying to make things go better around the FTX stuff. Also, Claire leaving her position worsened my expectations of how much Open Phil will do things that seem bad. Alexander also seems substantially worse than Holden on this dimension. I think Holden was on the way out anyways, but my sense was that Claire found the FTX-adjacent work very stressful and that played a role in her leaving. (I don't think she agrees with me on many of these issues, but I nevertheless trusted her decision-making more than others' in the space.)
JWS
1mo

What are you referring to when you say "Naive consequentialism"?[1] Because I'm not sure that it's what others reading might take it to mean?

Like, you seem critical of the current plan to sell Wytham Abbey, but I think many critics view the original purchase of it as an act of naive consequentialism that ignored the side effects it has had, such as reinforcing negative views of EA. Can both the purchase and the sale be cases of NC? Are they the same kind of thing?

So I'm not sure the 3 respondents from the MCF and you have the same thing in mind when you talk about naive consequentialism, and I'm not quite sure I am either.

  1. ^

    Both here and in this other example, for instance

The issue is that there are degrees of naiveness. Oliver's view, as I understand it, is that there are at least three positions:

  • Maximally Naive: Buy nice event venues, because we need more places to host events.
  • Moderately Naive: Don't buy nice event venues, because it's more valuable to convince people that we're frugal and humble than it is valuable to host events.
  • Non-Naive: Buy nice event venues, because we need more places to host events, and the value of signaling frugality and humility is in any case lower than the value of signaling that we're willing to do weird and unpopular things when the first-order effects are clearly positive. Indeed, trying to look frugal here may even cause more harm than benefit, since:
    • (a) it nudges EA toward being a home for empty virtue-signalers instead of people trying to actually help others, and
    • (b) it nudges EA toward being a home for manipulative people who are obsessed with controlling others' perceptions of EA, as opposed to EA being a home for honest, open, and cooperative souls who prize doing good and causing others to have accurate models over having a good reputation.

Optimizing too hard for reputation can get you into hot water, because…

Habryka
26d
I think this captures some of what I mean, though my model is also that the "maximally naive" view is not very stable, in that if you are being "maximally naive" you do often end up just lying to people (because the predictable benefits of lying outweigh the predictable costs in that moment). I do think a combination of being "maximally naive" and strong norms against deception and in favor of honesty can work, though in general people want good reasons for following norms, and arguing for honesty requires some non-naive reasoning.
David Mathers
26d
'Naive consequentialist plans also seem to have increased since FTX, mostly as a result of shorter AI timelines and much more involvement of EA in the policy space.' This gives me the same feeling as Rebecca's original post: that you have specific information about very bad stuff that you are (for good or bad reasons) not sharing. 
Habryka
26d
I don't particularly feel like my knowledge here is confidential; it would just take a bunch of inferential distance to cross. I do have some confidential information, but it doesn't feel that load-bearing to me. This dialogue has a bit of the flavor of the kind of thing I am worried about: https://www.lesswrong.com/posts/vFqa8DZCuhyrbSnyx/integrity-in-ai-governance-and-advocacy?revision=1.0.0
Jason
1mo
At the risk of over-emphasizing metrics, it seems that at least some of these reforms could and probably should be broken down into SMART goals (i.e., those that are specific, measurable, achievable, relevant, and time-bound). Example: "Better vet risks from funders/leaders" might be broken down into sub-tasks like:

  1. Stratify roles and positions by risk level (critical, severe, moderate, etc.);
  2. Determine priorities for implementation and the re-vetting schedule;
  3. Develop adjudication guidelines;
  4. Decide who investigates and adjudicates suitability;
  5. Set measurable and time-bound progress indicators (e.g., the holders of 75% of Critical Risk roles/positions have been investigated and adjudicated by the end of 2025).

[Note: The specific framework above borrows from the framework for security clearances and public-trust background checks in the US government. Obviously things in EA would need to be different, and the risks are different, so this is meant as an example rather than a specific proposal on this point. Yet, some of the core system needs would be at least somewhat similar.]

@Rebecca Kagan I've sent you a message and think it could be valuable for me and perhaps other new EV board members to get more information from you in order to learn and avoid mistakes. I'd be happy to take you up on your offer for discussion.

'and think confusion on this issue has indirectly resulted in a lot of harm.'

Can you say a bit more about this?

Thank you for posting this publicly. It's useful information for everyone to know.

Wasn't there some law firm that did an investigation? Plus some other projects listed here.

It would be useful for you to clarify exactly what you'd like to see happen and how this differs from what did happen, even though this might be obvious to someone like you who is high-context on the situation. As it stands, I'd have to do a bit of research to figure out what you're suggesting.

The post has a footnote, which reads:

Although EV conducted a narrow investigation, the scope was far more limited than what I’m describing here, primarily pertaining to EV’s legal exposure, and most results were not shared publicly.

As far as I know, what has been shared publicly from the investigation is that no one at EVF had actual knowledge of SBF's fraud.

My take is:

  • EA (i.e. mainly elite EAs) fucked up and bears considerable responsibility for the FTX thing
  • EA also fucked up big time with the OpenAI board drama, in a way that blew up less badly than it could have, but reflects even worse on the state of elite EA than FTX does
  • Public investigations and post-mortems won’t help per se. What would help is a display of leadership that convincingly puts to bed any concern of similarly poor epistemics and practices taking place in the future

Wasn't the OpenAI thing basically the opposite of the mistake with FTX, though? With FTX, people ignored what appears to have been a fair amount of evidence that a powerful, allegedly ethical businessperson was in fact shady. At OpenAI, people got what they perceived as evidence (and we've no strong evidence they were wrong) that a powerful, allegedly ethically motivated businessperson was in fact shady, so they learned the lessons of FTX and tried to do something about it (and failed).

George Noether
26d
I think that's why it's informative. If EA radically changes in response to the FTX crisis, then it could easily put itself in a worse position (leading to more negative consequences in the world). The intrinsic problem appears to be in the quality of the governance, rather than a systematic error/blind-spot.
George Noether
26d
To be more clear, I am bringing the OpenAI drama up because it is instructive for highlighting what is and is not going wrong more generally. I don't think the specifics of what went wrong with FTX point at the central thing that's of concern. I think the key factor behind EA's past and future failures comes down to poor-quality decision-making among those with the most influence, rather than the degree to which everybody is sensitive to someone's shadiness. (I'm assuming we agree FTX and the OpenAI drama were both failures, and that failures can happen even among groups of competent, moral people who act according to the expectations set for them.)

I don't know what the cause of the poor decision-making is. Social norms preventing people from expressing disagreement, org structures, unclear responsibilities, conflicts of interest, lack of communication, low intellectual diversity — it could be one of these, a combination, or maybe something totally different. I think it should be figured out and resolved, though, if we are trying to change the world.

So, if there is an investigation, it should be part of a move toward making sure EAs in positions of power will consistently handle difficult situations incredibly well (as opposed to just satisfying people's needs for more specific explanations of what went wrong with FTX). There are many ways in which EA can create or destroy value, and looking just at our eagerness to 'do something' in response to people being shady is a weirdly narrow metric to assess the movement on.

EDIT: would really appreciate someone saying what they disagree with