All of Habryka's Comments + Replies

This is an extremely rich guy who isn't donating any of his money.

FWIW, I totally don't consider "donating" a necessary component of taking effective altruistic action. Most charities seem much less effective than the most effective for-profit organizations, and most of the good in the world seems achieved by for-profit companies. 

I don't have a particularly strong take on Bryan Johnson, but using "donations" as a proxy seems pretty bad to me.

Less than a year ago DeepMind and Google Brain were two separate organizations (both making cutting-edge contributions to AI development). My guess is that if you broke DeepMind off from Google, you would pretty quickly get competition between DeepMind and Google Brain again (and more broadly make coordination on slowing things down a more multilateral problem).

But more concretely, anti-trust action makes all kinds of coordination harder. After an anti-trust action that destroyed billions of dollars in economic value, the ability to get people in the same room and even consider coordinating goes down a lot, since that action itself might invite further anti-trust action.

Huh, fwiw I thought this proposal would increase AI risk, since it would increase competitive dynamics (and generally make coordinating on slowing down harder). I at least didn't read this post as x-risk motivated (though I admit I was confused what its primary motivation was).

Karthik Tadepalli
1d
I read it as aiming to reduce AI risk by increasing the cost of scaling. I also don't see how breaking deepmind off from Google would increase competitive dynamics. Google, Microsoft, Amazon and other big tech partners are likely to be pushing their subsidiaries to race even faster since they are likely to have much less conscientiousness about AI risk than the companies building AI. Coordination between DeepMind and e.g. OpenAI seems much easier than coordination between Google and Microsoft.
Hauke Hillebrandt
1d
AI labs tend to partner with Big Tech for money, data, compute, scale etc. (e.g. Google/DeepMind, Microsoft/OpenAI, and Amazon/Anthropic). Presumably to compete better? If they're already competing hard now, then it seems unlikely that they'll coordinate much on slowing down in the future. Also, it seems like a function of timelines: antitrust advocates argue that breaking up firms / preventing mergers would slow an industry down in the short run but speed it up in the long run by increasing competition, but if competition is usually already healthy, as libertarians often argue, then antitrust interventions might slow down industries in the long run.

Yeah, that's a decent link. I do think this comment is more about whether anti-recommendations for organizations should be held to a similar standard. My comment also included some criticisms of Sean personally, which I think do also make sense to treat separately, though I at least definitely intend to try to debias my statements about individuals on this dimension, after my experiences with SBF in particular.

Hmm, I agree that there was some aggression here, but I felt like Sean was the person who first brought up direct criticism of a specific person, and a very harsh one at that (harsher than mine, I think).

Like, Sean's comment basically said "I think it was directly Bostrom's fault that FHI died a slow painful death, and this could have been avoided with the injection of just a bit of competence in the relevant domain". My comment is more specific, but I don't really see it as harsher. I also have a prior to not go into critiques of individual people, but that's what Sean did in this context (of course Bostrom's judgement is relevant, but I think in that case so is Sean's).

Linch
2d

Sure, social aggression is a rather subjective call. I do think decoupling/locality norms are relevant here. "Garden variety incompetence" may not have been the best choice of words on Sean's part,[1] but it did seem like a) a locally scoped comment specifically answering a question that people on the forum understandably had, b) much of it empirically checkable (other people formerly at FHI, particularly ops staff, could present their perspectives re: relationship management), and c) Bostrom's capacity as director is very much relevant to the discussi...

Pushback (in the form of arguments) is totally reasonable! It seems very normal that if someone is arguing for some collective path of action, using non-shared assumptions, that there is pushback. 

The thing that feels weirder is to invoke social censure, or to insist on pushback when someone is talking about their own beliefs and not clearly advocating for some collective path of action. I really don't think it's common for people to push back when someone is expressing some personal belief of theirs that is only affecting their own actions. 

In t...

This also roughly matches my impression. I do think I would prefer the EA community to either go towards more centralized governance or less centralized governance in the relevant way, but I agree that given how things are, the EA Forum team has less leeway with moderation than the LW team. 

 I think this might be one of the LTFF writeups Oli mentions (apologies if wrong), and seems like a good place to start

Yep, that's the one I was thinking about. I've changed my mind on some of the things in that section in the (many) years since I wrote it, but it still seems like a decent starting point.

In the examples you give, the arguments for and against are fairly cached, so there’s less of a need to bring them up. That doesn’t apply here. I also think your argument is often false even in your examples - in my experience, the bigger the gap between the belief the person is expressing and that of the ~average of everyone else in the audience, the more likely there is to be pushback (though not always by putting someone on the spot to justify their beliefs, e.g. awkwardly changing the conversation or straight-out ridiculing the person for the belief).

This thread doesn't feel great for this, though CSER is an organization for which I do really wish more people shared their assessments. Also happy to have a call if your curiosity extends that far, and you would be welcome to write up the things that I say in that call publicly (though of course that's a lot of work and I don't think you have any obligation to do so). 

Nisan
3d
(Thanks, dm sent.)
Habryka
3d

Thanks Sean. I think this is a good comment and it makes me understand your perspective better.

I do think we obviously have large worldview differences here, that seem maybe worth exploring at some point, but this comment (as well as some private conversations sparked by these comments with others at FHI) made me feel more sympathetic to the perspective of "there is some history-rewriting happening that seems scary, where the university gets portrayed as this kind of boogeyman, and while it does seem the university did some unreasonable-seeming ...

Deleting this because on re-reading I think I'm just repeating myself, but in a more annoyed way. Thanks for checking with other people, I'll leave it at that.


Thank you. I’m grateful you checked with other people. Yes, I do think there is some history rewriting and mythologising going on here compared to my own memory of how things were, and this bothers me because I think the truth does matter.

There is a very real sense in which Nick had a pretty sweet setup at Oxford, in terms of having the power and influence to do an unusual thing. And there were a bun...

Yeah I made a similar point here.

You made hostile claims that weren't following on from prior discussion,[1] and in my view nasty and personal insinuations as well, and didn't have anything to back it up. 

This seems relatively straightforwardly false. In as much as Sean is making claims about the right strategy to follow for FHI, and claiming that the errors at FHI were straightforwardly Bostrom's fault and attributable to 'garden variety incompetence', the degree of historical success of the strategies that Sean seems to be advocating for is of course relevant in assessing whet...

Yeah, I agree this is a real dynamic. It doesn't sound unreasonable for me to have a standard link that I link to if I criticize people on here, one that makes it salient that I am aspiring to be less asymmetric in the information I share (I do think the norms are already pretty different over on LW, where if anything I think criticism is a bit less scrutinized than praise, so it's not like this is a totally alien set of norms).

Will Aldred
3d
Perhaps this old comment from Rohin Shah could serve as the standard link? (Note that it’s on the particular case of recommending people do/don’t work at a given org, rather than the general case of praise/criticism, but I don’t think this changes the structure of the argument other than maybe making point 1 less salient.) Excerpting the relevant part:
Habryka
3d

I don't understand. I do not consider myself to be under the obligation that all negative takes I share about an organization must be accompanied by a full case for why I think those are justified.

Similar to how it would IMO be crazy to request that all positive comments about an organization be accompanied by full justifications for one's judgement.

I have written about my feelings about CSER and Leverhulme some in the past (one of my old LTFF writeups for example includes a bunch of more detailed models I have of CSER). I have defini...

Linch
3d

...I do not consider myself to be under the obligation that all negative takes I share about an organization...

Fwiw I think part of the issue that I had[1] with your comment is that the comment came across much more aggressively and personally, rather than as a critique of an organization. I do think the bar for critiquing individuals ought to be moderately higher than the bar for critiquing organizations. Particularly when the critique comes from a different place/capacity[2] than strictly necessary for the conversation[3].

I expect some other pe...

Rebecca
3d

In my experience people intuitively update less on positive comments and more on negative comments to correct for this asymmetry (that it's more socially acceptable to give unsupported praise than unsupported criticism). Your preferred approach to correcting the asymmetry, while I agree it is better in the abstract, doesn't work in the context of these existing corrections.

JWS
3d
I don't understand your lack of understanding. My point is that you're acting like a right arse. When people make claims, we expect there to be some justification proportional to the claims made. You made hostile claims that weren't following on from prior discussion,[1] and in my view nasty and personal insinuations as well, and didn't have anything to back it up. I don't understand how you wouldn't think that Sean would be hurt by it.[2] So to me, you behaved like an arse, knowing that you'd hurt someone, didn't justify it, got called out, and are now complaining. So I don't really have much interest in continuing this discussion for now, or much opinion at the moment of your behaviour or your 'integrity'.

1. ^ Like nobody was discussing CSER/CFI or Sean directly until you came in with it
2. ^ Even if you did think it was justified

For what it's worth, I'm (at least partly) sympathetic to Oli's position here. If nothing else, from my end I'm not confident that the combined time usage of:

[Oli producing book-length critique of CSER/Leverhulme, or me personally, depending] +
[me producing presumably book-length response] +
[further back and forth] +
[a whole lot of forum readers trying to unpick the disagreements]

is overall worth it, particularly given (a) it seems likely to me there are some worldview/cultural differences that would take time to unpick and (b) I will be limited in what I ...

> "and would update a good amount on reports by people who were actually there, especially in the later years."

For takes from people you might trust more than me, you might consider reaching out to Owen Cotton-Barratt, Niel Bowerman, or Page Hedley, all of whom played relevant roles later than me. 

Nisan
4d

CSER and Leverhulme, which I think are institutions that have overall caused more harm than good and I wish didn't exist

I'd love if you could comment on which concrete actions were harmful. (I donated to CSER a long time ago and then didn't pay attention to what they were doing, so I'm curious.)

Habryka
4d

I am very glad that Bostrom chose the path he chose with FHI, in contrast to the path you seem to have chosen with CSER and Leverhulme, so I am inclined to not trust your take here too much (though your closeness and direct experience of course is important and relevant here and makes up for some of my a-priori skepticism). Where FHI successfully continued to produce great intellectual work, I have seen other organizations in similar positions easily fall prey to the demands and pressures and become a hollow shell of political correctness and vapid ideas.

It ...

Sean_o_h
3d

Thanks Habryka. My reason for commenting is that a one-sided story is being told here about the administrative/faculty relationship stuff, both by FHI and in the discussion here, and I feel it to be misleading in its incompleteness. It appears Carrick and I disagree and I respect his views, but I think many people who worked at FHI felt it to be severely administratively mismanaged for a long time. I felt presenting that perspective was important for trying to draw the right lessons.

I agree with the general point that maintaining independence under this ki...

JWS
4d

Sorry Oli, but what is up with this (and your following) comment?

From what I've read from you,[1] you seem to value what you call "integrity" almost as a deontological good above all others. And this has gained you many admirers. But to my mind high-integrity actors don't make the claims you've made in both of these comments without bringing examples or evidence. Maybe you're reacting to Sean's use of 'garden variety incompetence', which you think is unfair to Bostrom's attempts to walk the fine line between independence and managing university politics, but ...

Sean is one of the under-sung heroes who helped build FHI and kept it alive. He did this by--among other things--careful and difficult relationship management with the faculty. I had to engage in this work too and it was less like being between a rock and a hard place and more like being between a belt grinder and another bigger belt grinder. 

One can disagree about apportioning the blame for this relationship--and in my mind, I divide it differently than Sean--but after his four years of first-hand experience, my response to Sean is to take his v...

Habryka
6d

Thank you Will! This is very much the kind of reflection and update that I have been hoping to see from you and other leaders in EA for a while.

I do hope that the momentum for translating these reflections into changes within the EA community is not completely gone given the ~1.5 years that have passed since the FTX collapse, but something like this feels like a solid component of a post-FTX response. 

I disagree with a bunch of object-level takes you express here, but your reflections seem genuine and productive and I feel like me and others can engage with them in good faith. I am grateful for that.

Yeah, I think just buying Twitter to steer the narrative seems quite bad. But like, I have spent a large fraction of my career trying to think of mechanism design for discussion and social media platforms and so my relation to Twitter is I think a pretty healthy "I think I see lots of ways in which you could make this platform much more sanity-promoting" in a way that isn't about just spreading my memes and ideologies. 

Will has somewhat less of that background, and I think would have less justified confidence in his ability to actually make the platform better from a general sanity perspective, though still seems pretty plausible to me he saw or sees genuine ways to make the platform better for humanity.

David Mathers
9d
I must say that, given that I know from prior discussion on here that you are not Will's biggest fan, your attempt to be fair here is quite admirable. There should maybe be an "integrity" react button? 

This link's hypothesis is about people just trying to fit in―but SBF seemed not to try to fit in to his peer group! He engaged in a series of reckless and fraudulent behaviors that none of his peers seemed to want.

(Author of the post) My model is that Sam had some initial tendencies for reckless behavior and bullet-biting, and those were then greatly exacerbated via evaporative cooling dynamics at FTX. 

It sounds like SBF drove away everyone who couldn't stand his methods until only people who tolerated him were left. That's a pretty different way of m...
Nathan Young
9d
What do you mean by "(Author of the post)"?

Sorry if I sounded redundant. I'd always thought of "evaporative cooling of group beliefs" like "we start with a group with similar values/goals/beliefs; the least extreme members gradually get disengaged and leave; which cascades into a more extreme average that leads to others leaving"―very analogous to evaporation. I might've misunderstood, but SBF seemed to break the analogy by consistently being the most extreme, and actively and personally pushing others away (if, at times, accidentally). Edit: So... arguably one can still apply the evaporative cooling concept to FTX, but I don't see it as an explanation of SBF himself.

Do you have links to people being very worried about gray goo stuff?

(Also, the post you link to makes this clear, but this was a prediction from when Eliezer was a teenager, or just turned 20, which does not make for a particularly good comparison, IMO)

Also, I don't think "completely uninvestigated" is a correct characterization -- they were investigated enough to be presented to a grand jury, which indicted SBF for campaign-finance violations. Federal prosecutors do not generally indict without a pretty good investigation first, especially in high-profile cases. I think we have a pretty decent idea of what he did (see pp. 18-22 of the prosecution's sentencing memo). Moreover, Salame and Singh -- who don't have extradition-related issues -- pled guilty to campaign-finance violations.

Oh, that is interesti...

I... think I will continue describing this as "weird" though it makes sense that as a lawyer it's not that weird to you.

It feels very off to have a bunch of crimes uninvestigated because someone fled the country and then was extradited, and I am pretty confused why the Bahamas cooperated with Sam here (my guess is that it's a case of political corruption, though I can also imagine other reasons). It's not like the Bahamas had anything obvious to gain from Sam not being convicted of the campaign finance violations.

Jason
12d
That's fair. Broader concerns about sovereignty and the integrity of the extradition system likely played a role. Although it's understandable why US prosecutors asked for provisional arrest and extradition despite not having obtained an indictment for campaign-finance violations yet, it's also understandable to me why the Bahamas wants to signal its expectation that the US comply with the treaty as written -- especially where the argument that the first extradition was good enough for these charges is legally weak.

Note that the US could have asked for permission to proceed on other charges -- see Article 14(b) of the treaty -- but I think that may have required going back through the Bahamas legal system again, and probably delaying the trial. Hence, the prosecution severed off those charges, and then decided not to proceed after getting its conviction on others. I don't know Bahamas campaign or extradition law, but there would also be difficulties if the offense was deemed one of a political character (Article 3(1)(a)) or was one that was not a criminal offense punishable by more than a year in prison under Bahamas law (Article 2(1)).

Also, I don't think "completely uninvestigated" is a correct characterization -- they were investigated enough to be presented to a grand jury, which indicted SBF for campaign-finance violations. Federal prosecutors do not generally indict without a pretty good investigation first, especially in high-profile cases. I think we have a pretty decent idea of what he did (see pp. 18-22 of the prosecution's sentencing memo). Moreover, Salame and Singh -- who don't have extradition-related issues -- pled guilty to campaign-finance violations.

This is great!

Note that this list is not comprehensive. In particular, it doesn't go into detail on any of the campaign finance violations and other policy stuff that Sam was involved in, which I think never ended up investigated due to a kind of weird agreement with the Bahamas authorities. But during the trial we did hear some pretty clear evidence of at least campaign finance violations (and my guess is there would be a bunch more if one kept digging, as well as stuff that wouldn't necessarily be crimes but that I think most people here would still consider highly unethical).

Peter Wildeford
9d
I’m confused - elsewhere you identify yourself as the author of this post but here you are commenting as if you have independently reviewed it?

Not weird. If you get a foreign country to extradite someone on charge X, you can't turn around and prosecute them for charge Y (if charge Y is different enough). That's a basic principle of extradition law, and I agree with the Bahamas that the campaign-finance violations were different enough from the original counts.

https://apnews.com/article/bankmanfried-ftx-crypto-charges-ellison-60aa798f4aacb0ddd6a422b72e5a7614

They probably could have recharged after some legal tapdancing, but they decided to ask Judge Kaplan to consider this conduct at sentencing (w...

It seems to me that a case study of how exactly FTX occurred, and where things failed, would be one of the best things to use to figure out what to do instead.

Currently the majority of people who have an interest in this are blocked by not really knowing what worked and didn't work in the FTX case, and so probably will have trouble arguing compellingly for any alternative, and also lack some of the most crucial data. My guess is you might have the relevant information from informal conversations, but most don't. 

I do think also just ...

Okay, that seems reasonable. But I want to repeat my claim[1] that people are not blocked by "not really knowing what worked and didn't work in the FTX case" – even if e.g. there was some type of rumor which was effective in the FTX case, I still think we shouldn't rely on that type of rumor being effective in the future, so knowing whether or not this type of rumor was effective in the FTX case is largely irrelevant.[2]

I think the blockers are more like: fraud management is a complex and niche area that very few people in EA have experience with, and...

My current sense is that there is no motivation to find an alternative because people mistakenly think it works fine enough, and so there is no need to try to find something better (and also, in the absence of an investigation and clear arguments about why the rumor thing doesn't work, people probably think they can't really be blamed if the strategy fails again).

Suppose I want to devote some amount of resources towards finding alternatives to a rumor mill. I had been interpreting you as claiming that, instead of directly investing these resources towards finding an alternative, I should invest these resources towards an investigation (which will then in turn motivate other people to find alternatives).

Is that correct? If so, I'm interested in understanding why – usually if you want to do a thing, the best approach is to just do that thing.

Habryka
14d

I think investing in FTX was genuinely a good idea, if you were a profit maximizer, even if you strongly suspected the fraud. As Jason says, as an investor, losing money due to fraud isn't any worse than losing money because a company fails to otherwise be profitable, so even assigning 20%-30% probability to fraud for a high-risk investment like FTX, where you are expecting >2x returns in a small number of years, will not make a huge difference to your bottom line.

In many ways you should expect being the kind of person who is willing to commit fraud to be p...

I'm not the person quoted, but I agree with this part, and some of the reasons for why I expect the results of an investigation like this to be boring aren’t based on any private or confidential information, so perhaps worth sharing.

One key reason: I think rumor mills are not very effective fraud detection mechanisms.

Huh, the same reason you cite for why you are not interested in doing an investigation is one of the key reasons why I want an investigation. 

It seems to me that current EA leadership is basically planning to continue a "ou...

Interesting! I'm glad I wrote this then.

Do you think "[doing an investigation is] one of the things that would have the most potential to give rise to something better here" because you believe it is very hard to find alternatives to the rumor mill strategy? Or because you expect alternatives to not be adopted, even if found?

Becca, Nicole and Max all stand out as people who I think burned out trying to make things go better around FTX stuff. 

Also, Claire leaving her position worsened my expectations of how much Open Phil will do things that seem bad. Alexander also seems substantially worse than Holden on this dimension. I think Holden was on the way out anyways, but my sense was Claire found the FTX-adjacent work very stressful and that played a role in her leaving (I don't think she agrees with me on many of these issues, but I nevertheless trusted her decision-making more than others in the space).

Yeah, I think this would only make sense if you would somehow end up majorly shaping the algorithms and structure of Twitter. I don't think just being a shareholder really does much here.

Thanks, that's very helpful, though if you think it's mostly because the PR cost has already been paid, then that does provide little solace under my worldview. Let's assume the PR costs were still ongoing, do you think it would have then flipped the decision?

FWIW, buying Twitter still seems plausibly like a good idea to me. It sure seems to be the single place that is most shaping public opinion on a large number of topics I care a lot about (like AI x-risk attitudes), and making that go better seems worth a lot.

David Mathers
9d
I think that even if you buy that, Will's behavior is still alarming, just in a different way. Why exactly should we, as a community, think of ourselves as being fitted to steer public opinion? Weren't we just meant to be experts on charity, rather than everything under the sun? (Not to mention that Musk is not the person I would choose to collaborate with on that, but that's for another day.) Will complains about Sam's hubris, but what could be more hubristic than that?

I remember feeling nervous when I first started working in EA that (otherwise very sober-seeming) people were taking it as read that we were somehow important to the future of the whole world. That just seemed crazy and ominous to me. (And quite different from when I first became a GWWC member in 2012, where it was all just "give to good charities, be humble epistemically"; which to be clear was compatible with taking weird ideas seriously and I think people were doing that; I recall long conversations about Pascal's Wager, people already talking about AI risk etc.)

Trying to steer opinion in this way also just seems very manipulative to me. (I probably have unusually strong feelings about this because I'm autistic, but I think the autistic attitude is just better on average here.) In line with Will talking internally about "controlling the narrative" around EA and FTX: https://twitter.com/molly0xFFF/status/1712282768091042029 Some people will probably just shrug and say this is just PR, and I get that there is massive hindsight bias here, but reading this made me genuinely cringe.
Jason
19d
I'm struggling to draw the line between owning (a minority stake in) Twitter and having public opinion on certain topics meaningfully flow in a desired direction or path.
Stuart Buck
19d
I could imagine making that case, but what's the point of all the GiveWell-style analysis of evidence, or all the detailed attempts to predict and value the future, if in the end, what would have been the single biggest allocation of EA funds of all time was being proposed based on vibes?

I don't particularly feel like my knowledge here is confidential, it would just take a bunch of inferential distance to cross. I do have some confidential information, but it doesn't feel that load-bearing to me. 

This dialogue has a bit of a flavor of the kind of thing I am worried about: https://www.lesswrong.com/posts/vFqa8DZCuhyrbSnyx/integrity-in-ai-governance-and-advocacy?revision=1.0.0 

I think this captures some of what I mean, though my model is also that the "Maximally naive" view is not very stable, in that if you are being "maximally naive" you do often end up just lying to people (because the predictable benefits from lying to people outweigh the predictable costs in that moment). 

I do think combining being "maximally naive" with strong norms against deception and in favor of honesty can work, though in general people want good reasons for following norms, and arguing for honesty requires some non-naive reasoning.

My understanding is that UK and US state-law whistleblower protections are extremely weak and only cover knowledge of literal and usually substantial crimes (including in California). I don't think any legally-mandated whistleblower protections make much of a difference for the kind of thing that EAs are likely to encounter.

I checked the state of the law in the FTX case, and unless someone knew specifically of clear fraud going on, they would not have been protected, which seems like it makes them mostly useless for things we care about. They also w...

Josh Jacobson
20d
An alternative take on this (I haven’t researched this topic myself): https://forum.effectivealtruism.org/posts/LttenWwmRn8LHoDgL/josh-jacobson-s-quick-takes?commentId=ZA2N2LNqQteD5dE4g

Publishing it probably wouldn't make sense at any other time, so Zach may have been stuck between being rushed into publishing it too early or not responding to the public-interest event at all.

Yeah, agree, that makes sense. I do think it was the wrong call, but I can understand the perceived urgency.

Also, it seems unlikely he booked six weeks off right after SBF's sentencing for that reason.

Totally, to be clear, I think it's totally fine for Zach to take time off, and I wasn't intending to comment on that at all. I was just responding to what I perceived to be a separate thread, of wanting to hold off on engaging until he had formed considered opinions.

I wouldn't read too much into the exact ordering in the EA Forum digest. At least if I was making such a digest I would mostly be busy filling it up at all, and it would feel unnecessarily nitpicky and stressful to me to be judged on even the relative ordering of the articles I put in there.

Sam_Coggins
21d
Ah, fair call. I can see how my comment was nitpicky. I am still concerned about the promotion of the (well-intentioned) RCT post that seemed to undervalue integrity processes for doing RCTs on vulnerable people (in my view). But I appreciate I could have misinterpreted this. In any case, I can also see that my comment could be experienced as stressful or judgey by the Forum team AND author of the RCT post. I'm genuinely really sorry if this has happened. I appreciate you've taken on difficult and important tasks and trust you have the best of intentions with them :) Thanks for your efforts and I'll keenly be more tactful in future.

but I want to wait to publicly discuss my thoughts on these issues until I have the capacity to do so thoroughly and thoughtfully, rather than attempt to respond on the fly

You did speak publicly about them, in a large newspaper nonetheless: https://www.washingtonpost.com/opinions/2024/03/28/sam-bankman-fried-effective-truism-fraud

To be clear, I think it's still fine to take some time, but it does seem like you made claims that the EA community has engaged in successful investigation and reflection here, and so saying that you want to hold off on engaging u... (read more)

Jason
21d

You don't deserve negative karma for this comment (was at -1 when I corrected that), but I think it's fair to recognize that the timing of the op-ed was indirectly dictated by the date Judge Kaplan set for sentencing. Publishing it probably wouldn't make sense at any other time, so Zach may have been stuck between being rushed into publishing it too early or not responding to the public-interest event at all. Also, it seems unlikely he booked six weeks off right after SBF's sentencing for that reason.

I'm not opining that I would have published all of the l... (read more)

So, I think it's clear that a lot of leadership turnover has happened. However, my sense is that the kind of leadership turnover that has occurred is anti-correlated with what I would consider good. Most importantly, it seems to me that the people in EA leadership that I felt were often the most thoughtful about these issues took a step back from EA, often because EA didn't live up to their ethical standards, or because they burned out trying to effect change and this recent period has been very stressful (or burned out for other reasons, unrelated to tryi... (read more)

7
Buck
18d
Who on your list matches this description? Maybe Becca if you think she's thoughtful on these issues? But isn't that one at most?

left the EV board

Given that it appears EVF will soon be sent off to the scrap yard for disassembly, it seems that changes in EVF board composition -- for better or worse -- may be less salient than they would have been in 2022 or even much of 2023.

So "a lot of leadership turnover has happened" may not be quite as high-magnitude as had those changes had occurred in years past. Furthermore, some of these changes seem less connected to FTX than others, so it's not clear to me how much turnover has happened as a fairly direct result of FTX. Th... (read more)

My sense is people are realizing this, based on the disagree-votes, but just for the record: No, the most important part of any FTX investigation is of course not that we prevent causing further harm to EA, lol. It's to prevent causing further harm to the world. Indeed, this kind of naive-consequentialist reasoning seems like one of the key components of how FTX happened in the first place.

7
Denis
20d
Thank you for this comment. I really appreciate when someone puts an explanation for why they down-voted something I wrote :D

Indeed, I knew that what I wrote would be unpopular when I wrote it. And maybe it just looks like I'm an old cynic polluting the idealism of youth. But I don't agree that it's naive. If anything, the naivete lies on the other side. How can an EA not realise that damaging the EA movement is damaging to the world? So you need to balance the potential damage to the world through damage to EA against the potential of avoiding damage to the world via the investigation. I have not seen any comments mentioning this, so I wrote about it, because it is important.

I'm not clear in what sense anything the EA movement did with SBF has damaged the world, unless you believe that SBF would have behaved ethically were it not for the EA movement, and that EAs actively egged him on to commit fraud. I presume that when you refer to "naive-consequentialist reasoning", you are referring to what happened within FTX (in addition to my own reasoning of course!), rather than to something that someone in the EA movement (other than SBF) did?

I don't know the details, but I would expect that the donations we received from him were spent very effectively and had a positive impact on many people. (If that is not the case, that should be investigated, I'd agree!) So it is highly likely that the impact of the EA movement was to make the net impact of SBF's fraud significantly less negative overall.

Of course, I may be wrong. I am interested to hear any specific ways in which people believe that the EA movement might be responsible for the damage SBF caused to investors, or to anyone other than the EA movement itself. But my reading of this is that SBF caused damage to EA, and not the other way round. And there was very little that EA could have done to prevent that damage other than somehow realising, unlike plenty of very experienced investors, that he

I think in any world, including ones where EA leadership is dropping the ball or is likely to cause more future harm like FTX, it would be very surprising if they individually had not updated substantially. 

As an extreme illustrative example, really just intended to get the intuition across, imagine that some substantial fraction of EA leaders are involved in large scale fraud and continue to plan to do so (which to be clear, I don't have any evidence of), then of course the individuals would update a lot on FTX, but probably on the dimensions of "her... (read more)

8
David Mathers
20d
'Naive consequentialist plans also seem to have increased since FTX, mostly as a result of shorter AI timelines and much more involvement of EA in the policy space.' This gives me the same feeling as Rebecca's original post: that you have specific information about very bad stuff that you are (for good or bad reasons) not sharing. 
JWS
21d

What are you referring to when you say "Naive consequentialism"?[1] Because I'm not sure that it's what others reading might take it to mean?

Like you seem critical of the current plan to sell Wytham Abbey, but I think many critics view the original purchase of it as an act of naive consequentialism that ignored the side effects that it's had, such as reinforcing negative views of EA etc. Can both the purchase and the sale be a case of NC? Are they the same kind of thing?

So I'm not sure the 3 respondents from the MCF and you have the same thing in mind... (read more)

There are no whistleblower systems in place at any major EA orgs as far as I know

I’ve heard this claim repeatedly, but it’s not true that EA orgs have no whistleblower systems. 

I looked into this as part of this project on reforms at EA organizations: Resource on whistleblowing and other ways of escalating concerns

  • Many organizations in EA have whistleblower policies, some of which are public in their bylaws (for example, GiveWell and ACE publish their whistleblower policies among other policies). EV US and EV UK have whistleblower
... (read more)

"Better vet risks from funders/leaders, have lower tolerance for bad behavior, and remove people responsible for the crisis from leadership roles."

I don't think any such removals have happened, and my sense is tolerance of the type of bad behavior that seems to me most responsible for FTX has gone up (in particular, heavy optimization for optics and a large tolerance for divergence between public narratives and what is actually going on behind the scenes).

I'd like to single out this part of your comment for extra discussion. On the Sam Harris podcast, Will M... (read more)

My sense is the EV UK board mattered a good amount as well during this period, and Claire Zabel was also on the board during the relevant period (I do not know which board members Becca was thinking about in the above post, if any).

Is your sense that if the cost-effectiveness estimate had come back positive, but not overwhelmingly positive (let's say like a 70th percentile OP grant-dollar in the last year), that this would have flipped the decision? 

Given that not vetoing this grant made Alexander's top list of decisions he regrets, mostly because of its negative optics, I would be surprised if the cost-effectiveness estimate was actually a crux here.

I don't really know (and have had almost no interactions with Alexander). But it would be unsurprising to me if that would have flipped the decision.

Basically: Alexander's views seem compatible with a real felt regret after the controversy about it a year ago, and this being the first time one can talk about it publicly without undermining a grantee. Since the PR costs have by this stage largely(?) been paid, it seems quite plausible that if the cost-effectiveness analysis had come back mildly positive he'd have continued to feel regret about not having averted the past issue, while now thinking it was right to continue support.

As a critic of many institutions and organizations in EA, I agree with the above dynamic and would like people to be less nitpicky about this kind of thing (and I tried to live up to that virtue by publishing my own quite rough grant evaluations in my old Long Term Future Fund writeups)

What I mean by collaboration is "is willing to share any information with them and allow staff to speak freely". The key obstacle I have faced in trying to do investigations is that nobody is willing to talk or say anything that goes on the record by organizational policy, which of course makes this kind of thing very hard to pull off. 

I also think it would help a lot if CEA were to lend some credibility to the investigation. People don't want to repeat the same thing hundreds of times, and it would IMO be good for CEA/EV to put some social capital on the line to encourage people to talk to the investigators.

Yeah, that's fair. I'll edit it in. 

One final issue is who should sponsor an investigation of "what role EA played in FTX coming into existence"; given the linkages between EVF, EVF insiders, and SBF, EVF would be somewhere in the vicinity of last place on my ideal preference list.

I think CEA or EV collaborating with an external but still within-EA trusted organization seems like the best choice to me here. Hiring someone who is broadly known to be independent (like, IDK, you could choose someone from Rethink, or Tyler Cowen, or someone else in that kind of reference class), seems like a good idea.

2
Jason
24d
[Edit: With "collaboration" as defined in Habryka's response below, my question dissolves.] Is there a reason to prefer CEA or EV collaborating with an investigator versus someone (or several someones) funding an investigator(s), taking a back seat once the proposal is accepted and funded, and deferring to the investigator(s) what to publish? [I may be reading too much into "collaborating" in your first sentence.]

I didn't intend the above to be a straightforward accusation of lying. My sense is that different people at CEA disagree here about how useful this investigation was for reform purposes (which I think is evidence of the claim being false, but not strong evidence of deception).

If I had to make a bet, I would say majority opinion at CEA on the question of "did this investigation obstruct or assist internal reform efforts at CEA (as opposed to e.g. address legal liability and reputational risks independent from their best guess about what appropriate reform i... (read more)

Yeah, I think you are pointing towards something real here. 

Like, I do think a thing that drove my reaction to this was a perspective in which it was obvious that most people in EA didn't literally actively participate in the FTX fraud. I have encountered very extreme and obviously wrong opinions about this in the public (the comment section of the WaPo article provides many examples of this), and there is some value in engaging with that. 

But I do think that is engaging with a position that is extremely shallow, and the mechanism of it seems lik... (read more)
