Kinda pro-pluralist, kinda anti-Bay EA.
I have come here to extend the principle of charity to bad criticisms of EA and kick ass. And I'm all out of charity.
(my opinions are fully my own, and do not represent the views of any close associates or the company I work for)
Sorry Oli, but what is up with this (and your following) comment?
From what I've read, you[1] seem to value what you call "integrity" almost as a deontological good above all others. And this has gained you many admirers. But to my mind, high-integrity actors don't make the claims you've made in both of these comments without bringing examples or evidence. Maybe you're reacting to Sean's use of 'garden variety incompetence', which you think is unfair to Bostrom's attempts to toe the fine line between independence and managing university politics, but still, I feel you could have done better here.
To make my case:
Maybe from your perspective you feel like you're just floating questions here and sharing your personal perspective, but given the content of what you've said I think it would have been better if you had either brought more examples or been less hostile.
(I'm going to wrap up a few disparate threads together here, and this will probably be my last comment on this post, ~modulo a reply for clarification's sake. Happy to discuss further with you Rob or anyone via DMs/Forum Dialogue/whatever)
(to Rob & Oli - there is a lot of inferential distance between us and that's ok, the world is wide enough to handle that! I don't mean to come off as rude/hostile and apologies if I did get the tone wrong)
Thanks for the update Rob, I appreciate you tying this information together in a single place. And yet... I can't help but still feel some of the frustrations of my original comment. Why does this person not want to share their thoughts publicly? Is it because they don't like the EA Forum? Because they're scared of retaliation? It feels like this would be useful and important information for the community to know.
I'm also not sure what to make of Habryka's response here and elsewhere. I think there is a lot of inferential distance between myself and Oli, but it does seem to me to come off as a "social experiment in radical honesty and perfect transparency", which is a vibe I often get from the Lightcone-adjacent world. And like, with all due respect, I'm not really interested in that whole scene. I'm more interested in questions like:
Writing it down, 2.b. strikes me as what I mean by 'naive consequentialism', if it happened. People had information that SBF was a bad character who had done harm, but calculated (or assumed) that he'd do more good being part of/tied to EA than otherwise. The kind of signalling you described as naive consequentialism doesn't really seem pertinent to me here, as interesting as the philosophical discussion can be.
tl;dr - I think a discussion about what norms EA 'should' have, or that senior EAs should act by, especially in the post-FTX and influencing-AI-policy world, is different from the 'minimal viable information-sharing' that can help the community heal, hold people to account, and help make the world a better place. It does feel like the lack of communication is harming that, and I applaud you/Oli pushing for it, but sometimes I wish you would both also be less vague. Some of us don't have the EA history and context that you both do!
epilogue: I hope Rebecca is doing well. But this post & all the comments make me feel more pessimistic about the state of EA (as a set of institutions/organisations, not ideas) post-FTX. Wounds might have faded, but they haven't healed 😞
Not that people should have guessed the scale of his wrongdoing ex-ante, but was there enough to justify starting to downplay and disassociate?
People, the downvote button is not a disagree button. That's not really what it should be used for.
My guess is there's something ideological or emotional behind these kinds of EA critiques.
Something I've come across while looking into/responding to EA criticism over the last few months is that a lot of EA critics seem to absolutely hate EA[1], with a burning zeal. And I'm not really sure why, or what to do with it - it feels like an underexplored question/phenomenon for sure.
Or at least, what they perceive EA/EAs to be
What are you referring to when you say "naive consequentialism"?[1] Because I'm not sure it's what others reading this might take it to mean.
Like, you seem critical of the current plan to sell Wytham Abbey, but I think many critics view the original purchase as an act of naive consequentialism that ignored the side effects it would have, such as reinforcing negative views of EA. Can both the purchase and the sale be cases of NC? Are they the same kind of thing?
So I'm not sure the 3 respondents from the MCF and you have the same thing in mind when you talk about naive consequentialism, and I'm not quite sure what I have in mind either.
Both here and in this other example, for instance
After listening, here are my thoughts on the podcast (times refer roughly to YouTube timestamps):[1]
Recap[2]
Personal Thoughts
So there's no actual deep-dive into what happened with SBF and FTX, and how much Will or figures in EA actually knew. Perhaps the podcast was trying to cover too much ground in 80 minutes, or perhaps Sam didn't want to come off as too hostile a host? I feel like both of them talk about the whole thing at an oddly abstract level, without referencing the evidence that came out in court.
While I also agree with both that EA principles are still good, and that most EAs are doing good in the world, there's clearly a connection between EA - or at least a bastardised, naïvely maximalist view of it - and SBF. The prosecution and the judge seemed to take the view that SBF was at high risk of doing the same or a similar thing again in the future, and that he has not shown remorse. This makes sense if Sam was acting the way he did because he thought he was doing the right thing, and the fact that it was an attitude rather than a 'rational calculation' doesn't make it any less driven by ideas.
So I think that's where I've ended up on this (I'm not an expert on the financials, on what precise laws FTX broke, on how their attempted scheme operated, or on how long they were brazenly lying for. It feels like those with an outside view are pretty damn negative on SBF). I think trying to attach the label 'EA' or 'not EA' to what SBF and the FTX team believed is pretty unhelpful. I think Ellison and SBF had a very naïve, maximalist view of the world and their actions. They believed they had special ability and knowledge to act in the world, and to break existing rules and norms in order to make the world better as they saw it, even if this incurred high risks, so long as their expectation was that it would work out in EV terms. An additional error here, and perhaps where the 'Hubris' theory does come into play, is that there was no error-correction mechanism for these beliefs. Even after the whole collapse, and a 25-year sentence, it still seems to me that SBF thinks he made the 'right' call and got unlucky.
My takeaway is that this cluster of beliefs[5] is dangerous and the EA community should develop an immune system to reject these ideas. Ryan Carey refers to this as 'risky beneficentrism', and I think part of 'Third-Wave' EA should be about rejecting this cluster of ideas, making this publicly known, and disassociating EA from the individuals, leaders, or organisations who still hold on to it in the aftermath of this entire debacle.
For clarity, Sam refers to Sam Harris, and SBF refers to Sam Bankman-Fried
Not necessarily in order, I've tried to group similar points together
I think this makes some sense if you view EA as a set of ideas/principles, less so if you view EA as a set of people and organisations
During this section especially, I kinda wanted to shout at my podcast when Will asked rhetorically "was he lying to me that whole time?" The answer is yes, Will; it seems like they were. The code snippets from Nishad Singh and Gary Wang that the prosecution shared are pretty damning, for example.
See the link in the text to Ryan Carey's post. But the main dangerous ideas, to my mind, are:
1) Naïve consequentialism
2) The ability and desire to rapidly change the world
3) A rejection of existing norms and common-sense morality
4) No uncertainty about either the values above or the empirical consequences of their actions
5) Most importantly, no feedback or error correction mechanism for any of the above.
edit: As always, disagree/downvoters, it would be good to hear why you disagree, as I'm not sure what I've written below merits a disagree-vote, and especially not a downvote.
Thanks for sharing your thoughts Rebecca.
I do find myself wishing that some of these discussions from the core/leadership of EA[1] were less vague. I noticed this with Habryka's reaction to the recent EA column in the Washington Post, where he mentions 'people he's talked to at CEA'. It would be good to know who those people at CEA are.
I accept that some people are told things informally, and in confidence, etc., but it would seem useful to have as much as is possible/reasonable in the public domain, especially since these discussions/decisions seem to have such a large impact on the rest of the community in terms of reputational impact, organisational structure and hiring, grantmaking priorities and decisions, etc.
For example, I again respect that you said your full thoughts would be 'highly costly' to share, but it'd be enlightening to know which members of the EV board you disagreed with so much that you felt you had to resign. If you can't share that, then knowing why you can't share it. Or if not that, then knowing what the concrete issues were. If you allege that there were "extensive and significant mistakes made which have not been addressed" and that these mistakes "make me very concerned about the amount of harm EA might do in the future", then I really want to know what these mistakes were concretely, and who made/is making them. I think the vagueness is another sign that EA's healing process post-FTX still has a way to go.[2]
Above all though, I hope you're doing well, and would be happy to have an individual conversation if you think that would be useful, or if you aren't willing to share things on the Forum.
I don't understand your lack of understanding. My point is that you're acting like a right arse.
When people make claims, we expect there to be some justification proportional to the claims made. You made hostile claims that didn't follow on from the prior discussion,[1] and made what were, in my view, nasty and personal insinuations as well, and didn't have anything to back them up.
I don't understand how you wouldn't think that Sean would be hurt by it.[2] So to me, you behaved like an arse, knowing that you'd hurt someone, didn't justify it, got called out, and are now complaining.
So I don't really have much interest in continuing this discussion for now, or much of an opinion at the moment about your behaviour or your 'integrity'.
Like nobody was discussing CSER/CFI or Sean directly until you came in with it
Even if you did think it was justified