JWS

3076 karma · Joined Jan 2023

Bio

Kinda pro-pluralist, kinda anti-Bay EA.

I have come here to extend the principle of charity to bad criticisms of EA and kick ass. And I'm all out of charity.

(my opinions are fully my own, and do not represent the views of any close associates or the company I work for)

Posts: 6

Sequences: 1

Criticism of EA Criticism

Comments: 259

JWS · 3d

I don't understand your lack of understanding. My point is that you're acting like a right arse.

When people make claims, we expect there to be some justification proportional to the claims made. You made hostile claims that didn't follow on from the prior discussion,[1] along with what were in my view nasty and personal insinuations, and you didn't have anything to back them up.

I don't understand how you wouldn't think that Sean would be hurt by it.[2] So to me, you behaved like an arse, knowing that you'd hurt someone, didn't justify it, got called out, and are now complaining.

So I don't really have much interest in continuing this discussion for now, or much of an opinion at the moment of your behaviour or your 'integrity'.

  1. ^

    Like nobody was discussing CSER/CFI or Sean directly until you came in with it

  2. ^

    Even if you did think it was justified

JWS · 3d

Sorry Oli, but what is up with this (and your following) comment?

From what I've read from you,[1] you seem to value what you call "integrity" almost as a deontological good above all others. And this has gained you many admirers. But to my mind, high-integrity actors don't make the claims you've made in both of these comments without bringing examples or evidence. Maybe you're reacting to Sean's use of 'garden variety incompetence', which you think is unfair to Bostrom's attempts to toe the fine line between independence and managing university politics, but still, I feel you could have done better here.

To make my case:

  • When you talk about "other organizations... become a hollow shell of political correctness and vapid ideas", you have to be referring to CSER & Leverhulme here, right? It's the only context that makes sense.
    • If not, I feel like that's very misleadingly phrased.
    • But if it is, then calling those organisations 'hollow shells' of 'vapid ideas' is really rude, and if you're going to go there you should at least have the proof to back it up.
  • Now that just might be you having very different politics from CSER & Leverhulme people. But then you say "he [Bostrom] didn't compromise on the integrity of the institution he was building", which again I read as you directly contrasting against CSER & Leverhulme - or even Sean in person.
    • Is this true? Surely organisations can have different politics, or even worse ideas, without compromising on integrity?
    • If they did compromise on integrity, it feels like you should share what those compromises were.
    • If it is directed at Sean personally, that feels very nasty. Making assertions about someone's integrity without solid proof isn't just speculation, it's harmful to the person and also poor 'epistemic hygiene' for the community at large.
  • You say "the track record here speaks quite badly to Sean's allocation of responsibility by my lights". But I don't know what 'track record' your speaking about here. Is it at FHI? CSER & Leverhulme? Sean himself?
  • Finally, this trio of claims in your second comment really rubbed me[2] the wrong way. You say that you think:
    • "CSER and Leverhulme, which I think are institutions that have overall caused more harm than good and I wish didn't exist"
      • This is a huge claim imo. More harm than good? So much so that you wish it didn't exist? With literally no evidence apart from it being your opinion???
    • "Sean thought were obvious choices were things that would have ultimately had long-term bad consequences"
      • I assume that this is about relationship management with the university, perhaps? But I don't know what to make of it, because you don't say what these 'obvious choices' are, or why you think they're so likely to have bad consequences.
    • "I also wouldn't be surprised if Sean's takes were ultimately responsible for a good chunk of associated pressure and attacks on people's intellectual integrity"
      • This might be the worst one. Why are Sean's takes responsible? What were the attacks on people's integrity? Was this something Sean did on purpose?
      • I don't know what history you're referring to here, and the language used is accusatory and hostile. It feels really bad form to write it without clarifying what you're referring to for people (like me) who don't know what context you're talking about.

Maybe from your perspective you feel like you're just floating questions here and sharing your personal perspective, but given the content of what you've said I think it would have been better if you had either brought more examples or been less hostile.

  1. ^

    And I feel like I've read quite a bit: here, on LW, and on your Twitter

  2. ^

    And, given the votes, a lot of other readers too, including some who may have agreed with your first comment

JWS · 19d

(I'm going to wrap up a few disparate threads together here, and this will probably be my last comment on this post, modulo a reply for clarification's sake. Happy to discuss further with you Rob, or anyone, via DMs/Forum Dialogue/whatever.)

(to Rob & Oli - there is a lot of inferential distance between us and that's ok, the world is wide enough to handle that! I don't mean to come off as rude/hostile and apologies if I did get the tone wrong)

Thanks for the update Rob, I appreciate you tying this information together in a single place. And yet... I can't help but still feel some of the frustrations of my original comment. Why does this person not want to share their thoughts publicly? Is it because they don't like the EA Forum? Because they're scared of retaliation? It feels like this would be useful and important information for the community to know.

I'm also not sure what to make of Habryka's response here and elsewhere. I think there is a lot of inferential distance between myself and Oli, but it does seem to me to come off as a "social experiment in radical honesty and perfect transparency", which is a vibe I often get from the Lightcone-adjacent world. And like, with all due respect, I'm not really interested in that whole scene. I'm more interested in questions like:

  1. Were any senior EAs directly involved in the criminal actions at FTX/Alameda?
  2. What warnings were given about SBF to senior EAs before the FTX blowup, particularly around the 2018 Alameda blowup, as recounted here?
    1. If these warnings were ignored, what prevented people from deducing that SBF was a bad actor?[1]
    2. Critically, if these warnings were accepted as true, who decided to keep this a secret, to suppress it from the community at large, and not to act on it?
  3. Why did SBF end up with such a dangerous set of beliefs about the world? (I think they're best described as 'risky beneficentrism' - see my comment here and Ryan's original post here)
  4. Why have the results of these investigations, or some legally-cleared version, not been shared with the community at large?
  5. Do senior EAs have any plan to respond to the hit to EA-morale as a result of FTX and the aftermath, along with the intensely negative social reaction to EA, apart from 'quietly hope it goes away'?

Writing it down, 2.b. strikes me as what I mean by 'naive consequentialism' if it happened. People had information that SBF was a bad character who had done harm, but calculated (or assumed) that he'd do more good being part of/tied to EA than otherwise. The kind of signalling you described as naive consequentialism doesn't really seem pertinent to me here, as interesting as the philosophical discussion can be.

tl;dr - I think there's a difference between a discussion about what norms EA 'should' have, or that senior EAs should act by, especially in the post-FTX and influencing-AI-policy world, and the 'minimal viable information-sharing' that can help the community heal, hold people to account, and help make the world a better place. It does feel like the lack of communication is harming the latter, and I applaud you/Oli for pushing for it, but sometimes I wish you would both be less vague too. Some of us don't have the EA history and context that you both do!

epilogue: I hope Rebecca is doing well. But this post & all the comments make me feel more pessimistic about the state of EA (as a set of institutions/organisations, not ideas) post-FTX. Wounds might have faded, but they haven't healed 😞

  1. ^

    Not that people should have guessed the scale of his wrongdoing ex ante, but was there enough to start downplaying him and disassociating from him?

JWS · 19d

My guess is there's something ideological or emotional behind these kind of EA critiques,

Something I've come across while looking into/responding to EA criticism over the last few months is that a lot of EA critics seem to absolutely hate EA,[1] with a burning zeal. And I'm not really sure why, or what to do with it - it feels like an underexplored question/phenomenon for sure.

 

  1. ^

    Or at least, what they perceive EA/EAs to be

JWS · 20d

What are you referring to when you say "Naive consequentialism"?[1] Because I'm not sure that it's what others reading this might take it to mean.

Like you seem critical of the current plan to sell Wytham Abbey, but I think many critics view the original purchase of it as an act of naive consequentialism that ignored the side effects that it's had, such as reinforcing negative views of EA etc. Can both the purchase and the sale be a case of NC? Are they the same kind of thing?

So I'm not sure the 3 respondents from the MCF and you have the same thing in mind when you talk about naive consequentialism, and I'm not quite sure I do either.

  1. ^

    Both here and in this other example, for instance

My deductions were here; there are two main candidates given the information available (if it is reliable).

JWS · 21d

After listening, here are my thoughts on the podcast (times refer roughly to YouTube timestamps):[1]

Recap[2]

  • Neither Sam nor Will thinks less of EA principles because of the FTX collapse. Sam says so in the intro (4:40), and towards the end mentions how unhappy he was to see people "dancing on the grave of EA" (59:17), likening the simple rejection of EA because of SBF to the rejection of vegetarianism because of Hitler's diet (57:10). Will also agrees that FTX hasn't made him reconsider EA principles (56:24), and says that the FTX debacle happened in spite of EA, not because of it (54:52).[3]
    • On whether the beliefs of EA are responsible, Will says that longtermism is not the reason, since one could justify maximalist and extreme behaviours for more near-termist concerns as well (1:07:32). He also thinks that EA communicators consistently said that the ends do not justify the means (54:58). 
  • Will is concerned that, in attempting to get into the mind of SBF and explain his behaviour, there's a danger of looking too exculpatory of him, and he wants to make it clear that the collapse of FTX caused huge damage (14:15). It's clear in the following section that he's read a lot of the evidence submitted to the case, including Twitter messages from people who lost dearly-needed savings in FTX.
    • However, some of the podcast does come across that way. He recounts that his impression of SBF since 2012 was that of "a very thoughtful, incredibly morally motivated person" (37:29), and his reflections make him seem quite credulous of the explanations he was given (40:23).[4] He notices that his reflections are sounding defensive of SBF about halfway through the podcast (42:01).
    • The Sam Harris subreddit has had an overwhelmingly negative reaction to the podcast. Many people there are basically accusing Sam and Will of either continuing to be duped, or of running PR for SBF. Of course this is a self-selected, overly-online community, but I think it underscores the anger and incredulity over what happened here, and how careful and tactful the messaging needs to be about this.
  • Sam offers two major theories as to why SBF did what he did (11:52):
    1. It was a conscious fraud and long-con from the beginning, where SBF used EA as a cover to gain influence/power/good-will
    2. Due to SBF's beliefs about ethics and his approach to risk/probability, he was placing a series of bets that were likely to fail eventually, and that ultimately blew up in his face.
  • Will actually rejects the above (18:17): he doesn't think SBF's behaviour can be framed as a calculated decision because it simply doesn't make sense (27:31), and the 'long-con' explanation can't explain why FTX wanted regulation and press attention, or even why it was associated with EA (24:10). Instead, he believes the main culprit was SBF's Hubris (28:11). He also notes SBF's unusual risk tolerance (23:04) and suggests the cause was not a risk calculation gone wrong but an overall attitude to risk (29:47).
    • Will seemingly derives a lot of his explanations from the work of Eugene Soltes, a Harvard Business School professor, especially his book Why They Do It, which he references throughout the podcast
    • While Will says he rejects both of Sam's original propositions (theory 1 and 2), sometimes he shades into backing theory 2, such as when he says that SBF really did believe in the ideas of EA (49:46)
  • While Sam is more agnostic about what exactly was driving SBF, both mention how seriously SBF took ideas throughout. Will notes that SBF found the idea of earn-to-give compelling well before FTX existed (8:16), and Sam references, as a red flag, SBF's interview with Tyler Cowen where he said he'd keep accepting the St Petersburg paradox bet forever (21:22). He raises SBF's neuro-atypical background as another potential red flag (35:31), but Will didn't seem to buy it and the conversation moves on from that pretty quickly.
  • The podcast ends with a discussion of the effects of the whole saga on EA and what changes there have been. Will says that there has been huge harm to EA and that many now think ill of it (54:23).
    • Will thinks that EA should have been, and should be, more decentralised (58:24). He also had legal guidance (or instruction?) telling him not to share information or his perspective during the internal investigation (58:20). The combination of these meant that he wasn't able to provide the guidance to the EA community that he wanted to during the crisis (58:47).
    • Will often refers to GWWC and the 9000+ pledgers as normal people living normal lives (1:03:23), and says that EAs are "pretty normal people" who are willing to put their money and time where their mouth is (1:05:31).
    • Will mentions the FTX collapse was incredibly hard on him, that it was the hardest year and a half of his life, and that he nearly lost motivation for the EA project altogether (1:16:44).
    • He thinks that the strongest critique of longtermism he's come across since WWOTF was published isn't the Pascal's mugging type stuff, but that AI risks might be much more of a near-term threat than a long-term one (1:10:44), that AI x-risk concerns have been increasingly vindicated in recent years (1:11:38), and that intelligence explosion arguments of the I. J. Good variety are holding up and deserve consideration (1:14:57).

 

Personal Thoughts

So there's not an actual deep dive into what happened with SBF and FTX, and how much Will or other figures in EA actually knew. Perhaps the podcast was trying to cover too much ground in 80 minutes, or perhaps Sam didn't want to come off as too hostile a host? I feel like both are talking about the whole thing at an oddly abstract level, without referencing the evidence that's come out in court.

While I also agree with both that EA principles are still good, and that most EAs are doing good in the world, there's clearly a connection between EA - or at least a bastardised, naïvely maximalist view of it - and SBF. The prosecution and the judge seemed to take the view that SBF was at high risk of doing the same or a similar thing again in the future, and that he has not shown remorse. This makes sense if SBF was acting the way he did because he thought he was doing the right thing, and the fact that it was an attitude rather than a 'rational calculation' doesn't make it less driven by ideas.

So I think that's where I've ended up on this (I'm not an expert on the financials, on what precise laws FTX broke, on how their attempted scheme operated, or on how long they were brazenly lying for. Feels like those with an outside view are pretty damn negative on SBF). I think trying to fix the name 'EA' or 'not EA' to what SBF and the FTX team believed is pretty unhelpful. I think Ellison and SBF had a very naïve, maximalist view of the world and their actions. They believed they had special ability and knowledge to act in the world, and to break existing rules and norms in order to make the world better as they saw it, even if this incurred high risks, so long as their expectation was that it would work out in EV terms. An additional error here, and perhaps where the 'Hubris' theory does play in, is that there was no mechanism to correct these beliefs. Even after the whole collapse, and a 25-year sentence, it still seems to me that SBF thinks he made the 'right' call and got unlucky.

My takeaway is that this cluster of beliefs[5] is dangerous and the EA community should develop an immune system to reject these ideas. Ryan Carey refers to this as 'risky beneficentrism', and I think part of 'Third-Wave' EA should be about rejecting this cluster of ideas, making this publicly known, and disassociating EA from the individuals, leaders, or organisations who still hold on to it in the aftermath of this entire debacle.

  1. ^

    For clarity: Sam refers to Sam Harris, and SBF refers to Sam Bankman-Fried

  2. ^

    Not necessarily in order; I've tried to group similar points together

  3. ^

    I think this makes some sense if you view EA as a set of ideas/principles, less so if you view EA as a set of people and organisations

  4. ^

    During this section especially, I kinda wanted to shout at my podcast when Will asked rhetorically "was he lying to me that whole time?" The answer is yes, Will, it seems like they were. The code snippets from Nishad Singh and Gary Wang that the prosecution shared are pretty damning, for example.

  5. ^

    See the following link in the text to Ryan Carey's post. But the main dangerous ideas to my mind are:

    1) Naïve consequentialism

    2) The ability and desire to rapidly change the world

    3) A rejection of existing norms and common-sense morality

    4) No uncertainty about either the values above or the empirical consequences of the actions

    5) Most importantly, no feedback or error correction mechanism for any of the above.

I guess I kinda want to say fiat justitia ruat caelum here 🤷

JWS · 21d

edit: As always, disagree/downvoters, it would be good to hear why you disagree, as I'm not sure what I've written below merits a disagree, and especially not a downvote.

Thanks for sharing your thoughts Rebecca.

I do find myself wishing that some of these discussions from the core/leadership of EA[1] were less vague. I noticed this with Habryka's reaction to the recent EA column in the Washington Post, where he mentions 'people he's talked to at CEA'. It would be good to know who those people at CEA are.

I accept some people are told things informally, and in confidence etc., but it would seem to be useful to have as much as is possible/reasonable in the public domain, especially since these discussions/decisions seem to have such a large impact on the rest of the community in terms of reputational impact, organisational structure and hiring, grantmaking priorities and decisions etc.

For example, I again respect that you said your full thoughts would be 'highly costly' to share, but it'd be enlightening to know which members of the EV board you disagreed with so much that you felt you had to resign. If you can't share that, then knowing why you can't share it would help. Or if not that, knowing what the concrete issues were. If you allege that there were "extensive and significant mistakes made which have not been addressed", and that these mistakes "make me very concerned about the amount of harm EA might do in the future", then I really want to know what these mistakes were concretely and who made/is making them. I think the vagueness is another sign that EA's healing process post-FTX still has a way to go.[2]

Above all though, I hope you're doing well, and would be happy to have an individual conversation if you think that would be useful, or if you aren't willing to share things on the Forum.

  1. ^

    An infamously slippery term; here I'm roughly referring to EV, CEA, OpenPhil, the Meta Coordination Forum attendees, etc.

  2. ^

    Not to imply the vagueness is a fault of yours. It's probably attributable to people's concerns of retaliation, legal constraints/NDAs, unequal power structures etc.
