Lukas_Gloor

5383 karma · Joined Jan 2015

Sequences: 1 (Moral Anti-Realism) · Comments: 470

    I'm excited about this!

    One question: I notice a bit of a tension between the EA justification of this project ("improving EA productivity") and the common EA mental health issues around feeling pressure to be productive. I know CBT is more about providing thinking tools than giving concrete advice on what to do/try, but might there be a risk that people who take part will feel like they are expected to show a productivity increase? Would you still recommend that EA clients take time off generously if they are having burnout symptoms? I'm curious to hear your thoughts on this.

    By the way, this discussion (mostly my initial comment and what it's in reaction to; not so much the specifics about CEA history) reminded me of this comment about the difficulty of discussing issues around culture and desired norms. It seems like maybe we'd be better off discussing what each of us thinks would be the best steps forward to improve EA culture, or finding a way to promote some kind of EA-relevant message (EA itself, the importance of AI alignment, etc.) and doing movement building around that, so it isn't at risk of backfiring. 

    Interesting; I didn't remember this about Tara.

    Two data points in the other direction:

    • A few months (maybe up to 9 months, but could be as little as 1 month, I don't remember the timing) before Larissa had to leave CEA, a friend and I talked to a competent-seeming CEA staff member who was about to leave the org (or had recently left – I don't remember the details) because the org seemed like a mess and had bad leadership. I'm not sure if Leverage was mentioned – I could imagine that it was, but I don't remember details, and my most salient memory is that I thought of it as "not good leadership for CEA." My friend and I encouraged them to stay and speak up to try to change leadership, but the person had had enough for the time being or had some other reason to leave (again, I don't remember the details). Anyway, this person left CEA at the time without a plan to voice their view that the org was in a bad state. I don't know whether they gave an exit interview, deliberately sought out trustees, talked to friends, or said nothing at all – I didn't stay in touch. However, I do remember that my friend and I discussed whether we should at some point get back in touch with this former CEA staff person and encourage them again to find out whether more former colleagues were dissatisfied and whether we could cause a wave of change at CEA. We were so far removed and had so few contacts among people who actually worked there that it would've been a bit silly for us to get involved. And I'm not saying we would've done it – it's easy to talk about stuff like that and then usually you don't do anything. Still, I feel like this anecdote suggests that there are sometimes more people interested and invested in good community outcomes than one might think, and multiple pathways to beneficial leadership change (it's very possible this former staff member had nothing to do with the eventual chain of causes that led to the leadership change, which would mean that multiple groups of people were independently expressing worried sentiments about CEA at that time).
    • At one point somewhere between 2017 and 2018, someone influential in EA encouraged me to share more about specific things that happened in the EA orgs I worked at, because they were sometimes talking to other people who were "also interested in the health of EA orgs / health of the EA community." (To avoid confusion, this was not the community health team.) This suggests that people somewhat systematically keep an eye on things, and even if CEA were to get temporarily taken over by a Silicon Valley cultish community, probably someone would try to do something about it eventually. (Even if it's just writing an EA Forum post to create common knowledge that a specific org has been taken over and is no longer similar to what it was when it was founded. We saw that posts did eventually get written about Leverage, for instance, and the main reasons it didn't happen earlier are probably that many people thought "oh, everyone knows already" and that, like anywhere else, few people actually take the time to do community-useful small bits of work when you can just wait for someone else to do it.) 
       

    Yeah, I should've phrased (3) in a way that's more likely to pass someone like habryka's Ideological Turing Test.

    Basically, I think if EAs were even just a little worse than typical people in positions of power (on the dimension of integrity), that would be awful news! We really want them to be significantly better.

    I think EAs are markedly more likely to be fanatical naive consequentialists, which can be one form of "lacking in integrity" and is the main thing* I'd worry about in terms of me maybe being wrong. To combat that, you need to be above average in integrity on other dimensions.

    *Ideology-induced fanaticism is my main concern, but I can think of other concerns as well. EA probably also attracts communal narcissists to some degree, or people who like the thought that they are special and can have lots of impact. Also, according to some studies, utilitarianism correlates with psychopathy at least in trolley problem examples. However, EA very much also (and more strongly?) attracts people who are unusually caring and compassionate. It also motivates people who don't care about power to seek it, which is an effect with strong potential for making things better.

    That's indeed shocking, and now that you mention it, I also remember the Pareto Fellowship Leverage takeover attempt. Maybe I'm too relaxed about this, but it feels to me like there's no nearby possible world where this situation would have kept going? Pretty much everyone I talked to in EA made remarks about how Leverage "is a cult," and the Leverage person became CEA's CEO not because of a long CEO search process but because the previous CEO left abruptly and they had few immediate staff-internal options. The CEA board eventually intervened and installed Max Dalton, who was a good leader. Those events happened long ago, and in my view they tentatively(?) suggest that the EA community had a good-enough self-correction mechanism, so that schemers don't tend to stay in central positions of power for long. I concede that we can count these as near misses and maybe even as evidence that there are (often successfully fended off) tensions with EA culture and who it attracts, but I'm not yet on board with seeing these data points as "evidence for problems with EA as-it-is now" rather than "the sort of thing that happens both inside and outside of EA as soon as you're trying to have more influence."

    10%.

    Worth noting that it's not the highest of bars.

    Hm. Okay, I buy that argument. But we can still ask whether the examples are representative enough to establish a concerning pattern. I don't feel like they are. Leverage and Nonlinear are very peripheral to EA, and they mostly (if the allegations are true) harmed EAs rather than people outside the movement. CFAR feels more central, but the cultishness there was probably more about failure modes of the Bay Area rationality community than anything to do with "EA culture."

    (I can think of other examples of EA cultishness and fanaticism tendencies, including from personal experiences, but I also feel like things turned out fine as EA professionalized itself, for many of these instances anyway, so they can even be interpreted positively as a reason to be less concerned now.)

    I guess you could argue that FTX was such a blatant and outsized negative example that you don't need a lot of other examples to establish the concerning pattern. That's fair. 

    But then what is the precise update we should have made from FTX? Let's compare three possible takeaways:
    (1) There's nothing concerning, per se, about "EA culture," apart from the fact that EAs were insufficiently vigilant of bad actors. 
    (2) EAs were insufficiently vigilant of bad actors and "EA culture" kind of exacerbates the damage that bad actors can do, even though "EA culture" is fine when there isn't a cult-leader-type bad actor in the lead.
    (3) Due to "EA culture," EA now contains way too many power-hungry schemers that lack integrity, and it's a permeating problem rather than something you only find in peripheral groups when they have shady leadership.

    I'm firmly in (2) but not in (3).

    I'm not sure if you're concerned that (3) is the case, or whether you think it's "just" (2) but that (2) is worrying enough by itself and hard to fix. For my part, I think (2) is among the biggest problems with EA, but I'm overall still optimistic about EA's potential. (I mean, "optimistic" relative to the background of how doomed I think we are for other reasons.) (Though I'm also open to re-branding and reform efforts centered around splitting up into professionalized subcommunities and de-emphasizing the EA umbrella.)

    Why I think (2) instead of (3): it mostly comes down to my experience and gut-level impressions from talking to staff at central EA orgs and reading their writings/opinions and so on. People seem genuinely nice, honest, and reasonable, even though they are strongly committed to a cause. FTX was not the sort of update that would overpower my first-order impressions here, which were based on many interactions and lots of EA experience. (FWIW, it would have been a big negative update for me if the recent OpenAI board drama had been instigated by some kind of backdoor plan about merging with Anthropic, but to my knowledge, those were completely unsubstantiated speculations. After learning more about what went down, they look even less plausible now than they did at the start.)

    Hm, very good point! I now think that could be his most immediate motivation. It would feel sad to build something and then see it implode (and also to see the team left in limbo). On reflection, that makes me think maybe Sam doesn't necessarily look that bad here. I'm sure Microsoft tried to use their leverage to push for changes, and the OpenAI board stood its ground, so it couldn't have been easy to find a solution that didn't involve the company falling apart over the disagreements.

    [Edit: I no longer feel confident about this comment; see thread right below]

    Hm, I don't think Altman looks good here either.

    We have to wait and see how the public and the media will react, but to me, this looks like it casts doubt on some things he said previously about his specific motivations for building AGI. It's hard to square the idea that working under Microsoft's leadership (and their need to compete with other companies like Alphabet) is a good environment for making AGI breakthroughs with thinking that it'll likely go really well.

    Although, maybe he's planning to only build specific apps for Microsoft rather than intending to build AGI there? That would seem like an atypical reduction of ambition/scope to me. Or maybe the plan is "amass more money and talent and then go back to OpenAI if possible, or otherwise start a new AGI thing with more independence from a profit-driven structure." That would be more understandable, but it also feels like he'd be being very agentic about this goal in a way that's scary, and like I'd have to trust this one person's judgment about pulling the brakes when it becomes necessary, even though there's now evidence that many people already think he hasn't been cautious enough recently.

    I guess we have to wait and see.

    This is admittedly a less charitable take than, say, Lukas Gloor's take.

    Haha, I was just going to say that I'd be very surprised if the people on the OpenAI board didn't have access to a lot more info than the people on the EA forum or Lesswrong, who are speculating about the culture and leadership at AI labs from the sidelines.

    TBH, if you put a randomly selected EA from a movement of thousands of people in charge of the OpenAI board, I would be very concerned: a non-trivial fraction of them would probably make decisions the way you describe. That's something that EA opinion leaders could maybe think about and address.

    But I don't think most people who hold influential positions within EA (or EA-minded people who hold influential positions in the world at large, for that matter) are likely to be that superficial in their analysis of things. (In particular, I'm strongly disagreeing with the idea that it's likely that the board "basically had no evidence except speculation from the EA/LW forum". I think one thing EA is unusually good at – or maybe I should say "some/many parts of EA are unusually good at" – is hiring people for important roles who think for themselves and have generally good takes about things and acknowledge the possibility of being wrong about stuff. [Not to say that there isn't any groupthink among EAs. Also, "unusually good" isn't necessarily that high of a bar.]) 

    I don't know for sure what they did or didn't consider, so this is just me going off of my general sense of people similar to Helen or Tasha. (I don't know much about Tasha. I've briefly met Helen but either didn't speak to her or only made small talk. I read some texts by her and probably listened to a talk or two.)
