1029 · Joined Sep 2014


I am Issa Rice. https://issarice.com/



Has Holden written any updates on outcomes associated with the grant?

Not to my knowledge.

I don't think that lobbying against OpenAI, or other adversarial action, would have been that hard.

It seems like once OpenAI was created and had disrupted the "nascent spirit of cooperation", even if OpenAI went away (like, the company and all its employees magically disappeared), the culture/people's orientation to AI stuff ("which monkey gets the poison banana" etc.) wouldn't have been reversible. So I don't know if there was anything Open Phil could have done to OpenAI in 2017 to meaningfully change the situation in 2022 (other than like, slowing AI timelines by a bit). Or maybe you mean some more complicated plan like 'adversarial action against OpenAI and any other AI labs that spring up later, and try to bring back the old spirit of cooperation, and get all the top people into DeepMind instead of spreading out among different labs'.

Eliezer's tweet is about the founding of OpenAI, whereas Agrippa's comment is about a 2017 grant to OpenAI (OpenAI was founded in 2015, so this was not a founding grant). To argue that Open Phil's grant was net negative (and so strongly net negative as to swamp other EA movement efforts), one would have to compare OpenAI's work in a counterfactual world where it never got the extra $30 million in 2017 (and Holden never joined the board) with the actual world in which those things happened. That seems a lot harder to argue for than what Eliezer is claiming (Eliezer only has to compare a world where OpenAI didn't exist with the actual world where it does).

Personally, I agree with Eliezer that the founding of OpenAI was a terrible idea, but I am pretty uncertain about whether Open Phil's grant was a good or bad idea. Given that OpenAI had already disrupted the "nascent spirit of cooperation" that Eliezer mentions and was going to do things, it seems plausible that buying a board seat for someone with quite a bit of understanding of AI risk is a good idea (though I can also see many reasons it could be a bad idea).

One can also argue that EA memes re AI risk led to the creation of OpenAI, and that therefore EA is net negative (see here for details). But if this is the argument Agrippa wants to make, then I am confused why they decided to link to the 2017 grant.

What textbooks would you recommend for these topics? (Right now my list is only “Linear Algebra Done Right”)

I would recommend not starting with Linear Algebra Done Right unless you already know the basics of linear algebra. The book does not cover some basic material (like row reduction, elementary matrices, solving linear equations) and instead focuses on trying to build up the theory of linear algebra in a "clean" way, which makes it enlightening as a second or third exposure to linear algebra but a cruel way to be introduced to the subject for the first time. I think 3Blue1Brown videos → Vipul Naik's lecture notes → 3Blue1Brown videos (again) → Gilbert Strang-like books/Treil's Linear Algebra Done Wrong → 3Blue1Brown videos (yet again) → Linear Algebra Done Right would provide a much smoother experience. (See also this comment that I wrote a while ago.)
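To make concrete the kind of "basic material" mentioned above that Linear Algebra Done Right skips, here is a minimal sketch of row reduction (Gaussian elimination with partial pivoting) used to solve a small linear system. This is an illustrative toy, not production code (use a library routine like `numpy.linalg.solve` for real work); the function name is my own.

```python
def solve_by_row_reduction(A, b):
    """Solve A x = b for a square, invertible A by row-reducing [A | b]."""
    n = len(A)
    # Build the augmented matrix [A | b] as floats.
    M = [[float(v) for v in row] + [float(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [v - factor * pv for v, pv in zip(M[r], M[col])]
    # After full reduction, the last column holds the solution.
    return [row[-1] for row in M]

# Example: x + 2y = 5 and 3x + 4y = 11 have solution x = 1, y = 2.
print(solve_by_row_reduction([[1, 2], [3, 4]], [5, 11]))
```

This is roughly the first algorithm covered in Strang-style introductions, and exactly the sort of hands-on computation that makes the later, more abstract treatment in Linear Algebra Done Right easier to appreciate.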

Many domains that people tend to conceptualize as "skill mastery, not cult indoctrination" also have some cult-like properties like having a charismatic teacher, not being able to question authority (or at least, not being encouraged to think for oneself), and a social environment where it seems like other students unquestioningly accept the teachings. I've personally experienced some of this stuff in martial arts practice, math culture, and music lessons, though I wouldn't call any of those a cult.

Two points this comparison brings up for me:

  • EA seems unusually good compared to these "skill mastery" domains in repeatedly telling people "yes, you should think for yourself and come to your own conclusions", even at the introductory levels, and also just generally being open to discussions like "is EA a cult?".
  • I'm worried this post will be condensed into people's minds as something like "just conceptualize EA as a skill instead of this cult-like thing". But if even skill-like things have cult-like elements, maybe that condensed version won't help people make EA less cult-like. Or maybe it's actually okay for EA to have some cult-like elements!

He was at UW in person (he was a grad student at UW before he switched his PhD to AI safety and moved back to Berkeley).

Setting expectations without making it exclusive seems good.

"Seminar program" or "seminar" or "reading group" or "intensive reading group" sound like good names to me.

I'm guessing there is a way to run such a group in a way that both you and I would be happy about.

The actual activities that the people in a fellowship engage in, like reading things and discussing them and socializing and doing giving games and so forth, don't seem different from what a typical reading club or meetup group does. I am fine with all of these activities, and think they can be quite valuable.

So how are EA introductory fellowships different from a bare reading club or meetup group? My understanding is that the main differences are exclusivity and the branding. I'm not a fan of exclusivity in general, but especially dislike it when there doesn't seem to be a good reason for it (e.g. why not just split the discussion into separate circles if there are too many people?) or where self-selection would have worked (e.g. making the content of the fellowship more difficult so that the less interested people will leave on their own). As for branding, I couldn't find a reason why these groups are branded as "fellowships" in any of the pages or blog posts I looked at. But my guess is that it is a way to manufacture prestige for both the organizers/movement and for the participants. This kind of prestige-seeking seems pretty bad to me. (I can elaborate more on either point if you want to understand my reasoning.)

I haven't spent too much time looking into these fellowships, so it's quite possible I am misunderstanding something, and would be happy to be corrected.

I didn't. As far as I know, introductory fellowships weren't even a thing in EA back in 2014 (or if they were, I don't remember hearing about them back then despite reading a bunch of EA things on the internet). However, I have a pretty negative opinion of these fellowships so I don't think I would have wanted to start one even if they were around at the time.

(I tried starting the original EA group at UW in 2014. I'm no longer a student at UW and don't even live in the Seattle area currently.)

Seems like you found the Messenger group, which is the most active thing I am aware of. You've also probably seen the Facebook group and could try messaging some of the people there who joined recently.

I don't want to discourage you from trying, but here are some more details: I was unable to start an EA group at UW in 2014 (despite help from Seattle EA organizers). At the time I thought this was mainly due to my poor social skills (and, to be honest, I think my poor social skills were still a significant factor). But then Rohin Shah (who was one of the organizers or creators of the successful group at UC Berkeley) tried starting the group again in 2016 and it still didn't take off. I think a bunch of factors make it pretty difficult to start an EA group at UW (less curious/smart students, people being more narrowly career-oriented, UW being a commuter school, etc.; given how big the school is, UW is an unintuitively bad fit), and this is something I wish I had known back in 2014 (at the time at least, I had only heard of successful student groups, so I thought it would be easy to get a group going and meet Really Cool People).

Scott Garrabrant has discussed this (or some very similar distinction) in some LessWrong comments. There's also been a lot of discussion about babble and prune, which is basically the same distinction, except happening inside a single mind instead of across multiple minds.
