All of juliakarbing's Comments + Replies

I really appreciate this kind of post :) Agree that no one has AIS field-building figured out and that more experimentation with different models would be great!

One of my main uncertainties about putting these kinds of research projects early in the pipeline (and indeed one of the main reasons the Oxford group has been putting them after a round of AGISF) is that an early research project makes it much harder to filter for people who are actually motivated by safety. Because there is such high demand among ML students for research projects, we...

2 · Joshc · 1y
That's a good point. Here's another possibility: require that students go through a 'research training program' before they can participate in the research program. It would have to actually help prepare them for technical research, though. Relabeling AGISF as a research training program would be misleading, so you would want to add a lot more technical content (reading papers, coding assignments, etc.).

It would probably be pretty easy to gauge how much the training program participants care about X-risk / safety and factor that in when deciding whether to accept them into the research program.

The social atmosphere can also probably go a long way in influencing people's attitudes towards safety. Making AI risk an explicit focus of the club, talking about it a lot at socials, inviting AI safety researchers to dinners, etc. might do most of the work tbh.

Hi :) I'm surprised by this post. Doing full-time community building myself, I have a really hard time imagining that any group (or sensible individual) would use these 'cult indoctrination techniques' as strategies to get other people interested in EA.

I was wondering if you could share anything more about specific examples / communities where you have found this happening? I'd find that helpful for knowing how to relate to this content as a community builder myself! :-)


(To be clear, I could imagine repeating talking points and closed social circles ha...

1 · electroswing · 2y
I should clarify: I think EAs engaging in this behavior are exhibiting cult indoctrination behavior unintentionally, not intentionally. One specific example would be in my comment here.

I also notice that when more experienced EAs talk to new EAs about x-risk from misaligned AI, they tend to present an overly narrow perspective. Sentences like "Some superintelligent AGI is going to grab all the power and then we can do nothing to stop it" are thrown around casually without stopping to examine the underlying assumptions. Then newer EAs repeat these cached phrases without having carefully formed an inside view, and the movement has worse overall epistemics.

Here is a recent example of an EA group having a closed-off social circle, to the point where a person who actively embraces EA has difficulty fitting in.

I haven't read the whole post yet, but the start of Zvi's post here lists 21 EA principles which are not commonly questioned.

I am not going to name the specific communities where I've observed culty behavior because this account is pseudonymous.