Reason for this post: many university community builders are considering pivoting their groups hard towards AI safety. From the perspective of how community builders should spend their time, this seems like the wrong tradeoff.
Argument: It seems unlikely that university “community building” frameworks are a great fit for the AI safety space. It’s difficult to build and sustain a community devoted to a cause that <1% of members will be able to get jobs in[1]. Even if sustaining such communities were feasible, the approach seems unlikely to be optimal, as a ton of organizer time and effort goes to waste[2]. I think a better model would be closer to “recruiting”. (Edit: I previously also used the term "movement building" to describe things like "upskilling" that could be part of a recruitment pipeline but don't fit neatly under "recruiting". As others pointed out, the term is pretty vague and unhelpful, so I removed it.)
Edit: by recruiting, I mean building out a pipeline specifically tailored to getting folks into AI safety jobs, in contrast to generally building a community, which is what most EA student orgs currently focus on.
If this is true, then current university community builders should consider not whether to pivot their groups towards AI, but rather whether to leave their groups behind and enter recruiting. If these fields are more efficient than community building, however, they would likely have fewer jobs, meaning fewer opportunities for impact for current community builders[3]. If you buy fanaticism, doing anything you can to improve recruiting/movement building may be worth giving up community building[4]; if you aren’t okay with fanaticism, it seems worthwhile to evaluate the number of opportunities out there and your relative fit compared to others.
Thoughts?
1. ^
Recent technical researcher hiring rounds for Anthropic and Redwood have been oversubscribed 100:1. A big reason for this is that candidates are overwhelmingly underqualified, implying that if applicants were more qualified, more would be hired. That said, given how fast interest in the field is growing, it seems likely that applicant numbers will continue to grow faster than job openings, even assuming higher qualifications. (This seems especially likely if, once it has filled out its management ranks, Redwood can begin hiring non-EA research scientists. This seems to be their current plan, and it would expand their potential applicant pool by many x.) In this world, the vast majority of interested folks will not be able to contribute technically. While there will certainly be many non-technical jobs in the space, it would be surprising if non-technical roles vastly exceeded technical ones.
2. ^
Widely-targeted community building seems very different from hits-based projects. Given the narrow qualifications for technical researchers, widening AI community building seems likely to have quickly diminishing returns.
3. ^
I could be wrong about this - maybe AI alignment is so valuable that we should have, say, 10 or 100 recruiters per safety engineer opening. If the main hiring bottleneck is applicant qualifications, however, I'm not sure why we would need a ton of non-technical recruiters/movement builders to solve that.
4. ^
Okay, maybe not anything... if you counterfactually displace someone who would have recruited better, that's almost infinitely bad, right? Maybe a better qualifier would be "as long as your work expands the number of opportunities in the space or is marginally better than the next best alternative in a zero-sum situation."
If Redwood and Anthropic are flooded with applications from underqualified applicants, I think this is just because they barely have any hard requirements on their job postings. It's a lot of fluff like "Have broad knowledge of many topics in computer science, math, and machine learning, and have enthusiasm for quickly picking up new topics." In contrast, most job postings say something like: you should know these specific topics, have x years of experience, have an MS degree, etc. So I don't think people should feel too discouraged by the low offer rate. EDIT: If only 1% of applicants get accepted, the reason is probably something like "you need significant experience in natural language processing or data engineering or something, even though this wasn't mentioned in the job posting" rather than "you need to be smarter than 99% of EAs". (That said, I think it can make sense for Redwood and Anthropic to write vague requirements on their job postings, so that they don't miss out on great candidates who otherwise wouldn't have applied.)
So rather than looking at the offer rate for Redwood and Anthropic, I think the more relevant question is: how much harder is it to get an AI safety position compared to other AI positions? Many universities have general AI clubs, which may be fairly popular and presumably help members pursue a career in AI. Quality AI positions can be hard to get, but that doesn't undermine the viability or usefulness of those clubs. Likewise, I think it would often make sense to have an AI safety club. I don't think an EA club should go all-in on AI safety and make it sound like all their members should only be trying to get AI safety jobs, though.
EDIT: Of course, just because an AI club is viable doesn't mean that an AI safety club is. But I'm optimistic that it is, at least at certain universities. At EA at Georgia Tech, which we started just last year, we currently have 35+ people in our AGI Safety Fundamentals Program. Starting this week, we'll be holding weekly events as part of the AI alignment speaker series that Harvard EA has organized this semester. We're considering spinning this off into a separate AI safety club before the next school year, which could then run additional programming like general discussion events or social events. EA Oxford started a dedicated AI safety club this semester with an impressive lineup of guest speakers, and EA MIT's new AI safety club is also going well.
What's the distinction between "community building" and "movement building"?
Right, this is what I suspect. It's naturally more efficient to expand a pre-existing program than to create a new one from scratch, especially in highly technical fields.
I don't have a great inside view on this, but the sorts of workshops Sydney has been running see...