
Reason for this post: many university community builders are considering pivoting their groups hard towards AI safety. From the perspective of how community builders should spend their time, this seems like the wrong tradeoff.

Argument: It seems unlikely that university “community building” frameworks are great fits for the AI safety space. It’s difficult to build and sustain a community devoted to a cause that <1% of members will be able to get jobs in[1]. Even if it were feasible to sustain such communities, the approach seems unlikely to be optimal, as a ton of organizer time and effort goes to waste[2]. I think a better model would be closer to “recruiting”. (Edit: I previously also used the term "movement building" to cover things like upskilling that could be part of a recruitment pipeline but aren't usually associated with the word "recruiting". As others pointed out, the term is pretty vague and unhelpful, so I removed it.)

Edit: by recruiting, I mean building out a pipeline specifically tailored to getting folks into AI safety jobs, in contrast to generally building a community, which is what most EA student orgs currently focus on.

If this is true, then current university community builders should consider not whether to pivot their groups towards AI safety, but whether to leave their groups behind and enter recruiting. If recruiting is more efficient than community building, however, it would likely support fewer jobs, meaning fewer opportunities for impact for current community builders[3]. If you buy fanaticism, doing anything you can to improve recruiting may be worth giving up community building[4]; if you aren’t okay with fanaticism, it seems worthwhile to evaluate the number of opportunities out there and your relative fit compared to others.

Thoughts?

  1. ^

    Recent technical researcher hiring rounds at Anthropic and Redwood have been oversubscribed 100:1. A big reason for this is that candidates are overwhelmingly underqualified, implying that if applicants were more qualified, more would be hired. That said, given how fast interest in the field is growing, it seems likely that applicant numbers will continue to grow faster than job openings, even assuming higher qualifications. (This seems especially likely if Redwood, once it fills out its management ranks, can begin hiring non-EA research scientists. This seems to be their current plan, and it would expand their potential applicant pool many times over.) In this world, the vast majority of interested folks will not be able to contribute technically. While there will certainly be many non-technical jobs in the space, it would be surprising if non-technical roles vastly outnumbered technical ones.

  2. ^

    Widely-targeted community building seems very different from hits-based projects. Given the narrow qualifications required of technical researchers, broad AI safety community building seems likely to hit quickly diminishing returns.

  3. ^

    I could be wrong about this - maybe AI alignment is so valuable that we should have, say, 10 or 100 recruiters per safety engineer opening. If the main hiring bottleneck is applicant qualifications, however, I'm not sure why we would need a ton of non-technical recruiters to solve that.

  4. ^

    Okay, maybe not anything... if you counterfactually displace someone who would have recruited better, that's almost infinitely bad, right? A better qualifier might be "as long as your work expands the number of opportunities in the space or is marginally better than the next best alternative in a zero-sum situation."

Comments
mic

If Redwood and Anthropic are flooded with applications from underqualified applicants, I think this is just because they barely have any hard requirements on their job postings. It's a lot of fluff like "Have broad knowledge of many topics in computer science, math, and machine learning, and have enthusiasm for quickly picking up new topics." In contrast, most job postings say something like: you should know these specific topics, have x years of experience, have an MS degree, etc. So I don't think people should feel very discouraged by their low offer rate. EDIT: If only 1% of applicants get accepted, the reason is probably something like "you need significant experience in natural language processing or data engineering, even though this wasn't mentioned in the job posting" rather than "you need to be smarter than 99% of EAs". (That said, I think it can make sense for Redwood and Anthropic to write vague requirements on their job postings, so that they don't miss out on great candidates who otherwise wouldn't have applied.)

So rather than looking at the offer rate for Redwood and Anthropic, I think the more relevant question is: how much harder is it to get an AI safety position compared to other AI positions? Many universities have general AI clubs, which may be fairly popular and presumably help members pursue careers in AI. Quality AI positions can be hard to get, but this doesn't stop AI clubs from being viable or fruitful. Likewise, I think it would often make sense to have an AI safety club. That said, I don't think an EA club should go all-in on AI safety and make it sound like all their members should only be trying to get AI safety jobs.

EDIT: Of course, just because an AI club is viable doesn't mean that an AI safety club is. But I'm optimistic that it is, at least at certain universities. At EA at Georgia Tech, which we started just last year, we currently have 35+ people in our AGI Safety Fundamentals program. Starting this week, we'll be hosting weekly events with the AI alignment speaker series that Harvard EA has organized this semester. We're considering spinning this off into a separate AI safety club before the next school year, which could then run additional programming like general discussion events or social events. EA Oxford started a dedicated AI safety club this semester with an impressive lineup of guest speakers, and EA MIT's new AI safety club is also going well.

What's the distinction between "community building" and "movement building"?

> So I don't think people should feel very discouraged by their low offer rate.

To clarify, is your main point here that AI safety orgs could absorb a lot more talent if folks were more qualified? If so, that's also my understanding. I doubt, however, that general university groups are the optimal way to build qualified and motivated applicants, because student clubs inherently spend a lot of time on non-development-focused activities. It seems like outsourcing the recruitment pipeline to, say, Cambridge's AGI Safety Fundamentals program, the GCP Guides program, plus cross-university skill-building workshops would accomplish the majority of what a successful university AI safety club might accomplish, plus some, with a lot less organizer time.

Given that those programs aren't fully built out, it might make sense for organizers to spend more of their time helping to build up those programs, rather than devote a ton of time to an AI club at their home university.

> What's the distinction between "community building" and "movement building"?

Good catch - I added "movement building" late last night and it's way too vague. I meant it to encompass important things that recruiting doesn't really touch on, like upskilling, but the term is too unspecific. I'll add a note.

Yup, I think it would be helpful if more people seriously advertised the AGI Safety Fundamentals program or recommended the GCP Guides program (once that builds capacity to take on more people), if they don’t have time to run those programs locally or have more valuable things to do. Something else I would add to the pipeline is having students learn more about machine learning through courses, MOOCs, bootcamps, or research opportunities. People are more likely to get engaged by things that are local and in-person, but I think the gap between in-person and virtual outsourced engagement can be narrowed somewhat if you write the right marketing and still have some in-person activities, like weekly group lunches/dinners.

I’d be excited to see cross-university skill-building workshops. Do you have more details on what you’re envisioning here? What sorts of workshops do you think would be most useful? But it’s also possible that creating these isn’t in a student's comparative advantage, especially if they aren’t already that knowledgeable about the skills they want to teach.

> But it’s also possible that creating these isn’t in a student's comparative advantage, especially if they aren’t already that knowledgeable about the skills they want to teach.

Right, this is what I suspect. It's naturally more efficient to expand a pre-existing program than to create a new one from scratch, especially in highly technical fields.

> Do you have more details on what you’re envisioning here? What sorts of workshops do you think would be most useful?

I don't have a great inside view on this, but the sorts of workshops Sydney has been running seem pretty popular (we had a couple of USC fellows attend her "Impact Generator" workshop, and they found it both helpful and motivating). Lightcone in the Bay is doing a ton of that too, and GCP was planning to build out workshops after fine-tuning their Guides program.

Without confidently claiming that it's the case with these organisations, it seems worth flagging that if the sort of hard cutoffs you're talking about don't track talent particularly well, it may be worth it for orgs to pay the cost of reviewing more applications rather than risk some of the few talented people self-excluding. It's noteworthy that I can instantly think of three field leaders in AI safety who either didn't start or didn't finish undergrad.

Having said that, Andy Jones of Anthropic did put a pretty clear bar into his recent post pointing out the need for more engineers in safety: "Could write a substantial pull request for a major ML library."

> What's the distinction between "community building" and "movement building"?

Yeah, this post is particularly clear.

> Many university community builders are considering pivoting their groups hard towards AI safety

Are you comfortable naming any names (perhaps in PM)? I'd be keen to chat to a few of these people.

Sure, I'll shoot you a PM.