Per Andy Jones over at LessWrong:
If you think you could write a substantial pull request for a major machine learning library, then major AI safety labs want to interview you today.
I work for Anthropic, an industrial AI research lab focussed on safety. We are bottlenecked on aligned engineering talent. Specifically engineering talent. While we'd always like more ops folk and more researchers, our safety work is limited by a shortage of great engineers.
I've spoken to several other AI safety research organisations who feel the same.
I'm not sure what you mean by "AI safety labs", but Redwood Research, Anthropic, and the OpenAI safety team have all hired self-taught ML engineers. DeepMind has a reputation for being more focused on credentials. Other AI labs don't do as much research that's clearly focused on AI takeover risk.
I'm currently at DeepMind and I'm not really sure where this reputation has come from. As far as I can tell DeepMind would be perfectly happy to hire self-taught ML engineers for the Research Engineer role (but probably not the Research Scientist role; my impression is that this is similar at other orgs). The interview process is focused on evaluating skills, not credentials.
DeepMind does get enough applicants that not everyone makes it to the interview stage, so it's possible that self-taught ML engineers are getting rejected before getting a chance to show they know ML. But presumably this is also a problem that Redwood / Anthropic / OpenAI have? Presumably there is some way that self-taught ML engineers are signaling that they are worth interviewing. (As a simple example, if I personally thought someone was worth interviewing, my recommendation would function as a signal for "worth interviewing", and in that situation DeepMind would interview them, and at that point I predict their success would depend primarily on their skills and not their credentials.)
If there's some signal of "worth interviewing" that DeepMind is failing to pick up on, I'd love to know that; it's the sort of problem I'd expect DeepMind-the-company to want to fix.
I think DM clearly restricts REs more than OpenAI does (and I assume Anthropic does too). I know REs at DM who have found it annoying or difficult to lead projects because of being REs; I know someone without a PhD who left Brain (not DeepMind, but still Google, so probably more similar) partly because it was restrictive, and who went on to lead a team at OAI/Anthropic; and I know people without an undergrad degree who have been hired by OAI/Anthropic. At OpenAI, I'm not aware of it being any harder for people to lead projects because of being 'officially an RE'. I had bad experiences at DM that were ostensibly related to not having a PhD (though they could also be explained by a lack of research ability on my part).