Here are two things I wouldn't expect to be true at the same time:

  • The EA movement has a ton of programmers, many of them earning to give, and many of them interested in moving into some form of direct work.
  • Roles for programmers in direct work tend to sit open for a long time, and people trying to hire programmers have a really hard time finding people.

As far as I can tell, though, these really are both true! For example, I ran a small email survey (n=40, mostly engineers) and found that 30% were interested in switching to something more valuable, and another 40% were potentially interested. And there are a bunch of openings:

  • OpenAI has a Senior Software Engineer role that doesn't require ML experience, and an ML Engineer role that requires an amount of ML knowledge an engineer could pretty easily pick up on their own.
  • MIRI has an opening for a Software Engineer.
  • DeepMind has several openings, including the relatively generic Software Engineer, Science.
  • GiveDirectly has a more general Data / Tech role.
  • Wave has a bunch of openings (via 80k) including one for a Software Engineer. I have a bunch of thoughts about Wave in particular, but as a former employee I can't share them.

So, why don't these openings get filled quickly? Some guesses:

  • Location: the jobs aren't where the people are, and the people don't want to move. For example, I'm in Boston and don't want to leave or work remotely.
  • Pay: top tech companies can offer very high compensation, and these organizations don't pay as much. Though since postings don't include compensation, it's possible they actually do pay similarly? But maybe people don't apply because they assume it would be a large pay cut?
  • Experience: the jobs want someone who's been programming for a long time, and people who could take the jobs haven't been.
  • Ability: the jobs want extremely talented people, and most programmer EAs don't pass their bar. But this doesn't explain why a bunch of engineers I know at Google, which has a pretty high hiring bar, are looking to do more directly valuable things.
  • Personal risk aversion: as a parent of young children this makes a lot of sense to me! Moving across the country to work at a place that's not as financially secure as, say, Google, would be a real risk. (And one that hit me when I was laid off from Wave.)
  • Working conditions: maybe these jobs aren't as nice in ways other than pay? More hours, less free food, less ability to work on cool things? But this seems unlikely to me—lots of people want to work on ML.
  • Cause mismatch: the good jobs are all in AI safety, but the programmers looking to move are interested in global poverty, animal welfare, or something.
  • Awareness: maybe people are not actively looking for jobs and don't know what's available? Maybe 80k should have some sort of recruiter/headhunter that tries to match EAs to specific roles? Maybe they already do this and I don't know about it?
  • Imposter syndrome: people often don't have a good model of where they stand, and so might think possible jobs aren't for them. For example, MIRI posts that they're looking for "engineers with extremely strong programming skills", and probably some of the people who would do well there don't realize that their programming skills are good enough. Even if a job posting is framed in a friendly welcoming way, if the organization has a very strong reputation that in itself may make some people think they couldn't be good enough.
  • Combination: maybe there are jobs that do well on many different metrics, but not enough of them for any one person. For example, maybe there are jobs that pay well (OpenAI?) and jobs in global poverty (GiveDirectly) but if you want both there isn't something. Or there's remote work (Wave, etc) and there's work on AI risk, but no options for both.

What's going on? I'm especially interested in comments from programmers who would like to be doing direct work but are instead earning to give, but any speculation is welcome!

Thanks to Catherine Olsson for discussion that led to this post and reading a draft. Cross-posted from jefftk.com.


Comments

At least some people at OpenAI are making a ton of money: https://www.nytimes.com/2018/04/19/technology/artificial-intelligence-salaries-openai.html. Of course not everyone is making that much, but I doubt salaries at OpenAI/DeepMind are low. I think the obvious explanation is the best one: these companies want to hire top talent, and top talent is hard to find.

The situation is different for organizations that cannot afford high salaries. Let me link to Nate's explanation from three years ago:

I want to push back a bit against point #1 ("Let's divide problems into 'funding constrained' and 'talent constrained'.) In my experience recruiting for MIRI, these constraints are tightly intertwined. To hire talent, you need money (and to get money, you often need results, which requires talent). I think the "are they funding constrained or talent constrained?" model is incorrect, and potentially harmful. In the case of MIRI, imagine we're trying to hire a world-class researcher for $50k/year, and can't find one. Are we talent constrained, or funding constrained? (Our actual researcher salaries are higher than this, but they weren't last year, and they still aren't anywhere near competitive with industry rates.)
Furthermore, there are all sorts of things I could be doing to loosen the talent bottleneck, but only if I knew the money was going to be there. I could be setting up a researcher stewardship program, having seminars run at Berkeley and Stanford, and hiring dedicated recruiting-focused researchers who know the technical work very well and spend a lot of time practicing getting people excited -- but I can only do this if I know we're going to have the money to sustain that program alongside our core research team, and if I know we're going to have the money to make hires. If we reliably bring in only enough funding to sustain modest growth, I'm going to have a very hard time breaking the talent constraint.
And that's ignoring the opportunity costs of being under-funded, which I think are substantial. For example, at MIRI there are numerous additional programs we could be setting up, such as a visiting professor + postdoc program, or a separate team that is dedicated to working closely with all the major industry leaders, or a dedicated team that's taking a different research approach, or any number of other projects that I'd be able to start if I knew the funding would appear. All those things would lead to new and different job openings, letting us draw from a wider pool of talented people (rather than the hyper-narrow pool we currently draw from), and so this too would loosen the talent constraint -- but again, only if the funding was there. Right now, we have more trouble finding top-notch math talent excited about our approach to technical AI alignment problems than we have raising money, but don't let this fool you -- the talent constraint would be much, much easier to address with more money, and there are many things we aren't doing (for lack of funding) that I think would be high impact.

source: https://forum.effectivealtruism.org/posts/k6bBgWFdHH5hgt9RF/peter-hurford-thinks-that-a-large-proportion-of-people#DvKfX3iN5Z8kuaFs7

I don't think this is quite right. The people working at OpenAI are paid well, but at the same time they are taking huge cuts in salary compared to where they could be working otherwise. (Goodfellow and Sutskever could be making millions anywhere.) And given the distribution of salary, it's very likely that the majority of both OpenAI and DeepMind researchers are making under $200k, not a crazy amount for deep learning talent nowadays.

I’m not looking for an engineering role, but definitely for myself the disconnect between what I am looking for and what EA-adjacent opportunities I find advertised is 100% location. I live in a particular city and I am not in a position to move in the short term, and as that city is not the Bay, NYC, or Oxford, it’s hard to find any useful postings or even guidance from the online EA community. I’d love for 80,000 Hours to have any advice whatsoever tailored to someone constrained to job-searching only within their own city, but so far I haven’t come across any.

In case you haven't come across it yet, the 80,000 Hours job board has a filter for jobs which can be done remotely, which you might find useful.

I wrote this post recently:

https://www.lesswrong.com/posts/3u8oZEEayqqjjZ7Nw/current-ai-safety-roles-for-software-engineers

Generally, I feel like there are actually pretty few regular engineering positions around for EAs (maybe 8-15), and these both have fairly high bars and require work in the US/UK.

Small orgs have different needs to large ones, and most of the EA groups are small. This in part means they want senior and/or entrepreneurial types.

I do suggest that programmers learn ML or intensely learn functional programming, though not that many available people seem interested in either (especially those who are doing E2G outside of EA jobs). Either would be a significant challenge, for one thing.

The OpenAI and DeepMind posts you linked aren't necessarily relevant, e.g. the Software Engineer, Science role is not for DeepMind's safety team, and it's pretty unclear to me whether the OpenAI ML engineer role is safety-relevant.

My model is that if you want to move from generic software engineering to safety work that these would be very good next steps.

This seems plausible, but also quite distinct from the claim that "roles for programmers in direct work tend to sit open for a long time", which I took the list of openings to be supporting evidence for.

Conceptually related: SSC on Joint Over- and Underdiagnosis.

There was another discussion about this on the forum a couple of years ago: https://forum.effectivealtruism.org/posts/Ebjm8rNFP4mGEjtFD/is-the-community-short-of-software-engineers-after-all
