TL;DR: The Cooperative AI Foundation (CAIF) is a new AI safety organisation and we're hiring for a Chief Operating Officer (COO). You can suggest people that you think might be a good fit for the role using this form. If you're the first to suggest the person we eventually hire, we'll send you $5000.

This post was inspired by conversations with Richard Parr and Cate Hall (though I didn't consult them about the post, and they may not endorse it). Thanks to Anne le Roux and Jesse Clifton for reading a previous draft. Any mistakes are my own.

Background

The Cooperative AI Foundation (CAIF, pronounced “safe”) is a new organisation supporting research on the cooperative intelligence of advanced AI systems. We believe that many of the most important problems facing humanity are problems of cooperation, and that AI will be increasingly important when it comes to solving (or exacerbating) such problems. In short, we’re an A(G)I safety research foundation seeking to build the nascent field of Cooperative AI.

CAIF is supported by an initial endowment of $15 million and some of the leading thinkers in AI safety and AI governance, but is currently lacking operational capacity. We’re expanding our team and our top priority is to hire a Chief Operating Officer – a role that will be critical for the scaling and smooth running of the foundation, both now and in the years to come. We believe that this marks an exciting opportunity to have a particularly large impact on the growth of CAIF, the field, and thus on the benefits to humanity that it prioritises.

How You Can Help

Do you know someone who might be a good fit for this role? Submit their name (and yours) via this form. CAIF will reach out to the people we think are promising, and if you were the first to suggest the person we eventually hire, we'll send you a referral bonus of $5000. The form explains exactly what details a referral requires, with the following conditions:

  • Referrals must be made by 3 July 2022 23:59 UTC
  • You can't refer yourself (though if you're interested in the role, please apply!)
  • Please don't directly post names or personal details in the comments below
  • We'll only send you the bonus if the person you suggest (and we hire) isn't someone we'd already considered
  • The person you refer doesn't need to already be part of the EA community[1] or be knowledgeable about AI safety
  • If you've already suggested a name to us (i.e., before this was posted), we'll still send you the bonus
  • If you have any questions about the referral scheme, please comment below

Finally, we're also looking for new ways of advertising the role.[2] If you have suggestions, please post them in the comments below. If we use your suggested method (and we weren't already planning to), we'll send you a smaller bonus of $250. Feel free to list all your suggestions in a single comment – we'll send you a bonus for each one that we use.

Why We Posted This

Arguably, the most critical factor in how successful an organisation is (given sufficient funding, at least) is the quality of the people working there. This is especially true for us as a new, small organisation with ambitious plans for growth, and for a role as important as the COO. Because of this, we are strongly prioritising hiring an excellent person for this role.

The problem is that finding excellent people is hard (especially for operations roles). Many excellent people who might consider moving to this role are not actively looking for one, and may not already be in our immediate network. This means that referrals are critical for finding the best person for the job – hence this post.

$5000 may seem like a lot for simply sending us someone's name, but it's a small price to pay relative to the increase in CAIF's impact. Moreover, it's also well below what a recruitment agency would usually charge for a successful referral to a role like this – and using these agencies is often worth it! Finally, though there is some risk of the referral scheme being exploited, we believe that the upsides substantially outweigh these risks. We suggest that other EA organisations might want to adopt similar schemes in the future.

  1. ^

    I owe my appreciation of this point to Cate.

  2. ^

    Like this post, which was suggested by Richard.
