
Project management (PM) is a common role in the tech industry. However, I can't find much information about this role in the AI safety field beyond this earn-to-give-focused 80k career review for product management and this very short 80k career review for research management.

Is PM a common role in AI safety? Does it differ in that most AI safety work is research-focused, while most industry work is product-focused? Do AI safety organizations look for PMs with the strongest technical ability or the best people/management skills?

Answers
I see two new relevant roles on the 80,000 Hours job board right now.

Here's an excerpt from Anthropic's job posting. It asks for basic familiarity with deep learning and mechanistic interpretability, but mostly nontechnical skills.

In this role you would:

  • Partner closely with the interpretability research lead on all things team related, from project planning to vision-setting to people development and coaching.
  • Translate a complex set of novel research ideas into tangible goals and work with the team to accomplish them.
  • Ensure that the team's prioritization and workstreams are aligned with its goals.
  • Manage day-to-day execution of the team’s work including investigating models, running experiments, developing underlying software infrastructure, and writing up and publishing research results in a variety of formats.
  • Unblock your reports when they are stuck, and help get them whatever resources they need to be successful.
  • Work with the team to uplevel their project management skills, and act as a project management leader and counselor.
  • Support your direct reports as a people manager - conducting productive 1:1s, skillfully offering feedback, running performance management, facilitating tough but needed conversations, and modeling excellent interpersonal skills.
  • Coach and develop your reports to decide how they would like to advance in their careers and help them do so.
  • Run the interpretability team’s recruiting efforts, in concert with the research lead.

You might be a good fit if you:

  • Are an experienced manager and enjoy practicing management as a discipline.
  • Are a superb listener and an excellent communicator.
  • Are an extremely strong project manager and enjoy balancing a number of competing priorities.
  • Take complete ownership over your team’s overall output and performance.
  • Naturally build strong relationships and partner equally well with stakeholders in a variety of different “directions” - reports, a co-lead, peer managers, and your own manager.
  • Enjoy recruiting for and managing a team through a period of growth.
  • Effectively balance the needs of a team with the needs of a growing organization.
  • Are interested in interpretability and excited to deepen your skills and understand more about this field.
  • Have a passion for and/or experience working with advanced AI systems, and feel strongly about ensuring these systems are developed safely.

Other requirements:

  • A minimum of 3-5 years of prior management or equivalent experience
  • Some technical or science-based knowledge or expertise
  • Basic familiarity in deep learning, AI, and circuits-style interpretability, or a desire to learn
  • Previous direct experience in machine learning is a plus, but not required