
Successif is excited to announce our technical AI safety and AI governance services. This post focuses exclusively on our AI program. For more information on our general program or on our organization, please refer to our last post here.
 

High-level Summary

For the past year and a half, we have been supporting professionals aiming to transition to high-impact roles. Today, we are announcing: 

  • The publication of some of the findings from our ongoing AI market research. 
  • An AI-specific program for those professionals who know they want to work on mitigating the catastrophic and existential risks of AI systems.
  • Training programs to help those currently working in AI safety and AI governance increase their impact, as well as those looking to transition into certain AI jobs.

 

I. Context

With recent developments in AI and growing political enthusiasm for certain aspects of AI safety, we decided to develop additional programs to support the fields of AI safety and AI governance. 

Welcoming mid-career professionals into the field will make it easier to mitigate existential and catastrophic risks from AI on short timelines. Given shortened timelines for risk actualization, it is essential to tap into professionals who do not need years of study to be on-the-job ready. Mid-career professionals bring unique skillsets. For instance, they can offer new perspectives on issues by leveraging their diverse backgrounds to draw relevant analogies. They have also had more time to develop soft skills, and are consequently better qualified to manage others and exercise leadership. Their greater career capital and networks can also be leveraged to mitigate AI risks. Finally, they can provide mentorship and guidance to less experienced professionals in AI safety and governance, something the community currently needs.

This is why we are offering the following new services: 

  1. Continuous AI job market research to help others identify the most impactful jobs for mitigating catastrophic and existential risks of AI. For each of these roles, we build a theory of change, identify necessary skills, and gather the most common interview questions.
  2. Self-guided training programs to equip people in the fields of AI safety and AI governance with common policy and advocacy tools to effectively work together and contribute to the policy debate.
  3. An AI-specific program to support professionals looking to transition into AI governance and AI safety. 

We are also announcing a change to our general program. While people applying for our general track still have access to peer support and collective coaching, one-on-one advising sessions are now primarily reserved for AI program participants, with some exceptions. We may refer non-AI participants to other organizations: the number of career services for mid-career people has been increasing, and our team is particularly well placed to advise individuals seeking to transition into high-impact AI work.

 

II. AI Market Research

One of our primary focuses is conducting continuous AI job market research to pinpoint the jobs with the highest potential to mitigate the risks of AI. This includes a review of organizations’ strategic documents, interviews with experts and professionals in AI, and constant monitoring of legal, political, and technological developments. Since the field is evolving at a fast pace, our research process is continuous and allows us to adapt and update our strategy accordingly in order to maximize the impact of our organization and our participants. 

We have a continuously updated internal database that contains the types of roles we identify as high-leverage to mitigate catastrophic and existential risks from AI systems. For each role, our database includes the following:

  • Tasks performed on the job
  • Required skills and background
  • Specific organizations where these roles could be especially leveraged
  • A list of interview questions frequently asked in the hiring process
  • A way to assess candidates’ fit for the role

As of today, we are publishing a partial version of this database in a report on our website, accessible here. Note that it is a constantly evolving endeavor as the field is changing rapidly. We are hoping our report will be useful for the community. We are limiting the amount of public information we are releasing because we want to avoid helping non-impact oriented individuals secure the same jobs.
 

III. AI Program

Our AI program includes:

  • Access to collective workshops (Pathways to catastrophic and existential risks, Ikigai, Career transition strategies, Women & leadership, Working in AI safety, Working in AI governance)
  • Access to a peer-support group
  • Skills assessment sessions 
  • Individual career mentorship sessions
  • Mock interviews
  • Opportunity matching

Apply here for our career services.

IV. Training Programs

As of November 2023, we will be rolling out, unit by unit, several training modules. These are for both individuals currently in a high-impact AI role and those looking to transition into one. With our advice, professionals will be able to assemble different modules into their own learning path, based on their skills, experience, and professional goals.

The training program will focus on four career tracks: (1) AI media advocacy, (2) AI policy advocacy, (3) AI policy analysis, and (4) AI governance research. The units are self-guided and can be taken at any time. Some units include exercises graded by humans and returned to you. There is an exam you can take at the end to validate the course and be issued a certificate if you pass.

If you are a member of a target audience for one of the courses, and you believe taking it would help you maximize your impact, you can apply here. The training programs are free, but you have the option to make a donation at the end to offset our costs if you have found it valuable.

| Career Track | Target Audience | Learning Objectives (not exhaustive) |
| --- | --- | --- |
| AI Media Advocacy | Current and prospective (1) technical AI safety researchers, (2) AI governance researchers, and (3) AI policy analysts | (1) Understanding public opinion, (2) identifying the most effective ways to amplify your findings, (3) giving effective media interviews, (4) handling adversarial conversations, (5) writing and publishing op-eds, (6) writing and publishing press releases and media advisories |
| AI Policy Advocacy | Current and prospective (1) technical AI safety researchers, (2) AI governance researchers, and (3) AI policy analysts | (1) Engaging with policymakers successfully, (2) negotiating policy proposals, (3) negotiating in multicultural and international contexts |
| AI Policy Analysis | Current and prospective AI policy analysts | (1) Diagnosing a policy problem, (2) coming up with feasible and effective policy solutions, (3) defending a policy proposal |
| AI Governance Research | Current and prospective AI governance researchers | (1) Selecting the most suited research method, (2) conducting effective research, (3) amplifying findings |

While most of the examples used in the modules are drawn from the fields of AI governance and AI safety, the main content focuses on teaching participants skills rather than deepening their theoretical knowledge of AI policy. Other organizations, such as GovAI and BlueDot, already cover AI policy content comprehensively. In addition, organizations such as Horizon and Training for Good provide targeted training for those who want to go into AI policy in the US and the EU. Where relevant, we encourage our participants to apply to these programs and support them during the application process.

 

V. How We Can Help You

  • Opportunity matching: If you are a high-impact organization working on AI and have a potential project sitting on a desk that you never hired for because it would require niche skills, we can help you find a good person to take it on! 
  • Research: If you have an impactful AI research project idea, let us know! Our trainees could undertake it as their capstone project. 
  • Head hunting: If you have an open role that is difficult to hire for, we can help! 
  • Training programs: Let us know if our training modules could be useful to you or your organization. 

For any of the above or other questions, please contact us at contact@successif.org.
