The January-March 2022 iteration of Cambridge EA's AGI Safety Fundamentals programme is offering £800 compensation for facilitators knowledgeable in technical AI alignment and/or AI governance! Apply to facilitate here or participate here.
AGI Safety Fundamentals programme background
This is the 3rd iteration of an 8-week seminar programme which pairs participants with facilitators for high-fidelity discussion of structured, chronological content on AGI safety. It finishes with capstone projects, supporting graduates to do high-quality research in the future. Collaboration and networking opportunities are provided, as well as information about relevant career opportunities.
Two tracks are available — AI alignment and AI governance — both with a focus on mitigating long-term risks.
Why should I facilitate?
Over the summer, 170 participants took part in the AGI Safety Fundamentals programme. This was only possible thanks to the 30 facilitators who hosted high-fidelity discussions. We are on track to receive more applicants than in the summer round, so the demand for facilitators has increased!
Previous facilitators rated the programme highly when asked whether they would recommend it to others interested in learning more about AI safety. For individuals with a suitable background (detailed below), we believe this is an effective way to contribute to AI safety field building and to the development of future researchers and policymakers.
Link to a forum post with a more detailed retrospective on the previous iteration.
How to apply to facilitate
If you are:
- Excited and knowledgeable about AI alignment or (long-term) governance
- Already familiar with a good portion (50%+) of the readings/concepts in either curriculum (above), and able to summarise those concepts for participants
- Willing and able to put in ~4 hours per week total for 8 weeks during Jan-March 2022
Then we would be delighted if you signed up to facilitate this round of the programme!
No previous facilitation experience is required; we can provide training. A weekly guide with discussion prompts will also be provided.
Facilitators are remunerated £800 (~$1000) for their work.
Sign up here by 15th December!
If you’re instead interested in participating (rather than facilitating), apply to participate here.
Further details about the programme
This programme is run by Cambridge Effective Altruism CIC, in collaboration with AI safety group leaders globally.
The alignment curriculum was designed by Richard Ngo, a former ML research engineer at DeepMind, now working on the policy team at OpenAI. The governance curriculum was designed by Stanford Existential Risks Initiative organisers, in collaboration with Richard.
People from all regions of the world are welcome to participate in the programme.
The programme is hosted virtually by default, though there will be an option for in-person cohorts in your region where possible.