CFAR and MIRI are running an 18-day AI Summer Fellows Program (AISFP) in the San Francisco Bay Area from June 27 to July 14. The goal of this free program is to help build up participants' ability to make headway on the AI alignment problem: "the problem of creating AI systems that will reliably do what their users want them to do even when AI systems become much more capable than their users across a broad range of tasks."
From the CFAR website:
The intent of the program is to boost participants as far as possible in four skills:
- The CFAR applied rationality skillset, including both what is taught at our intro workshops, and more advanced material from our alumni workshops.
- Epistemic rationality as applied to the foundations of AI, and other philosophically tricky problems — i.e., the skillset taught in the core LW Sequences. (E.g.: reductionism; how to reason in contexts as confusing as anthropics without getting lost in words.)
- Technical forecasting in AI as well as AI alignment interventions. (E.g., the content discussed in Nick Bostrom’s book Superintelligence.)
- The ability to do AI alignment-relevant technical research, while reflecting on the cognitive habits involved. We will give crash courses in: reflection, logical uncertainty, and decision theory.
The program will bring together 20–24 participants with a mix of CFAR instructors and MIRI researchers.
The application deadline is Friday, April 20 (11:59pm PST). If you'd like to participate, fill out the short application form. Finalists will be contacted by a MIRI staff member for an interview.