Apply to the Seminar to Study, Explain, and Try to Solve Superintelligence Alignment
Applications for the AFFINE Superintelligence Alignment Seminar are now open, and we invite you to apply. It will take place in Hostačov, near Prague (Czechia), from 28 April to 28 May.
We are working on a draft of the learning materials in consultation with world experts.
KEY INFO
- Dates: 28 April to 28 May
- Location: Hostačov, 582 82 Skryje-Golčův Jeníkov, Czechia 🇨🇿
- Accommodation & catering: Covered
- Mentors: Abram Demski, Ramana Kumar, Steve Byrnes, Kaj Sotala, Kaarel Hänni, Jonas Hallgren, Ouro, Cole Wyeth, Aram Ebtekar, Elliot Thornley, Linda Linsefors, Paul ‘Lorxus’ Rappoport, and more
- Positions available: 30
- Attendance cost: Free (donations welcome; we have some budget for travel expenses)
- Stipends: We are applying for additional funding to cover stipends for those who would benefit from them
- TO JOIN: Fill out this form by 8 March
Goal
The main purpose of the Seminar is to give promising newcomers to AI alignment an opportunity to acquire a deep understanding of some large pieces of the problem, equipping them to work on mitigating AI existential risk.
An ASI breaking out of human control and pursuing ends misaligned with human flourishing is a central model of catastrophic and existential risk from AI. Despite that, research aimed at actually solving the ASI alignment problem is systematically neglected by the broader AI Safety ecosystem, which instead spends most of its resources on work that is only broadly related to it (monitoring, steering, and measuring relatively small amounts of optimization, etc.).
It is our goal to fix this inadequacy and provide more people with the prerequisites to tackle the core problems of superintelligence alignment.
The problem at hand is very difficult, so we will focus on learning, distillation, and debate/epistemics practice for the full month, rather than trying to produce novel research.
Strategy
The program will concentrate on learning outcomes: topics or concepts that have a good chance of being relevant to superintelligence alignment. Participants will work to understand a topic by reading the materials, thinking about it, and discussing it with other participants and mentors, and will then solidify their understanding by teaching the topic to other participants, primarily through 1-on-1 peer teaching, but also through lectures or written materials. The alpha version of the list of learning outcomes can be found here.
There will also be lectures, workshops, debates, and discussions.
The Czech countryside setting removes urban distractions while providing space for both focused solo work and spontaneous collaboration. The program rhythm will alternate between intensive technical engagement and explicit recovery time, preventing the burnout that plagues many month-long intensives.
We expect to accept up to 30 mentees, in addition to a number of mentors, on-site as well as remote. We will have two full-time on-site mentors: Ouro (ex-Orthogonal) and Jonas Hallgren (Equilibria Network). Other confirmed mentors include: Abram Demski, Ramana Kumar, Steve Byrnes, Kaj Sotala, Kaarel Hänni, Cole Wyeth, Aram Ebtekar, Elliot Thornley, Linda Linsefors, and Paul ‘Lorxus’ Rappoport.
If funding allows, we will extend the Seminar to a full-year fellowship for the ~10 most promising candidates.
Crucially, the selection for continuation into the year-long fellowship will happen because of collaborative excellence, not despite it. We’re looking for participants who help others learn, who integrate across disciplines, and who build rather than hoard knowledge. The goal extends beyond producing ten individual researchers to creating a cohesive network that continues collaborating after the month ends, whether at CEEALAR or elsewhere.
The full-year fellows will be selected partly on how well they collaborated with other participants; actively encouraging collaboration should help preserve the collaborative spirit of the environment despite the potential adversariality arising from competition for the extended fellowship. To encourage ambitious approaches that are not guaranteed to work, we do not expect novel, promising research outputs within the program's one-year time frame; such outputs would, however, be a very welcome surprise.
We are particularly — but by no means exclusively — interested in people who have not yet had a chance to engage in depth with the AI alignment problem and AI existential risk.
You can find more information in the Google Doc and in the Manifund post.
Interested?
If you are interested, please fill out this form by 8 March so that we can schedule an interview with you. If you have any questions, send them by replying to this message or include them in the form. The sooner we receive your application, the greater the chance of an early response and acceptance.
We have some budget for covering the costs of travel for those who need it the most. Accommodation and daily catering with high-quality food (including vegan and vegetarian options) will be provided. The seminar is free of charge.
Finally, if you know someone who you think would make a good participant (or mentor), let us know: send us their contact info along with a short explanation of why they would be a good fit.

