Applications for the PIBBSS summer fellowship 2026 are now open, and I will be one of the mentors. If you want to work with me on the Learning-Theoretic AI Alignment Agenda (LTA), I recommend applying. Exceptional fellows might be offered an (additional) paid research fellowship at CORAL after the programme, depending on available funding and other circumstances.
Why Apply?
I believe that the risk of a global catastrophe due to unaligned artificial superintelligence is the most pressing problem of our time. I also believe that the LTA is our last best chance of solving the technical alignment problem. Even if governance efforts succeed, they will only buy time; we still need to use that time somehow, and the LTA is how we use it. (I also consider some other research agendas useful, but mostly inasmuch as they can be combined with the LTA.)
Fortunately, the LTA has many shovel-ready research directions that can be advanced in parallel. What we need is more researchers working on them. If you are a mathematician or theoretical computer scientist who can contribute, this is probably the most important thing you can choose to do.
Requirements
- Applicants should be seriously interested in AI alignment and at least considering AI alignment research as a long-term career path.
- The typical applicant is a PhD student or postdoc in math or computer science. I do not require official credentials, but I do require relevant skills and knowledge.
- A strong background in mathematics is necessary. Bonus points for familiarity with the fields in the LTA reading list.
- Experience in mathematical research, including proving non-trivial original theorems, is necessary.
- Experience in academic technical writing is highly desirable. A strong candidate would have at least one academic publication, preprint, or other comparable work.
- PIBBSS may also have their own (mentor-independent) acceptance criteria, for which I am not responsible.
Programme Content
The following refers to my (mentor-specific) content. The fellowship also involves additional activities; see the webpage for details.
- The participants will learn about the LTA; see the reading list and video lectures for reference (these will also serve as teaching materials).
- The participants will be required to choose a project from a list I provide. They may work solo or in a group.
- The typical project will focus on an existing idea in the LTA, and will require the participants to write an Alignment Forum post explaining the idea in more detail and/or more rigorously than any existing write-up, possibly proving some basic mathematical results needed for further investigation.
- The projects will be comparable in complexity to those completed by my MATS scholars in the past, for example: 1 2 3 4 5.
- Examples of project topics that will probably be available (not an exhaustive or precise list) include:
  - Metacognitive agents (see child comments)
  - Compositional control theory
  - Ambidistributions
  - Learnability of credal set decision rules
  - Selection theorems from Algorithmic Information Theory
  - Selection theorems from strong influence
  - Infinitary bridge transform
  - Learning in Formal Computational Realism
  - Local symmetries in game theory
  - Learnable undogmatic ontologies
  - Generalized string diagrams for credal sets
  - String machines
