Give me feedback! :)
TL;DR: MATS is fundraising for Summer 2025 and could support more scholars at $35k/scholar
Ryan Kidd here, MATS Co-Executive Director :)
The ML Alignment & Theory Scholars (MATS) Program is a twice-yearly independent research and educational seminar program that aims to provide talented scholars with talks, workshops, and research mentorship in the fields of AI alignment, interpretability, and governance, and to connect them with the Berkeley AI safety research community. The Winter 2024-25 Program will run Jan 6-Mar 14, 2025, and our Summer 2025 Program is set to begin in June 2025. We are currently accepting donations for our Summer 2025 Program and beyond. We would love to include additional interested mentors and scholars at $35k/scholar. We have substantially benefited from individual donations in the past and were able to support ~11 additional scholars thanks to Manifund donations.
MATS helps expand the talent pipeline for AI safety research by empowering scholars to work on AI safety at existing research teams, found new research teams, and pursue independent research. To this end, MATS connects scholars with research mentorship and funding, and provides a seminar program, office space, housing, research management, networking opportunities, community support, and logistical support to scholars. MATS supports mentors with logistics, advertising, applicant selection, and research management, greatly reducing the barriers to research mentorship. Immediately following each program is an optional extension phase in London where top-performing scholars can continue research with their mentors. For more information about MATS, please see our recent reports: Alumni Impact Analysis, Winter 2023-24 Retrospective, Summer 2023 Retrospective, and Talent Needs of Technical AI Safety Teams.
You can see further discussion of our program on our website and Manifund page. Please feel free to AMA in the comments here :)
MATS is now hiring for three roles!
We are generally looking for candidates who:
Please apply via this form and share via your networks.
TL;DR: MATS could support another 10-15 scholars at $21k/scholar with seven more high-impact mentors (Anthropic, DeepMind, Apollo, CHAI, CAIS)
The ML Alignment & Theory Scholars (MATS) Program is a twice-yearly educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, and to connect them with the Berkeley AI safety research community.
MATS helps expand the talent pipeline for AI safety research by empowering scholars to work on AI safety at existing research teams, found new research teams, and pursue independent research. To this end, MATS connects scholars with research mentorship and funding, and provides a seminar program, office space, housing, research coaching, networking opportunities, community support, and logistical support to scholars. MATS supports mentors with logistics, advertising, applicant selection, and complementary scholar support and research management systems, greatly reducing the barriers to research mentorship.
The Winter 2023-24 Program will run Jan 8-Mar 15 in Berkeley, California, and will feature seminar talks from leading AI safety researchers, workshops on research strategy, and networking events with the Bay Area AI safety community. We currently have funding for ~50 scholars and 23 mentors, but could easily use more.
We are currently funding-constrained and accepting donations. We would love to include up to seven additional interested mentors from Anthropic, Apollo Research, CAIS, Google DeepMind, UC Berkeley CHAI, and more, with up to 10-15 additional scholars at $21k/scholar.
Buck Shlegeris, Ethan Perez, Evan Hubinger, and Owain Evans are mentoring in both programs. The links show their MATS projects, "personal fit" for applicants, and (where applicable) applicant selection questions, designed to mimic the research experience.
Astra seems like an obviously better choice for applicants principally interested in:
MATS has the following features that might be worth considering:
Speaking on behalf of MATS, we offered support to the following AI governance/strategy mentors in Summer 2023: Alex Gray, Daniel Kokotajlo, Jack Clark, Jesse Clifton, Lennart Heim, Richard Ngo, and Yonadav Shavit. Of these people, only Daniel and Jesse decided to be included in our program. After reviewing the applicant pool, Jesse took on three scholars and Daniel took on zero.
Thanks for publishing this, Arb! I have some thoughts, mostly pertaining to MATS:
Why do we emphasize acceleration over conversion? Because we think that producing a researcher takes a long time (with a high dropout rate), often requires apprenticeship (including illegible knowledge transfer) with a scarce group of mentors (with high barriers to entry), and benefits substantially from factors such as community support and curriculum. Additionally, MATS' acceptance rate is ~15%, and many rejected applicants are very proficient researchers or engineers, including some with AI safety research experience, who can't find better options (e.g., independent research is worse for them). MATS scholars with prior AI safety research experience generally believe the program was significantly better than their counterfactual options, or was critical for finding collaborators or co-founders (alumni impact analysis forthcoming). So, the appropriate counterfactual for MATS and similar programs seems to be: "Junior researchers apply for funding and move to a research hub, hoping that a mentor responds to their emails, while orgs still struggle to scale even with extra cash."