TL;DR: MATS Autumn 2026 applications are now open. It's a 10-week, fully funded research fellowship (Sep 28 to Dec 4, 2026) in AI alignment, security, and governance, with mentorship from researchers at Anthropic, Google DeepMind, OpenAI, Redwood, AI Futures Project, and more. This cohort also launches two new tracks: a Founding & Field-Building track and a Biosecurity track. Fellows receive a $5,000/month stipend + $8,000/month compute budget, plus housing, meals, and travel. Apply by June 7, 2026 AoE at matsprogram.org/apply.
This is also MATS's first Autumn cohort - part of our shift to running three fellowships per year to expand capacity for AI safety research and talent development.
About MATS
MATS Research is an educational research nonprofit dedicated to solving the talent pipeline bottleneck in AI alignment and security research. We believe reducing risks from powerful AI is one of the world's most urgent and talent-constrained challenges, and that ambitious people from a wide range of backgrounds and career stages can meaningfully contribute to this work. That's why we're training the next generation of AI safety researchers and founders.
Program details
The Autumn 2026 cohort runs from September 28 to December 4, 2026, based primarily in Berkeley, California, and London, UK. Fellows receive:
- $5,000/month stipend + $8,000/month compute budget
- Office space in Berkeley or London (depending on mentor preference)
- Housing, meals, and travel covered
- J-1 visa support if needed
- Mentorship from world-class researchers and a dedicated research manager
- A close-knit cohort, regular seminars and workshops with industry experts, and an active global alumni network
- Over 80% of fellows receive an extension to continue their fellowship for 6 to 12 months with ongoing mentorship, support, and funding ($7,680/month stipend + $8,000/month compute)
Research tracks
Applicants can apply to one or more of the following tracks. Each track page describes the research agenda, the mentors involved, and what we're looking for in applicants — we encourage prospective fellows to read the relevant track pages before applying:
- Empirical
- Theory
- Strategy and Forecasting
- Policy and Governance
- Systems Security
- Biosecurity
- Founding and Field-Building
New this cohort
Two of these tracks are new for Autumn 2026:
- The Founding and Field-Building track is for founders, field-builders, and high-agency generalists looking to launch new AI safety initiatives.
- The Biosecurity track focuses on preventing catastrophic biological risk from AI.
Our record
- 527 alumni, 100+ mentors
- 200+ publications with 12,300+ citations
- 80% of alumni who graduated before 2025 are working in AI safety/security
- 10% have co-founded active AI safety startups
- 30+ initiatives founded by alumni
Who we're looking for
MATS explicitly looks for talent that traditional pipelines might overlook. We welcome technical researchers without prior ML experience who can demonstrate strong reasoning and research potential. We also encourage applications from policy professionals with strong writing skills, familiarity with governmental processes, and the technical literacy to engage with AI systems - particularly those with backgrounds in national security, cybersecurity, US-China relations, biosecurity, and/or nuclear policy.
Remote participation
While we prefer fellows to participate in person from Berkeley (with some streams based out of London), we understand this may not always be feasible, and we are open to remote participation for exceptional candidates on a case-by-case basis.
Apply by June 7, 2026 AoE → matsprogram.org/apply
It only takes 1–2 hours to apply. Please share this post with people you know who'd be a strong fit.
