
Applications are open for the ML Alignment & Theory Scholars (MATS) Summer 2025 Program, running Jun 16-Aug 22, 2025. First-stage applications are due Apr 18!

MATS is a twice-yearly, 10-week AI safety research fellowship program operating in Berkeley, California, with an optional 6-12 month extension program for select participants. Scholars are supported with a research stipend, shared office space, seminar program, support staff, accommodation, travel reimbursement, and computing resources. Our mentors come from a variety of organizations, including Anthropic, Google DeepMind, OpenAI, Redwood Research, GovAI, UK AI Security Institute, RAND TASP, UC Berkeley CHAI, Apollo Research, AI Futures Project, and more! Our alumni have been hired by top AI safety teams (e.g., at Anthropic, GDM, UK AISI, METR, Redwood, Apollo), founded research groups (e.g., Apollo, Timaeus, CAIP, Leap Labs), and maintain a dedicated support network for new researchers.

If you know anyone who you think would be interested in the program, please recommend that they apply!

Program details

MATS is an educational seminar and independent research program (generally 40 h/week) in Berkeley, CA that aims to provide talented scholars with talks, workshops, and research mentorship in the fields of AI alignment, security, and governance, and connect them with the San Francisco Bay Area AI alignment research community. MATS provides scholars with housing in Berkeley, CA, as well as travel support, a co-working space, and a community of peers. The main goal of MATS is to help scholars develop as AI alignment researchers. You can read more about our theory of change here.

Based on individual circumstances, we may be willing to alter the time commitment of the program and arrange for scholars to leave or start early. Please tell us your availability when applying. Our tentative timeline for the MATS Summer 2025 program is below.

Scholars will receive a USD 12k stipend from AI Safety Support for completing the Training and Research Phases.

Applications (now!)

Applications open: Mar 19

Applications are due: Apr 18

Note: Neel Nanda's applications follow a modified schedule and are now closed.

Research phase (Jun 16-Aug 22)

The core of MATS is a two-month Research Phase. During this Phase, each scholar spends at least one hour a week working with their mentor, with more frequent communication via Slack. Mentors vary considerably in terms of their:

  • Influence on project choices;
  • Attention to low-level details vs. high-level strategies;
  • Emphasis on outputs vs. processes;
  • Availability for meetings.

Our Research Management team complements mentors by offering dedicated 1-1 check-ins, research coaching, debugging, and general executive help to unblock research progress and accelerate researcher development.

Educational seminars and workshops will be held 2-3 times per week. We also organize multiple networking events to acquaint scholars with researchers in the SF Bay Area AI alignment community.

Research milestones

Scholars complete two milestones during the Research Phase. The first is a Research Plan outlining a threat model or risk factor, a theory of change, and a plan for their research; this document guides their work for the remainder of the program. The second is a ten-minute Poster Presentation at the end-of-program research symposium, which is attended by members of the SF Bay Area AI alignment community.

Community at MATS

The Research Phase provides scholars with a community of peers, who share an office, meals, and housing. In contrast to pursuing independent research remotely, working in a community grants scholars easy access to future collaborators, a deeper understanding of other research agendas, and a social network in the AI safety community. Scholars also receive support from full-time Community Managers.

In the past, each week of the Research Phase included at least one social event, such as a party, game night, movie night, or hike. Weekly lightning talks provided scholars with an opportunity to share their research interests in an informal, low-stakes setting. Outside of work, scholars organized social activities, including road trips to Yosemite, visits to San Francisco, pub outings, weekend meals, and even a skydiving trip.

Extension phase

At the conclusion of the Research Phase, scholars can apply to continue their research in a 6-12 month Extension Phase, in London by default. Acceptance decisions are largely based on mentor endorsement and double-blind review of the mid-program Research Plan milestone. By this phase, we expect scholars to pursue their research with high autonomy.

Post-MATS

After completing the program, MATS alumni have been hired by top AI safety teams (e.g., at Anthropic, GDM, UK AISI, METR, Redwood, Apollo), founded research groups (e.g., Apollo, Timaeus, CAIP, Leap Labs), and built a dedicated support network for new researchers.

You can read more about MATS alumni here.

Who should apply?

Our ideal applicant has:

  • An understanding of the AI safety research landscape equivalent to having completed AI Safety Fundamentals' Alignment Course (if you are accepted into the program but have not previously completed this course, you are expected to do so before the Training Phase begins);
  • Previous experience with technical research (e.g. ML, CS, math, physics, neuroscience, etc.), generally at a postgraduate level; or
  • Previous policy research experience or a background conducive to AI governance (e.g. government positions, technical background, strong writing skills, AI forecasting knowledge, completed AISF Governance Course);
  • Strong motivation to pursue a career in AI safety research.

Even if you do not meet all of these criteria, we encourage you to apply! Several past scholars applied without strong expectations and were accepted.

Applying from outside the US

Scholars from outside the US can apply for B-1 visas (further information here) for the Research Phase. Scholars from Visa Waiver Program (VWP) Designated Countries can instead apply to the VWP via the Electronic System for Travel Authorization (ESTA), which is processed in three days. Scholars who receive a B-1 visa can stay up to 180 days in the US, while scholars accepted into the VWP can stay up to 90 days. Please note that B-1 visa approval times can be significantly longer than ESTA approval times, depending on your country of origin.

How to apply

Applications are now open. Submissions for most mentors are due on Apr 18.

Candidates first fill out a general application, then apply to work under a particular mentor who will independently review their application. Applications are evaluated primarily based on responses to mentor questions and prior relevant research experience. Information about our mentors' research agendas and application questions can be found on the MATS website.

Before applying, you should:

  • Read through the descriptions and agendas of each stream and the associated candidate selection questions;
  • Prepare your answers to the questions for streams you’re interested in applying to. These questions can be found on the application;
  • Prepare your LinkedIn or resume.

The candidate selection questions can be quite hard, depending on the mentor! Make sure you allow adequate time to complete your application. A strong application to one mentor may be of higher value than moderate applications to several mentors (though each application will be assessed independently).

If you have any questions about the program or application process, contact us at applications@matsprogram.org. Sign up here!
