
Applications have opened for the Summer 2023 Cohort of the SERI ML Alignment Theory Scholars Program! Our mentors include Alex Turner, Dan Hendrycks, Daniel Kokotajlo, Ethan Perez, Evan Hubinger, Janus, Jeffrey Ladish, Jesse Clifton, John Wentworth, Lee Sharkey, Neel Nanda, Nicholas Kees Dupuis, Owain Evans, Victoria Krakovna, and Vivek Hebbar.

Applications are due on May 7, 11:59 pm PT. We encourage prospective applicants to fill out our interest form (~1 minute) to receive program updates and application deadline reminders! You can also recommend that someone apply to MATS, and we will reach out and share our application with them.

Program details

SERI MATS is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, and connect them with the Berkeley alignment research community. Additionally, MATS provides scholars with housing and travel, a co-working space, and a community of peers. The main goal of MATS is to help scholars develop as alignment researchers.

Timeline

Based on individual circumstances, we may be willing to alter the time commitment of the scholars program and allow scholars to leave or start early. Please tell us your availability when applying. Our tentative timeline for applications and the MATS Summer 2023 program is below.

Pre-program

  • Application release: Apr 8
  • Application due date: May 7, 11:59 pm PT
  • Acceptance decisions: Mid-Late May

Summer program

The Summer 2023 Cohort starts in early Jun and consists of three phases:

  • Training phase: Early-Late Jun
  • Research phase: Jul 3-Aug 31
  • Extension phase: ~Sep-Dec

Training phase

A 4-week online training program (10 h/week for two weeks, then 40 h/week for two weeks). Scholars receive a stipend for completing this part of MATS (historically, $6k).

Scholars whose applications are accepted join the training phase. Mentors work on various alignment research agendas — when a mentor accepts a scholar, the scholar is considered to have entered that mentor’s specific “research stream.” At the end of the training phase, scholars will, by default, transition into the research phase.

MATS places a strong emphasis on education in addition to fostering independent research. The training program includes an advanced alignment research curriculum, mentor-specific reading lists, workshops on model-building and rationality, and more.

Research phase

A 2-month in-person educational seminar and independent research program in Berkeley, California for select scholars (40 h/week). Scholars receive a stipend for completing this part of MATS (historically, $16k).

During the research phase, each scholar spends ~1-2 hours/week working with their mentor, with more frequent communication via Slack. Scholars' research directions will typically be chosen through a collaborative process with their mentors, and scholars are expected to develop their independent research direction as the program continues. Educational seminars and workshops will be held 2-3 times per week, similar to our past Summer 2022 and Winter 2022 seminar programs.

The extent of mentor support will vary depending on the project and the mentor. Regardless of the specific project or mentorship direction, we encourage scholars to make full use of our Scholar Support program to maximize the value of their experience at MATS. Our Scholar Support team offers dedicated 1-1 check-ins, research coaching, debugging, and general executive help to unblock research progress and accelerate researcher development.

Community at MATS

In contrast to doing independent research remotely, the MATS research phase provides scholars with a community of peers. During the research phase, scholars work out of a shared office, live in communal housing, and are supported by a full-time Community Manager.

Working in a community of independent researchers gives scholars easy access to future collaborators, a deeper understanding of other alignment agendas, and a social network in the alignment community.

The Winter 2022 program included workshops (automated research tools, technical writing, research strategy), scholar-led study groups (mechanistic interpretability, linear algebra, learning theory), weekly lightning talks, and impromptu office activities like group-jailbreaking Bing chat. Outside of work, scholars organized social activities, including road trips to Yosemite, visits to San Francisco, and joining ACX meetups.

Extension phase

Pending mentor and funder review, scholars have the option to continue their research in a subsequent four-month Autumn 2023 Cohort, likely in London.

Post-program

MATS aims to produce researchers who continue to contribute to AI alignment after completing the program. MATS alumni have published AI safety research, joined alignment organizations (e.g., Anthropic, MIRI), and founded an alignment research organization. You can read more about MATS alumni here.

Who should apply?

Our ideal applicant has:

  • an understanding of the AI alignment research landscape equivalent to having completed the AGI Safety Fundamentals AI Alignment Course;
  • previous experience with technical research (e.g., ML, CS, math, physics, or neuroscience), generally at a postgraduate level;
  • strong motivation to pursue a career in AI alignment research.

Even if you do not meet all of these criteria, we encourage you to apply! Several past scholars applied without strong expectations and were accepted.

Attending SERI MATS if you are not from the United States

Scholars from outside the US can apply for B-1 visas (further information here) for the Research phase. Scholars from Visa Waiver Program (VWP) Designated Countries can instead apply to the VWP via the Electronic System for Travel Authorization (ESTA), which is processed in three days. Scholars accepted into the VWP can stay up to 90 days in the US, while scholars who receive a B-1 visa can stay up to 180 days. Please note that B-1 visa approval times can be significantly longer than ESTA approval times, depending on your country of origin.

How to apply

Applications are now open! Submissions are due May 7, 11:59 pm PT. We encourage prospective applicants to fill out our interest form (~1 minute) to receive program updates and application deadline reminders! You can also recommend that someone apply, and we will reach out to them and share our application.

SERI MATS runs several concurrent streams, each for a different alignment research agenda. You can view all of the available agendas on the SERI MATS website. To apply for a stream, please fill out our application.

Before applying, you should:

  • Read through the descriptions and agendas of each stream and the associated candidate selection questions.
  • Prepare your answers to the questions for streams you’re interested in applying to. These questions can be found on our website.
  • Prepare your LinkedIn profile or resume.

Applications are evaluated primarily based on responses to mentor questions and prior relevant research experience.

The candidate selection questions can be quite hard, depending on the mentor! Allow sufficient time to apply to your chosen stream(s). A strong application to one stream may be more valuable than moderate applications to several streams (though each application will be assessed independently).

Attend our application office hours

We hold office hours for prospective applicants to clarify questions about the MATS program and application process. Before attending office hours, we ask that applicants read this post and our FAQ in full.

You can add the events to your Google Calendar. Office hours will be held via this Zoom link at the following times:

  • Wed 12 Apr, 10 am-12 pm PT;
  • Wed 12 Apr, 2 pm-4 pm PT;
  • Wed 3 May, 10 am-12 pm PT;
  • Wed 3 May, 2 pm-4 pm PT.

The MATS program is a joint initiative by the Stanford Existential Risks Initiative and the Berkeley Existential Risk Initiative, with support from Conjecture and AI Safety Support.

Comments



[Crossposting from LessWrong]

 

I wouldn't be able to start until October (I'm a full-time student, might be working on my thesis, and have at least one exam to write during the summer); should I still apply?

I am otherwise very interested in the SERI MATS program and expect to be a strong applicant in other ways.

We hope to hold another cohort starting in Nov. However, applying for the summer cohort might be good practice, and if the mentor is willing, you could just defer to winter!
