Ryan Kidd

Co-Director @ MATS
474 karma · Joined Sep 2021 · Working (0-5 years) · Berkeley, CA, USA
matsprogram.org

Bio

Give me feedback! :)

Current

  • Co-Director @ MATS

Past

  • Ph.D. in Physics from the University of Queensland (2017-2022)
  • Group organizer at Effective Altruism UQ (2018-2021)

Comments (25)

Answer by Ryan Kidd · Mar 21, 2024

MATS is now hiring for three roles!

  • Program Generalist (London) (1 hire, starting ASAP);
  • Community Manager (Berkeley) (1 hire, starting Jun 3);
  • Research Manager (Berkeley) (1-3 hires, starting Jun 3).

We are generally looking for candidates who:

  • Are excited to work in a fast-paced environment and are comfortable switching responsibilities and projects as the needs of MATS change;
  • Want to help the team with high-level strategy;
  • Are self-motivated and can take on new responsibilities within MATS over time; and
  • Care about what is best for the long-term future, independent of MATS’ interests.

Please apply via this form and share it with your networks.

Thanks for publishing this, Arb! I have some thoughts, mostly pertaining to MATS:

  1. MATS believes a large part of our impact comes from accelerating researchers who would likely enter AI safety anyway, but would otherwise take significantly longer to spin up as competent researchers, rather than from converting people into AIS researchers. MATS highly recommends that applicants have already completed AI Safety Fundamentals, and most of our applicants come from personal recommendations or AISF alumni (though we are considering better-targeted advertising to professional engineers and established academics). Here is a simplified model of the AI safety technical research pipeline as we see it.

    Why do we emphasize acceleration over conversion? Because we think that producing a researcher takes a long time (with a high drop-out rate), often requires apprenticeship (including illegible knowledge transfer) with a scarce group of mentors (with high barrier to entry), and benefits substantially from factors such as community support and curriculum. Additionally, MATS' acceptance rate is ~15% and many rejected applicants are very proficient researchers or engineers, including some with AI safety research experience, who can't find better options (e.g., independent research is worse for them). MATS scholars with prior AI safety research experience generally believe the program was significantly better than their counterfactual options, or was critical for finding collaborators or co-founders (alumni impact analysis forthcoming). So, the appropriate counterfactual for MATS and similar programs seems to be, "Junior researchers apply for funding and move to a research hub, hoping that a mentor responds to their emails, while orgs still struggle to scale even with extra cash."
  2. The "push vs. pull" model seems to neglect that, e.g., many MATS scholars had highly paid roles in industry (or de facto offers, given their qualifications) and chose to accept stipends at $30-50/h because working on AI safety is intrinsically a "pull" for a subset of talent and there were no better options. Additionally, MATS stipends are basically equivalent to LTFF funding; scholars are effectively self-employed as independent researchers, albeit with mentorship, operations, research management, and community support. Also, 63% of past MATS scholars have applied for funding to continue as independent researchers for 4+ months immediately post-program as part of our extension program (many others go back to finish their PhDs or are hired), and 85% of those have been funded. I would guess that the median MATS scholar is slightly above the level of the median LTFF grantee from 2022 in terms of research impact, particularly given the boost they give to a mentor's research.
  3. Comparing the cost of funding a marginal good independent researcher ($80k/year) to the cost of producing a good new researcher ($40k) seems like a false equivalence if you can't have one without the other. I believe the most taut constraint on producing more AIS researchers is generally training/mentorship, not money. Even wizard software engineers generally need an on-ramp for a field as pre-paradigmatic and illegible as AI safety. If all of MATS' money instead went to the LTFF to support further independent researchers, I believe that substantially less impact would be generated. Many LTFF-funded researchers have enrolled in MATS! Caveat: you could probably hire e.g. Terry Tao for some amount of money, but this amount would likely be very large. Side note: independent researchers are likely cheaper than scholars in managed research programs or employees at AIS orgs because the latter two carry overhead costs that benefit researcher output.
  4. Some of the researchers who passed through AISC later did MATS. Similarly, several researchers who did MLAB or REMIX later did MATS. It's often hard to appropriately attribute Shapley value to individual elements of the pipeline (see the toy sketch after this list), so I recommend assessing orgs that address different components of the pipeline by how well they achieve their role, and distributing funds between elements of the pipeline based on how much each is constraining the flow of new talent to later sections (anchored by elasticity to funding). For example, I believe that MATS and AISC should be assessed by their effectiveness (including cost, speedup, and mentor time) at converting "informed talent" (i.e., understands the scope of the problem) into "empowered talent" (i.e., can iterate on solutions and attract funding/get hired). That said, MATS aims to improve our advertising towards established academics and software engineers, which might bypass the pipeline in the diagram above. Side note: I believe that converting "unknown talent" into "informed talent" is generally much cheaper than converting "informed talent" into "empowered talent."
  5. Several MATS mentors (e.g., Neel Nanda) credit the program for helping them develop as research leads. Similarly, several MATS alumni have credited AISC (and SPAR) for helping them develop as research leads, similar to the way some postdocs or PhD students take on supervisory roles on the way to a professorship. I believe the "carrying capacity" of the AI safety research field is largely bottlenecked on good research leads (i.e., people who can scope and lead useful AIS research projects), especially given how many competent software engineers are flooding into AIS. It seems a mistake not to account for this source of impact in this review.
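
For concreteness, here's a minimal sketch (in Python) of the kind of Shapley-style attribution I have in mind. The three pipeline elements and the characteristic function v(S), i.e., the counterfactual researchers produced by each subset S of programs, are made-up numbers for illustration only, not real MATS/AISC data; the point is just that credit for a shared pipeline should be averaged over all the orders in which programs could "join", rather than handed entirely to whichever program touched a researcher last.

```python
from itertools import permutations

# Toy characteristic function: v(S) = counterfactual researchers produced by the
# subset S of pipeline programs. All numbers are made up for illustration only.
PIPELINE = ("AISF", "AISC", "MATS")
V = {
    frozenset(): 0,
    frozenset({"AISF"}): 2,
    frozenset({"AISC"}): 1,
    frozenset({"MATS"}): 3,
    frozenset({"AISF", "AISC"}): 4,
    frozenset({"AISF", "MATS"}): 7,
    frozenset({"AISC", "MATS"}): 5,
    frozenset({"AISF", "AISC", "MATS"}): 10,
}

def shapley_values(players, v):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            totals[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: t / len(orders) for p, t in totals.items()}

print(shapley_values(PIPELINE, V))
# With these toy numbers: AISF = 3.5, AISC = 2.0, MATS = 4.5 (summing to the full
# pipeline's value of 10), even though MATS is the last program a researcher touches.
```

Of course, the hard part in practice is estimating v(S), which is why I'd rather assess each program against its own role and allocate funding by wherever the pipeline is currently most constrained.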

Cheers, Nick! We decided to change the title to "retrospective" based on this and some LessWrong comments.

Answer by Ryan Kidd · Nov 15, 2023

TL;DR: MATS could support another 10-15 scholars at $21k/scholar with seven more high-impact mentors (Anthropic, DeepMind, Apollo, CHAI, CAIS)

The ML Alignment & Theory Scholars (MATS) Program is a twice-yearly educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, and to connect them with the Berkeley AI safety research community.

MATS helps expand the talent pipeline for AI safety research by empowering scholars to work on AI safety at existing research teams, found new research teams, and pursue independent research. To this end, MATS connects scholars with research mentorship and funding, and provides a seminar program, office space, housing, research coaching, networking opportunities, community support, and logistical support to scholars. MATS supports mentors with logistics, advertising, applicant selection, and complementary scholar support and research management systems, greatly reducing the barriers to research mentorship.

The Winter 2023-24 Program will run Jan 8-Mar 15 in Berkeley, California and feature seminar talks from leading AI safety researchers, workshops on research strategy, and networking events with the Bay Area AI safety community. We currently have funding for ~50 scholars and 23 mentors, but could easily use more.

We are currently funding-constrained and accepting donations. We would love to include up to seven additional interested mentors from Anthropic, Apollo Research, CAIS, Google DeepMind, UC Berkeley CHAI, and more, with up to 10-15 additional scholars at $21k/scholar (i.e., roughly $210k-$315k in additional funding).

Buck Shlegeris, Ethan Perez, Evan Hubinger, and Owain Evans are mentoring in both programs. The links show their MATS projects, "personal fit" for applicants, and (where applicable) applicant selection questions, designed to mimic the research experience.

Astra seems like an obviously better choice for applicants principally interested in:

  • AI governance: MATS has no AI governance mentors in the Winter 2023-24 Program, whereas Astra has Daniel Kokotajlo, Richard Ngo, and associated staff at ARC Evals and Open Phil;
  • Worldview investigations: Astra has Ajeya Cotra, Tom Davidson, and Lukas Finnveden, whereas MATS has no Open Phil mentors;
  • ARC Evals: While both programs feature mentors working on evals, only Astra is working with ARC Evals;
  • AI ethics: Astra is working with Rob Long.

MATS has the following features that might be worth considering:

  1. Empowerment: Emphasis on empowering scholars to develop as future "research leads" (think accelerated PhD-style program rather than a traditional internship), including research strategy workshops, significant opportunities for scholar project ownership (though the extent of this varies between mentors), and a 4-month extension program;
  2. Diversity: Emphasis on a broad portfolio of AI safety research agendas and perspectives with a large, diverse cohort (50-60) and comprehensive seminar program;
  3. Support: Dedicated and experienced scholar support + research coach/manager staff and infrastructure;
  4. Network: Large and supportive alumni network that regularly sparks research collaborations and AI safety start-ups (e.g., Apollo, Leap Labs, Timaeus, Cadenza, CAIP);
  5. Experience: We have run successful research cohorts with 30, 58, and 60 scholars, plus three extension programs with about half as many participants.

Speaking on behalf of MATS, we offered support to the following AI governance/strategy mentors in Summer 2023: Alex Gray, Daniel Kokotajlo, Jack Clark, Jesse Clifton, Lennart Heim, Richard Ngo, and Yonadav Shavit. Of these people, only Daniel and Jesse decided to be included in our program. After reviewing the applicant pool, Jesse took on three scholars and Daniel took on zero.

I think that one's level of risk aversion in grantmaking should depend on the upside and the downside risk of grantees' action space. I see a potentially high upside to AI safety standards or compute governance projects that are specific, achievable, and verifiable and are rigorously determined by AI safety and policy experts. I see a potentially high downside to low-context and high-bandwidth efforts to slow down AI development that are unspecific, unachievable, or unverifiable and generate controversy or opposition that could negatively affect later, better efforts.

One might say, "If the default is pretty bad, surely there are more ways to improve the world than harm it, and we should fund a broad swathe of projects!" I think that the current projects to determine specific, achievable, and verifiable safety standards and compute governance levers are actually on track to be quite good, and we have a lot to lose through high-bandwidth, low-context campaigns.

Thanks Joseph! Adding to this, our ideal applicant has:

  • an understanding of the AI alignment research landscape equivalent to having completed the AGI Safety Fundamentals course;
  • previous experience with technical research (e.g. ML, CS, maths, physics, neuroscience, etc.), ideally at a postgraduate level;
  • strong motivation to pursue a career in AI alignment research, particularly to reduce global catastrophic risk.

MATS alumni have gone on to publish safety research (LW posts here), join alignment research teams (including at Anthropic and MIRI), and found alignment research organizations (including a MIRI team, Leap Labs, and Apollo Research). Our alumni spotlight is here.

  • We broadened our advertising approach for the Summer 2023 Cohort, including a Twitter post and a shout-out on Rob Miles' YouTube and TikTok channels. We expected some lowering of average applicant quality as a result but have yet to see a massive influx of applicants from these sources. We additionally focused more on targeted advertising to AI safety student groups, given their recent growth. We will publish updated applicant statistics after our applications close.
  • In addition to applicant selection and curriculum elements, our Scholar Support staff, introduced in the Winter 2022-23 Cohort, supplement the mentorship experience by providing 1-1 research strategy and unblocking support for scholars. This program feature aims to:
    • Supplement and augment mentorship with 1-1 debugging, planning, and unblocking;
    • Allow air-gapping of evaluation and support, improving scholar outcomes by resolving issues they would not take to their mentor;
    • Help solve scholars’ problems, freeing up more time for research.
  • Defining "good alignment research" is very complicated and merits a post of its own (or two, if you also include the theories of change that MATS endorses). We are currently developing scholar research ability through curriculum elements focused on breadth, depth, and epistemology (the "T-model of research").
  • Our Alumni Spotlight includes an incomplete list of projects we highlight. Many more past scholar projects seem promising to us but have yet to meet our criteria for inclusion here. Watch this space.
  • Since Summer 2022, MATS has explicitly been trying to parallelize the field of AI safety as much as is prudent, given the available mentorship and scholarly talent. In longer-timeline worlds, more careful serial research seems prudent, as growing the field rapidly is a risk for the reasons outlined in the above article. We believe that MATS' goals have grown more important as timelines have shortened (though MATS management has not updated much on timelines, as they already seemed fairly short in our estimation).
  • MATS would love to support senior research talent interested in transitioning into AI safety! Postdocs generally make up about 10% of our scholars, and we would like this proportion to rise. Currently, our advertising strategy assumes that the broader AI safety community adequately targets these populations (which seems not to be the case), and it might change for future cohorts.