Co-Director at SERI MATS (2022-present)
Ph.D. in Physics from the University of Queensland (2022)
Group organizer at Effective Altruism UQ (2017-2021)
I think that one's level of risk aversion in grantmaking should depend on both the upside and the downside risk of grantees' action space. I see potentially high upside in AI safety standards or compute governance projects that are specific, achievable, and verifiable, and that are rigorously determined by AI safety and policy experts. I see potentially high downside in low-context, high-bandwidth efforts to slow down AI development that are unspecific, unachievable, or unverifiable, and that generate controversy or opposition that could negatively affect later, better efforts.
One might say, "If the default is pretty bad, surely there are more ways to improve the world than harm it, and we should fund a broad swathe of projects!" I think that the current projects to determine specific, achievable, and verifiable safety standards and compute governance levers are actually on track to be quite good, and we have a lot to lose through high-bandwidth, low-context campaigns.
Thanks Joseph! Adding to this, our ideal applicant has:
MATS alumni have gone on to publish safety research (LW posts here), join alignment research teams (including at Anthropic and MIRI), and found alignment research organizations (including a MIRI team, Leap Labs, and Apollo Research). Our alumni spotlight is here.
Copying over the Facebook comments I just made.
Response to Kat, intended as a devil's advocate stance:
A few key background claims:
We hope to hold another cohort starting in Nov. However, applying for the summer cohort might be good practice, and if the mentor is willing, you could just defer to winter!
I'm not advocating for a stock HR department with my comment. I used "HR" as shorthand for "a community health agent who is focused on support over evaluation," which is why I didn't refer to HR departments in my post. Corporate HR seems flawed in obvious ways, though I think it's probably usually better than nothing, at least for tail risks.
In my management role, I have to juggle these responsibilities. I think an HR department should generally exist, even if management is really fair and only wants the best for the world, we promise (not bad faith, just humour).
This post is mainly explaining part of what I'm currently thinking about regarding community health in EA and at MATS. If I think of concrete, shareable examples of concerns regarding insufficient air-gapping in EA or AI safety, I'll share them here.
Yeah, I think that EA is far better at encouraging and supporting disclosure to evaluators than, for example, private industry. I also think EAs are more likely to genuinely report their failures (and I take pride in doing this myself, to the extent I'm able). However, I feel that there is still room for more support in the EA community that is decoupled from evaluation, for individuals who might benefit from it.
Speaking on behalf of MATS, we offered support to the following AI governance/strategy mentors in Summer 2023: Alex Gray, Daniel Kokotajlo, Jack Clark, Jesse Clifton, Lennart Heim, Richard Ngo, and Yonadav Shavit. Of these people, only Daniel and Jesse decided to be included in our program. After reviewing the applicant pool, Jesse took on three scholars and Daniel took on zero.