Re: why our current rate of spending on AI safety is "low." At least for now, the main reason is lack of staff capacity! We're putting a ton of effort into hiring (see here) but are still not finding as many qualified candidates for our AI roles as we'd like. If you want our AI safety spending to grow faster, please encourage people to apply!
I'll also note that GCRs was the original name for this part of Open Phil, e.g. see this post from 2015 or this post from 2018.
Holden has been working on independent projects, e.g. related to RSPs; the AI teams at Open Phil no longer report to him and he doesn't approve grants. We all still collaborate to some degree, but new hires shouldn't e.g. expect to work closely with Holden.
We fund a lot of groups and individuals, and they hold a lot of different (and sometimes contradictory) policy opinions, so the short answer is "yes." In general, I really did mean the "tentative" in my 12 tentative ideas for US AI policy, and the other caveats near the top are also genuine.
That said, we hold some policy intuitions more confidently than others, and if someone disagreed pretty thoroughly with our overall approach and also couldn't make a persuasive case that their alternative approach would be better for x-risk reduction, then they might not be a good fit for the team.
Indeed. There aren't hard boundaries between the various OP teams that work on AI, and people whose reporting line is on one team often do projects for or with a different team, or in another team's "jurisdiction." We just try to communicate about it a lot, and our team leads aren't very possessive about their territory: we all want to get the best stuff done!
The hiring is more incremental than it might seem. As explained above, Ajeya and I started growing our teams earlier via non-public rounds, and are now just continuing to hire. Claire and Andrew have been hiring regularly for their teams for years, and are also just continuing to hire. The GCRCP team only came into existence a couple of months ago and so is hiring for the first time. We simply chose to combine all these hiring efforts into one round because that makes things more efficient on the backend, especially given that many candidates might be a fit for roles on multiple teams.
The technical folks leading our AI alignment grantmaking (Daniel Dewey and Catherine Olsson) left to do more "direct" work elsewhere a while back, and Ajeya only switched from a research focus (e.g. the Bio Anchors report) to an alignment grantmaking focus late last year. She did some private recruiting early this year, which resulted in Max Nadeau joining her team very recently, but she'd like to hire more. So the answer to "Why now?" on alignment grantmaking is "Ajeya started hiring soon after she switched into a grantmaking role. Before that, our initial alignment grantmakers left, and it's been hard to find technical folks who want to focus on grantmaking rather than on more directly technical work."
Re: the governance team. I've led AI governance grantmaking at Open Phil since ~2019, but for a few years our strategy felt very unclear and our strategic priorities shifted rapidly, so it seemed risky to hire new people into roles that might go away through no fault of their own as our strategy changed. In retrospect, this was a mistake and I wish we'd started to grow the team at least as early as 2021. By 2022 I was finally forced into a situation of "Well, even if it's risky to take people on, there is just an insane amount of stuff to do and I don't have time for ~any of it, so I need to hire." Then I ran a couple of non-public hiring rounds, which resulted in recent new hires Alex Lawsen, Trevor Levin, and Julian Hazell. But we still need to hire more; all of us are already overbooked and constantly turning down opportunities for lack of bandwidth.
Recently, I've encountered an increasing number of misconceptions, in rationalist and effective altruist spaces, about what Open Philanthropy's Global Catastrophic Risks (GCR) team does or doesn't fund and why, especially re: our AI-related grantmaking. So, I'd like to briefly clarify a few things:
I hope these clarifications are helpful, and lead to fruitful discussion, though I don't expect to have much time to engage with comments here.