Hi everyone,

We’re hosting an Ask Me Anything session to answer questions about Open Philanthropy’s new hiring round (direct link to roles), which involves over 20 new positions across our teams working on global catastrophic risks (GCRs). 

You can start sharing questions now, and you’re welcome to keep asking questions through the end of the hiring round (11:59 pm PST on November 9th). We’ll plan to share most of our answers between the morning of Friday, October 20th and EOD on Monday, October 23rd.

Participants include:

  • Ajeya Cotra, who leads our work on technical AI safety.
  • Julian Hazell, a Program Associate in AI Governance and Policy.
  • Jason Schukraft, who leads our GCR cause prioritization team.
  • Eli Rose, a Senior Program Associate in GCR Capacity Building (formerly known as the “Effective Altruism Community Growth (Longtermism)” team).
  • Chris Bakerlee, a Senior Program Associate in Biosecurity and Pandemic Preparedness.
  • Philip Zealley, a member of the recruiting team who can answer general questions about the OP recruiting process (and this round in particular). 

They’ll be happy to answer questions about:

  • The new roles — the work they involve, the backgrounds a promising candidate might have, and so on.
  • The work of our teams — grants we’ve made, aspects of our strategy, and plans for the future.
  • Working at Open Philanthropy more broadly — what we like, what we find more difficult, what we’ve learned in the process, etc.

This hiring round is a major event for us; if you’re interested in working at Open Phil, this is a great time to apply (or ask questions here!).

To help us respond, please direct your questions at a specific team when possible. If you have multiple questions for different teams, please split them up into multiple comments.

Austin

I have this impression of OpenPhil as being the Harvard of EA orgs -- that is, it's the premier choice of workplace for many highly-engaged EAs, drawing in lots of talent, with distortionary effects on other orgs trying to hire 😅

When should someone who cares a lot about GCRs decide not to work at OP?

When should someone who cares a lot about GCRs decide not to work at OP?

I agree that there are several advantages of working at Open Phil, but I also think there are some good answers to "why wouldn't someone want to work at OP?"

Culture, worldview, and relationship with labs

Many people have an (IMO fairly accurate) impression that OpenPhil is conservative, biased toward inaction, generally prefers maintaining the status quo, and favors keeping positive relationships with labs.

As I've gotten more involved in AI policy, I've updated more strongly toward this position. While simple statements always involve a bit of gloss/imprecision, I think characterizations like "OpenPhil has taken a bet on the scaling labs", "OpenPhil is concerned about disrupting relationships with labs", and even "OpenPhil sometimes uses its influence to put pressure on orgs to not do things that would disrupt the status quo" are fairly accurate.

The most extreme version of this critique is that perhaps OpenPhil has been net negative through its explicit funding for labs and implicit contributions to a culture that funnels money and talent toward labs and other organizations that entrench a lab-friendly status quo. 

This might change as OpenPhil hires new people and plans to spend more money, but by default, I expect that OpenPhil will continue to play the "be nice to labs / don't disrupt the status quo" role in the space (in contrast to organizations like MIRI, Conjecture, FLI, the Center for AI Policy, and perhaps CAIS). 

Lots of people want to work there; replaceability

Given OP's high status, lots of folks want to work there. Some people think the difference between the "best applicant" and the "2nd best applicant" is often pretty large, but this certainly doesn't seem true in all cases.

I think if someone had, e.g., an opportunity to work at OP vs. start their own organization or do something that requires more agency/entrepreneurship, there might be a strong case for them to do the latter, since it's much less likely to happen by default.

What does the world need?

I think this is somewhat related to the first point, but I'll flesh it out in a different way.

Some people think that we need more "rowing": that is, OP's impact is clearly good, and if we just add some more capacity to the grantmakers and make more grants that look pretty similar to previous grants, we're pushing the world in a considerably better direction.

Some people think that the default trajectory is not going so well, and that this is (partially or largely) caused or maintained by the OP ecosystem. Under this worldview, one might think that adding some additional capacity to OP is not actually all that helpful in expectation. 

Instead, people with this worldview believe that projects that aim to (for example) advocate for strong regulations, engage with the media, make the public more aware of AI risk, and do other forms of direct work more focused on folks outside of the core EA community might be more impactful. 

Of course, part of this depends on how open OP will be to people "steering" from within. My expectation is that it would be pretty hard to steer OP from within: lots of smart people have tried, folks like Ajeya and Luke have clearly been thinking about these things for a long time, the culture has already been shaped by many core EAs, and there's a lot of inertia, so a random new junior person is pretty unlikely to substantially shift the org's worldview (though I could of course be wrong). 

(I began working for OP on the AI governance team in June. I'm commenting in a personal capacity based on my own observations; other team members may disagree with me.)

OpenPhil sometimes uses its influence to put pressure on orgs to not do things that would disrupt the status quo

FWIW I really don’t think OP is in the business of preserving the status quo.  People who work on AI at OP have a range of opinions on just about every issue, but I don't think any of us feel good about the status quo! People (including non-grantees) often ask us for our thoughts about a proposed action, and we’ll share if we think some action might be counterproductive, but many things we’d consider “productive” look very different from “preserving the status quo.” For example, I would consider the CAIS statement to be pretty disruptive to the status quo and productive, and people at Open Phil were excited about it and spent a bunch of time finding additional people to sign it before it was published.

Lots of people want to work there; replaceability

I agree that OP has an easier time recruiting than many other orgs, though perhaps a harder time than frontier labs. But at risk of self-flattery, I think the people we've hired would generally be hard to replace — these roles require a fairly rare combination of traits. People who have them can be huge value-adds relative to the counterfactual!

pretty hard to steer OP from within

I basically disagree with this. There are areas where senior staff have strong takes, but they'll definitely engage with the views of junior staff, and they sometimes change their minds. Also, the AI world is changing fast, and as a result our strategy has been changing fast, and there are areas full of new terrain where a new hire could really shape our strategy. (This is one way in which grantmaker capacity is a serious bottleneck.)

Wow lots of disagreement here - I'm curious what the disagreement is about, if anyone wants to explain?

I'm not officially part of the AMA but I'm one of the disagreevotes so I'll chime in.

As someone who only recently started, the vibe this post gives, that it would be hard for me to disagree with established wisdom or push the org to do things differently, and that my only role would be to 'just push out more money along the OP party line', is just miles away from what I've experienced.

If anything, I think how much ownership I've needed to take for the projects I'm working on has been the biggest challenge of starting the role. It's one that (I hope) I'm rising to, but it's hard!

In terms of how open OP is to steering from within, it seems worth distinguishing 'how likely is a random junior person to substantially shift the worldview of the org' from 'what would the experience of that person be like if they tried to'. Since before I even had an offer, Luke has repeatedly demonstrated, in how he reacts to and acts on my disagreement, that he wants and values it, and that's something I really appreciate about his management. 

Ah, sorry you got that impression from my question! I mostly meant Harvard in terms of "desirability among applicants" as opposed to "established bureaucracy". My outside impression is that a lot of people I respect a lot (like you!) made the decision to go work at OP instead of one of their many other options. And that I've heard informal complaints from leaders of other EA orgs, roughly "it's hard to find and keep good people, because our best candidates keep joining OP instead". So I was curious to learn more about OP's internal thinking about this effect.

This is a hard question to answer, because there are so many different jobs someone could take in the GCR space that might be really impactful. And while we have a good sense of what someone can achieve by working at OP, we can't easily compare that to all the other options someone might have. A comparison like "OP vs. grad school" or "OP vs. pursuing a government career" comes with dozens of different considerations that would play out differently for any specific person.

Ultimately, we hope people will consider jobs we've posted (if they seem like a good fit), and also consider anything else that looks promising to them.

I'm curious about the process that led to your salary ranges, for all the teams broadly, but especially for the technical AI safety roles, where the case for having very expensive counterfactuals (in money and otherwise) is cleanest.

When I was trying to ballpark a salary range for a position that is in some ways comparable to a grantmaking position at Open Phil, most reference jobs I considered had an upper range that's higher than OP's, especially in the Bay Area[1].

Of course it makes sense that the upper end of for-profit pay is higher, for various practical and optical reasons. But I was a bit surprised by how high the pay ranges at some other (nonprofit) reference institutions were by my lights. I distinctly recall numbers for e.g. GiveWell being much lower in the recent past (including after adjusting for inflation). And in particular, current salary ranges at reference institutions were broadly higher than OP's, despite Open Phil's work being more neglected and of similar or higher importance.

So what process did you use to come up with your salary ranges? In particular, did the algorithm take into account reference ranges in 2023, or was it (perhaps accidentally) anchored on earlier numbers from years past?

COI disclaimer: I did apply to OP so I guess that there's a small COI in the very conjunctive and unlikely world that this comment might affect my future salary.

  1. ^

    TBC, I also see lower bounds that are more similar to OP's, or in some cases much lower. But it intuitively makes sense to me that OP's hiring bar aims to be higher than that of, e.g., junior roles at most other EA orgs, or that of non-EA foundations with a much higher headcount and thus a greater ability to onboard junior people.

PhilZ

Generally, we try to compensate people in such a way that compensation is neither the main reason to be at Open Phil nor the main reason to consider leaving. We rely on market data to set compensation for each role, aiming to compete with a candidate’s “reasonable alternatives” (e.g., other foundations, universities, or high-end nonprofits; not roles like finance or tech where compensation is the main driving factor in recruiting).

Specifically, we default to using a salary survey of other large foundations (Croner) and currently target the 75th percentile, as well as offering modest upward adjustments on top of the base numbers for staff in SF and DC (where we think there are positive externalities for the org from staff being able to cowork in person, but a higher cost of living).

I can’t speak to what they’re currently doing, but historically, GiveWell has used the same salary survey; I’d guess that their Senior Research role is benchmarked to Program Officer, which is a more senior role than we’re currently posting for in this GCR round, which explains the higher compensation. I don’t know what BMGF benchmarks you are looking at, but I’d guess you’re looking at more senior positions that typically require more experience and control higher budgets at the higher end.
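
Purely as an illustration of the benchmarking logic described above (the survey numbers and the 10% location uplift below are hypothetical placeholders, not Croner data or actual Open Phil figures), the calculation roughly amounts to taking a target percentile of a benchmark survey and applying a modest location adjustment:

```python
# Hypothetical illustration of percentile-based comp benchmarking.
# The survey values and uplift percentage are made up; they are NOT
# Croner data or Open Philanthropy's actual numbers.
import statistics

def benchmark_salary(survey_salaries, percentile=75, location_uplift=0.0):
    """Target a given percentile of a benchmark survey, plus a location uplift."""
    # statistics.quantiles with n=100 returns the 1st..99th percentile cut points.
    cuts = statistics.quantiles(sorted(survey_salaries), n=100)
    base = cuts[percentile - 1]
    return base * (1 + location_uplift)

# Made-up survey of comparable roles at other large foundations.
survey = [95_000, 105_000, 110_000, 118_000, 125_000, 132_000, 140_000, 155_000]

remote_offer = benchmark_salary(survey, percentile=75)                        # 75th percentile, no uplift
sf_dc_offer = benchmark_salary(survey, percentile=75, location_uplift=0.10)  # +10% (hypothetical)

print(f"Remote benchmark: ${remote_offer:,.0f}")
print(f"SF/DC benchmark:  ${sf_dc_offer:,.0f}")
```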

That said, your point about technical AI Safety researchers at various nonprofit orgs making more than our benchmarks is something that we’ve been reflecting on internally and think does represent a relevant “reasonable alternative” for the kinds of folks that we’re aiming to hire, and so we’re planning to create a new comp ladder for technical AI Safety roles, and in the meantime have moderately increased the posted comp for the open TAIS associate and senior associate roles.

Not a question, but I was having trouble navigating earlier, so I figured I'd share a list of the roles currently on the OP page:

  • AI Governance and Policy
    • (Senior) Program Associate, Generalist
    • Senior Program Associate, US AI Policy Advocacy
    • Senior Program Associate, Technical AI Governance Mechanisms
  • Technical AI Safety
    • (Senior) Program Associate, Generalist
    • Senior Program Associate, Subfield Specialist
    • (Senior) Research Associate
    • Executive Assistant
  • Biosecurity and Pandemic Preparedness
    • Security Associate / Lead
    • Operations Associate / Lead
    • Executive Assistant
    • Research Associate / Fellow or Lead Researcher
    • (Senior) Program Associate, Generalist
    • (Senior) Program Associate, Community Manager / Grantmaker
    • (Senior) Program Associate, Life Sciences Governance Grantmaker
    • Cause X Contractor
  • Global Catastrophic Risks Capacity Building
    • (Senior) Program Associate, Generalist
    • (Senior) Program Associate, AI Safety Capacity-Building
    • (Senior) Program Associate, University Groups
    • Operations Associate / Lead
    • Chief of Staff
  • Global Catastrophic Risks Cause Prioritization
    • Cause Prioritization Research Fellow
    • Cause Prioritization Strategy Fellow

Thanks! I've added a direct link to the roles now, to reduce potential confusion.

Roughly what percent of Open Philanthropy hires are from candidates who apply without any personal connections, and roughly what percent of your hires are through connections/networking?

I don’t have specific data on this, but only a minority of our hires apply through referrals or active networking. Our process is set up to avoid favoring people who come via prior connections (e.g. by putting a lot of weight on anonymized work tests), and many of our best hires have joined without any prior connections to OP. However, we do still get a lot of value from people referring candidates to our roles, and would encourage more of this to keep expanding our networks.

It seems that OP's AI safety & gov teams have both been historically capacity-constrained. Why the decision to hire for these roles now (rather than earlier)?

(FYI to others - I've just seen Ajeya's very helpful writeup, which has already partially answered this question!)

The technical folks leading our AI alignment grantmaking (Daniel Dewey and Catherine Olsson) left to do more "direct" work elsewhere a while back, and Ajeya only switched from a research focus (e.g. the Bio Anchors report) to an alignment grantmaking focus late last year. She did some private recruiting early this year, which resulted in Max Nadeau joining her team very recently, but she'd like to hire more. So the answer to "Why now?" on alignment grantmaking is "Ajeya started hiring soon after she switched into a grantmaking role. Before that, our initial alignment grantmakers left, and it's been hard to find technical folks who want to focus on grantmaking rather than on more thoroughly technical work."

Re: the governance team. I've led AI governance grantmaking at Open Phil since ~2019, but for a few years we felt very unclear about what our strategy should be, and our strategic priorities shifted rapidly, and it felt risky to hire new people into a role that might go away through no fault of their own as our strategy shifted. In retrospect, this was a mistake and I wish we'd started to grow the team at least as early as 2021. By 2022 I was finally forced into a situation of "Well, even if it's risky to take people on, there is just an insane amount of stuff to do and I don't have time for ~any of it, so I need to hire." Then I did a couple of non-public hiring rounds which resulted in recent new hires Alex Lawsen, Trevor Levin, and Julian Hazell. But we still need to hire more; all of us are already overbooked and constantly turning down opportunities for lack of bandwidth.

To add to this, I'm confused by your choice to grow these teams quite abruptly as opposed to incrementally. What's your underlying reasoning?

The hiring is more incremental than it might seem. As explained above, Ajeya and I started growing our teams earlier via non-public rounds, and are now just continuing to hire. Claire and Andrew have been hiring regularly for their teams for years, and are also just continuing to hire. The GCRCP team only came into existence a couple months ago and so is hiring for that team for the first time. We simply chose to combine all these hiring efforts into one round because that makes things more efficient on the backend, especially given that many people might be a fit for one or more roles on multiple teams.

What's the reason for the change from Longtermism to GCRs? How has this changed, or how will it change, your strategy going forward?

We decided to change the name to reflect the fact that we don't think you need to take a long-termist philosophical stance to work on AI risks and biorisks; the specific new name was chosen from among a few contenders after a survey process. The name change doesn't reflect any sharp break from how we have operated in practice for the last while, so I don't think there are any specific strategy changes it implies.

I'll also note that GCRs was the original name for this part of Open Phil, e.g. see this post from 2015 or this post from 2018.

What is Holden Karnofsky working on these days? He was writing publicly on AI for many months in a way that seemed to suggest he might start a new evals organization or a public advocacy campaign. He took a leave of absence to explore these kinds of projects, then returned as OpenPhil's Director of AI Strategy. What are his current priorities? How closely does he work with the teams that are hiring? 

Holden has been working on independent projects, e.g. related to RSPs; the AI teams at Open Phil no longer report to him and he doesn't approve grants. We all still collaborate to some degree, but new hires shouldn't e.g. expect to work closely with Holden.

I and many others I know find grantmaking rather stressful and psychologically taxing (more than average for other roles in professional EA). Common pain points include discomfort with rejecting people[1], having to make difficult tradeoffs with poor feedback loops, navigating/balancing a number of implicit and explicit commitments (not all of which you still agree with, if you ever did), having to navigate an epistemic/professional environment where people around you are heavily incentivized to manipulate you, and the constant stress of feeling behind (not all of it "real"). 

So how do people at Open Phil deal with this stress, whether individually or institutionally? And what are some character traits or personality dispositions which would not be a good fit for grantmaking roles at Open Phil[2]?

  1. ^

    In many cases, these are people with whom you or someone you know has a social or prior professional connection.

  2. ^

    In case it's helpful context, I wrote a short list of reasons someone might be a poor fit for LTFF fund chair here

Yeah, I feel a lot of this stress as well, though FWIW for me personally research was more stressful. I don't think there's any crisp institutional advice or formula for dealing with this kind of thing unfortunately. One disposition that I think makes it hard to be a grantmaker at OP (in addition to your list, which I think is largely overlapping) is being overly attached to perfection and satisfyingly clean, beautifully-justifiable answers and decisions.

Would the AI Governance & Policy group consider hiring someone in AI policy who disagreed with various policies that organizations you've funded have promoted?

For instance, multiple organizations you've funded have released papers or otherwise advocated for strong restrictions on open source AI -- would you consider hiring someone who disagrees substantially with their recommendations or with many of the specific points they raise?

We fund a lot of groups and individuals and they have a lot of different (and sometimes contradicting) policy opinions, so the short answer is "yes." In general, I really did mean the "tentative" in my 12 tentative ideas for US AI policy, and the other caveats near the top are also genuine.

That said, we hold some policy intuitions more confidently than others, and if someone disagreed pretty thoroughly with our overall approach and they also weren't very persuasive that their alternate approach would be better for x-risk reduction, then they might not be a good fit for the team.

Disclaimer: I joined OP two weeks ago in the Program Associate role on the Technical AI Safety team. I'm leaving some comments describing questions I wanted answered when assessing whether I should take the job (which, obviously, I ended up doing).

What sorts of personal/career development does the PA role provide? What are the pros and cons of this path over e.g. technical research (which has relatively clear professional development in the form of published papers, academic degrees, high-status job titles that bring public credibility)?

For me personally, research and then grantmaking at Open Phil has been excellent for my career development, and it's pretty implausible that grad school in ML or CS, or an ML engineering role at an AI company, or any other path I can easily think of, would have been comparably useful. 

If I had pursued an academic path, then assuming I was successful on that path, I would be in my first or maybe second year as an assistant professor right about now (or maybe I'd just be starting to apply for such a role). Instead, at Open Phil, I wrote less-academic reports and posts about less established topics in a more home-grown style, gave talks in a variety of venues, talked to podcasters and journalists, and built lots of relationships in industry, academia, and the policy world in the course of funding and advising people. I am likely more noteworthy among AI companies, policymakers, and even academic researchers than I would have been if I had spent that time doing technical research in grad school and then gone for a faculty role. I additionally get to direct funding, an option which wouldn't have been easily available to me on that alternative path.

The obvious con of OP relative to a path like that is that you have to "roll your own" career path to a much greater degree. If you go to grad school, you will definitely write papers, and then be evaluated based on how many good papers you've written; there isn't something analogous you will definitely be made to do and evaluated on at OP (at least not something clearly publicly visible). But I think there are a lot of pros:

  • The flipside of the social awkwardness and stress that Linch highlighted in one of his questions is that a grantmaking role teaches you how to navigate delicate power dynamics, say no, give tough feedback, and make non-obvious decisions that have tangible consequences on reasonably short timeframes. I think I've developed more social maturity and operational effectiveness than I would have in a research role; this is a pretty important and transferrable skillset.
  • There is more space than there would be in a grad school or AI lab setting to think about weird questions that sit at the intersection of different fields and have no obvious academic home, such as the trajectory of AI development and timelines to very powerful AI. While independent research or other small-scale nonprofit research groups could offer a similar degree of space to think about "weird stuff," OP is unusual in combining that kind of latitude with the ability to direct funding (and thus the ability to help make big material projects happen in the world).
     

Disclaimer: I joined OP two weeks ago in the Program Associate role on the Technical AI Safety team. I'm leaving some comments describing questions I wanted answered when assessing whether I should take the job (which, obviously, I ended up doing).

What does OP’s TAIS funding go to? Don’t professors’ salaries already get paid by their universities? Can (or can't) PhD students in AI get no-strings-attached funding (at least, can PhD students at prestigious universities)?

Professors typically have their own salaries covered, but need to secure funding for each new student they take on, so providing funding to an academic lab allows them to take on more students and grow (it's not always the case that everyone is taking on as many students as they can manage). Additionally, it's often hard for professors to get funding for non-student expenses (compute, engineering help, data labeling contractors, etc) through NSF grants and similar, which are often restricted to students.

Disclaimer: I joined OP two weeks ago in the Program Associate role on the Technical AI Safety team. I'm leaving some comments describing questions I wanted answered when assessing whether I should take the job (which, obviously, I ended up doing).

Is it way easier for researchers to do AI safety research within AI scaling labs (due to: more capable/diverse AI models, easier access to them (i.e. no rate limits/usage caps), better infra for running experiments, maybe some network effects from the other researchers at those labs, not having to deal with all the logistical hassle that comes from being a professor/independent researcher)? 

Does this imply that the research ecosystem OP is funding (which is ~all external to these labs) isn't that important/cutting-edge for AI safety?

I think this is definitely a real dynamic, but a lot of EAs seem to exaggerate it in their minds and inappropriately round the impact of external research down to 0. Here are a few scattered points on this topic:

  • Third party researchers can influence the research that happens at labs through the normal diffusion process by which all research influences all other research. There's definitely some barrier to research insight diffusing from academia to companies (and e.g. it's unfortunately common for an academic project to have no impact on company practice because it just wasn't developed with the right practical constraints in mind), but it still happens all the time (and some types of research, e.g. benchmarks, are especially easy to port over). If third party research can influence lab practice to a substantial degree, then funding third party research just straightforwardly increases the total amount of useful research happening, since labs can't hire everyone who could do useful work.  
  • It will increasingly be possible to do good (non-interpretability) research on large models through APIs provided by labs, and Open Phil could help facilitate that and increase the rate at which it happens. We can also help facilitate greater compute budgets and engineering support.
  • The work of the lab-external safety research community can also impact policy and public opinion; the safety teams at scaling labs are not their only audience. For example, capability evaluations and model organisms work both have the potential to have at least as big an impact on policy as they do on technical safety work happening inside labs.
  • We can fund nonprofits and companies which directly interface with AI companies in a consulting-like manner (e.g. red-teaming consultants); I expect an increasing fraction of our opportunities to look like this.
  • Academics and other external safety researchers we fund now can end up joining scaling labs later (as e.g. Ethan Perez and Collin Burns did), to implement ideas that they developed on the outside; I think this is likely to happen more and more.
  • Some research directions benefit less than others from access to cutting edge models. For example, it seems like there's a lot of interpretability work that can be done on very small models, whereas scalable oversight work seems harder to do without quite smart models.

Thank you for doing this! It's highly helpful and transparent; we need more of this. I have many questions, mostly at a meta level, but the ones about AI safety are the ones I'd most like answered. 

About AI safety : 

  • What kind of impact or successes do you expect from hiring for these three senior roles in AI safety? Can you say a bit more about the expected impact of creating these roles?
  • Do you think that the AI safety field is talent-constrained at the senior level, but has its fair share of junior positions already filled? 

About the ratio of hires between AI safety and biorisks: 

  • Given the high number of positions in biosecurity, should we conclude that the field is more talent-constrained than AI safety, which seems to need a smaller workforce?

More diverse considerations about GCRs

  • Do you intend to dedicate any of these roles to nuclear risks, to help address the relative lack of funding in that field, or is it ranked rather low in your cause prioritization?

About cause-prioritization positions

  • What kinds of projects do you intend to launch? Can you be more specific about the topics that will be researched in this area? Also, what kind of background knowledge is needed for such a job?

Thank you so much for your answers!

On technical AI safety, fundamentally, having more grantmaking and research capacity (junior or senior) will help us make more grants to great projects than we otherwise would have been able to; I wrote about that team's hiring needs in this separate post. In terms of AI safety more broadly (outside of just my team), I'd say there is a more severe constraint on people who can mentor junior researchers, but the field could use more strong researchers at all levels of seniority.

Hi Vaipan, I’ll take your question about the ratio of hires between AI safety and biosecurity. In short, no, it wouldn’t be correct to conclude that biosecurity is more talent constrained than AI safety. The number of roles is rather a reflection of our teams’ respective needs at the given moment.


And on the “more diverse considerations about GCRs” question, note that my team is advertising for a contractor who will look into risks that lie outside biosecurity and AI safety, including nuclear weapons risks. Note, though, that I expect AI safety and biosecurity to remain more highly prioritized going forward.

Hi Vaipan,

Thanks for your questions. I’ll address the last one, on behalf of the cause prio team.

One of the exciting things about this team is that, because it launched so recently, there’s a lot of room to try new things as we explore different ways to be useful. To name a few examples:

  • We’re working on a constellation of projects that will help us compare our grantmaking focused on risks from advanced AI systems to our grantmaking focused on improving biosecurity and pandemic preparedness.
  • We’re producing a slew of new BOTECs across different focus areas. If it goes well, this exercise will help us be more quantitative when evaluating and comparing future grantmaking opportunities.
  • As you can imagine, the result of a given BOTEC depends heavily on the worldview assumptions you plug in. There isn’t an Open Phil house view on key issues like AI timelines or p(doom). One thing the cause prio team might do is periodically survey senior GCR leaders on important questions so we better understand the distribution of answers. (A toy illustration of how much the assumptions matter follows this list.)
  • We’re also doing a bunch of work that is aimed at increasing strategic clarity. For instance, we’re thinking a lot about next-generation AI models: how to forecast their capabilities, what dangers those capabilities might imply, how to communicate those dangers to labs and policymakers, and ultimately how to design evals to assess risk levels.
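
To make the worldview-dependence concrete, here is a minimal toy sketch (all numbers are hypothetical placeholders, not Open Phil estimates) of how the same back-of-the-envelope structure yields very different answers depending on the assumptions you plug in:

```python
# Toy BOTEC: expected absolute x-risk reduction per $1M granted, under two
# illustrative worldviews. All parameters are made up for illustration only.

def botec(p_doom, relative_risk_reduction_per_billion, grant_millions=1.0):
    """Expected absolute risk reduction from a grant of `grant_millions` $M.

    p_doom: assumed probability of catastrophe on the default trajectory.
    relative_risk_reduction_per_billion: fraction of that risk removed per $1B
        of well-targeted spending (assumed linear, which a real BOTEC wouldn't be).
    """
    spend_billions = grant_millions / 1000.0
    return p_doom * relative_risk_reduction_per_billion * spend_billions

worldviews = {
    "short timelines, high p(doom)": dict(p_doom=0.35, relative_risk_reduction_per_billion=0.02),
    "long timelines, low p(doom)": dict(p_doom=0.02, relative_risk_reduction_per_billion=0.005),
}

for name, params in worldviews.items():
    print(f"{name}: ~{botec(**params):.1e} expected x-risk reduction per $1M")
# The two worldviews differ by a factor of ~70, which is why understanding the
# distribution of key assumptions matters when comparing grantmaking portfolios.
```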

There is no particular background knowledge that is required for a role on our team. For context, a majority of current team members were working on global health and wellbeing issues less than a year ago. For this hiring round, applicants who understand the GCR ecosystem and have at least a superficial understanding of frontier AI models will in general do better than applicants who lack that understanding. But I encourage everyone who is interested to apply.


 

It seems likely that these roles will be extremely competitive to hire for. Most applicants will have similar values (i.e., EA-ish). Considering the size of the pool, it seems likely that the top applicants will be similar in terms of quality. Therefore, why do you think there's a case that someone taking one of these roles will have high counterfactual impact?

Empirically, in hiring rounds I've previously been involved in for my team at Open Phil, it has often seemed to be the case that if the top 1-3 candidates just vanished, we wouldn't make a hire. I've also observed hiring rounds that concluded with zero hires. So, basically I dispute the premise that the top applicants will be similar in terms of quality (as judged by OP).

I'm sympathetic to the take "that seems pretty weird." It might be that Open Phil is making a mistake here, e.g. by having too high a bar. My unconfident best-guess would be that our bar has been somewhat too high in the past, though this is speaking just for myself. I think when you have a lot of strategic uncertainty, as GCR teams often do, that pushes towards a higher hiring bar as you need people who have a wide variety of skills.

I'd probably also gently push back against the notion that our hiring pool is extremely deep, though that's obviously relative. I think, e.g., our TAIS roles will likely get many fewer applicants than similar safety research roles at labs, for a mix of reasons, including salience to relevant people and the fact that OP isn't competitive with labs on salary.

(As of right now, TAIS has only gotten 53 applicants across all its roles since the ad went up, vs. governance which has gotten ~2x as many — though a lot of people tend to apply right around the deadline.)

Thank you - this is a very useful answer

Echoing Eli: I've run ~4 hiring rounds at Open Phil in the past, and in each case I think if the top few applicants disappeared, we probably just wouldn't have made a hire, or made significantly fewer hires.

I'm not from Open Philanthropy, but it's likely people worry too much about this.

Recognizing the benefits of diverse perspectives, I'm curious about the organization's approach to age diversity in hiring. Are there insights on the value of experiences that candidates from different age brackets bring to the table? Additionally, is there a defined age cutoff for roles at Open Philanthropy?

There is certainly no defined age cutoff, and we are usually extra excited when we can hire candidates who bring many years of career experience to the table in addition to other qualifications!

Can you write about cross-pollination between technical safety and AI governance and policy? In the case of the new governance mechanisms role (zeroing in on proof of learning and other monitoring schemes), it seems like bridging or straddling the two teams is important. 

Indeed. There aren't hard boundaries between the various OP teams that work on AI, and people whose reporting line is on one team often do projects for or with a different team, or in another team's "jurisdiction." We just try to communicate about it a lot, and our team leads aren't very possessive about their territory — we just want to get the best stuff done!

I'll just add that in a lot of cases, I fund technical research that I think is likely to help with policy goals (for example, work in the space of model organisms of misalignment can feed into policy goals).

If you were applying to Open Philanthropy as a candidate, what parts of the hiring process would cause you the most frustration/annoyance (alternatively: what would be the pain points)? What parts of the hiring process would leave you thinking "Open Philanthropy does that really well" afterwards?

We gather data on these topics through post-round candidate surveys, so I can share what candidates actually say rather than speculate! 

On the frustration / annoyance side:

  • The most common thing that comes up is that we don’t share personalized feedback with unsuccessful candidates except for those who make it to the final stage of the process. This is a tricky problem that we’ve discussed a lot, as it’s not feasible for us to do this given the large number of candidates who apply. We’re trying to provide more generalized feedback on our work tests for certain roles as one possible improvement.
  • Another thing that comes up is that our processes can be fairly long (often 2-3 months from when a candidate applies). We try to speed this up where possible but have to trade that off against having a large number of data points from work trials and interviews, which we also see as valuable.

On the ‘OP does that really well’ side:

  • Candidates often praise the transparency of our process and what we’re looking for at each stage.
  • Candidates usually enjoy their interactions with OP staff - we do our best to make most conversations a two-way street, and are less intimidating than I think some people imagine!

The number of applications will affect the counterfactual value of applying. Now, sharing your expected number might lower the number of people who apply, but I would still appreciate having a range of expected applicants for the AI Safety roles. 

What is the expected number of people applying for the AI Safety roles? 

It's hard to project forward of course, but currently there are ~50 applicants to the TAIS team and ~100 to the AI governance team (although I think a number of people are likely to apply close to the deadline).

Disclaimer: I joined OP two weeks ago in the Program Associate role on the Technical AI Safety team. I'm leaving some comments describing questions I wanted answered when assessing whether I should take the job (which, obviously, I ended up doing).

How inclined are you (or would the OP grantmaking strategy be) toward technical research with theories of impact other than "researcher discovers technique that makes the AI internally pursue human values" -> "labs adopt this technique"? Some examples of other theories of change that technical research might have:

  • Providing evidence for the dangerous capabilities of current/future models (should such capabilities emerge) that can more accurately inform countermeasures/policy/scaling decisions.
  • Detecting/demonstrating emergent misalignment from normal training procedures. This evidence would also serve to more accurately inform countermeasures/policy/scaling decisions.
  • Reducing the ease of malicious misuse of AIs by humans.
  • Limiting the reach/capability of models instead of ensuring their alignment.

I'm very interested in these paths. In fact, I currently think that well over half the value created by the projects we have funded or will fund in 2023 will go through "providing evidence for dangerous capabilities" and "demonstrating emergent misalignment;" I wouldn't be surprised if that continues being the case.

Disclaimer: I joined OP two weeks ago in the Program Associate role on the Technical AI Safety team. I'm leaving some comments describing questions I wanted answered when assessing whether I should take the job (which, obviously, I ended up doing).

How much do the roles on the TAIS team involve engagement with technical topics? How do the depth and breadth of “keeping up with” AI safety research compare to being an AI safety researcher?

The way I approach the role, it involves thinking deeply about what technical research we want to see in the world and why, and trying to articulate that to potential grantees (in one-on-one conversations, posts like this one, RFPs, talks at conferences, etc) so that they can form a fine-grained understanding of how we're thinking about the core problems and where their research interests overlap with Open Phil's philanthropic goals in the space. To do this well, it's really valuable to have a good grip on the existing work in the relevant area(s).

Questions for the 7.2 Chief of Staff role

Hi :) I'd (also also also) love to think about this role more proactively - here are some questions I had in mind that would really provide helpful context! 

Bolded are the 3 most pressing questions I have:

  1. If an individual hasn’t formally held a Chief of Staff role before, what other skills or experiences do you see as qualifying them for this position?
  2. What are the current challenges in the projects Claire is handling that a Chief of Staff would take ownership of?
  3. What signals do you look for in a candidate to determine their conscientiousness?
  4. What are some underrated skills that would make for an ideal Chief of Staff (specifically for this team, and for OP)?
  5. What tools are currently being utilized to support this role? Is there a particular suite of technology being considered to lead the work more effectively? 

I promise this is the last one! Thank you in advance for your thoughts + time :)

  1. Check out "what kinds of qualities are you looking for in a hire" here. My sense is we index less on previous experience than many other organizations do (though it's still important). Experience juggling many tasks, prioritizing, and syncing up with stakeholders jumps to mind. I have a hypothesis that consulting experience would be helpful for this role, but that's a bit conjectural.
  2. This is a bit TBD — happy to chat more further down the pipeline with any interested candidates.
  3. We look for this in work tests and in previous experience.

Thanks for the questions! Given the quantity of questions you've shared across the different roles, I think our teams might struggle to get to all of them in satisfactory detail, especially since we're past the initial answering window. Would you be able to highlight your highest priority question(s) under each top-level comment, as we'd like to make sure that we're addressing the ones that are most important for your decision-making?

Hi! Yes I've gone ahead and selected the priority questions by bolding them - thank you for your help :)

Questions for the 5.3 Executive Assistant, Technical AI Safety role

Hi :) I'd (also also) love to think about this role more proactively - here are some questions I had in mind that would really provide helpful context! 

Bolded are the 2 most pressing questions I have:

  1. In what ways are you looking for candidates to showcase their ability as a generalist, demonstrating a capacity to excel in a variety of tasks and roles (apart from previous experience)?
  2. What are the intangible qualities in a team member that, while not easily quantified, significantly contribute to their effectiveness?
  3. What systems does Ajeya currently utilize to manage time, tasks, and energy? What are the primary challenges or roadblocks she's encountering, and where are the major inefficiencies?
  4. Could you identify a specific issue Ajeya is facing with her organizational systems, particularly any low-hanging fruit that could be addressed for quick improvements?
  5. What level of knowledge in AI safety is preferred? Would a candidate with a foundational understanding from AIGSF, but without a deep technical research background, still meet the requirements?
Ajeya

Thanks Mishaal!

  1. I think previous experience taking on operationally challenging projects is definitely the most important thing here, though it may not necessarily be traditional job experience (running a student group or local group can also provide good experience here). Beyond that, demonstrating pragmatism and worldliness in interviews (for example, when discussing real or hypothetical operational or time management challenges) is useful.
  2. I think an important quality in a role like this is steadiness — not getting easily overwhelmed by juggling a lot of competing tasks, having the ability to get the easy stuff done quickly and make smart calls about prioritizing between the harder more nebulous tasks. And across all our roles, being comfortable with upward feedback and disagreement is key.

Questions for the 7.1 Program Operations Associate/Lead role

Hi :) I'd (also) love to think about this role more proactively - here are some questions I had in mind that would really provide helpful context! 

Bolded are the 3 most pressing questions I have:

  1. What frameworks are currently in place for maintaining, evaluating, and enhancing existing programs?
  2. What qualities or experiences set apart a great candidate from a good one, especially if someone hasn't had a synonymous role before? What attributes enable individuals to excel beyond the norm?
  3. How can a candidate effectively demonstrate a mindset geared towards optimization and continuous improvement?
  4. Can you describe the existing organizational systems for resource management? Are these systems at a stage where iterative improvements are preferred, or is there a need for a fresh approach?
  5. Regarding administrative and logistical tasks, what varieties of events are expected to be managed?

  1. The CB team continuously evaluates the track record of grants we've made when they're up for renewal, and this feeds into our sense of how good programs are overall. We also spend a lot of time keeping up with what's happening in CB and in x-risk generally, and this feeds into our picture of how well CB projects are working.
  2. Check out "what kinds of qualities are you looking for in a hire" here.
  3. Same answer as 2.

Questions for the 7.5 Program Associate / Senior Program Associate, University Groups role

Hi :) I'd love to think about this role more proactively - here are some questions I had in mind that would really provide helpful context!

Bolded are the 3 most pressing questions I have:

  1. What soft skills make a candidate stand out for this position? What are the intangible signals that make you think there is a good fit?
  2. Can you describe the existing systems utilized for the university groups? What components are currently effective, what areas require improvement, and what new elements need to be introduced?
  3. What is your current overarching strategy around community building? 
  4. What is the extent of the university networks you are currently engaging with? Is global outreach a goal, and how much emphasis is placed on regional expansion?
  5. What grant management systems are currently in operation for this initiative? Is there a plan to overhaul these systems or are you looking to adapt functionalities from other areas?
  6. What relationship management systems are being employed? Have these systems been thoroughly developed or is there a need for further development?
  7. Could you provide more details regarding the retreats, including the key components and elements involved in event planning? What systems are currently in place for this, and what areas are you aiming to improve?
  8. Is the mentorship program aimed at connecting individuals with more experienced organizers, or is the goal to broaden their network within the field?
  9. What processes are in place for outreach when identifying new founders?
  10. Have you created templates or models focusing on specific types of university groups that are replicable / easily implemented?
  11. What specific skills would significantly enhance a candidate's suitability for this role?
  12. What types of experiences do you believe would make a candidate seem like a very strong fit?
  13. Apart from retreats, what are other events/community-building initiatives you have in place/want to implement?

  1. Similar to that of our other roles, plus experience running a university group as an obvious one — I also think that extroversion and proactive communication are somewhat more important for these roles than for others.
  2. Going to punt on this one as I'm not quite sure what is meant by "systems."
  3. This is too big to summarize here, unfortunately.
[anonymous]

I'm interested in the biosecurity and pandemic preparedness job roles. What kind of experience is Open Philanthropy looking for in entry-level/associate-level applicants? I am currently a student going into the last semester of my epidemiology MPH and am looking at next steps. 

Thank you!

As highlighted in the job descriptions, the answer for what we’re looking for in both skills and experiences varies from role to role (i.e., operations ≠ grantmaking ≠ research). When we evaluate candidates, though, we usually are less asking “What degree do they have? How many years have they worked in this field?” and more asking “What have they accomplished so far (relative to their career stage)? Do they have the skills we’re looking for? Will they be a good fit with our team and help meet an ongoing need? Do we have reason to expect them to embody Open Philanthropy’s operating values from day one?”
