AI safety scholarships look worth-funding (if other funding is sane)

by anon-a · 1 min read · 19th Nov 2019 · 6 comments

21

Frontpage

Funders tend to think that specific subsets of early-career AI safety researchers are worth funding:

The pool of AI safety-oriented PhD students across the world is a stronger cohort in total than any of these particular groups (because it includes them), and not much weaker on average. So on the face of it, if those targets are worth funding, then so too should more-general AI safety research scholarships.

People also tend to think broad swathes of early-career x-risk researchers are worth funding:

If AI safety is about as important as these other areas, and a comparable amount of talent and supervision is available, then AI safety PhD scholarships should be similarly worth supporting.

Indeed, there are as many or more AI safety students entering good programs, and there are supervisors with some interest in safety, such as Marcus Hutter, Roger Grosse, and David Duvenaud.

On the face of it, students able to bring funding would be best-equipped to negotiate the best possible supervision from the best possible school with the greatest possible research freedom.

The strongest apparent arguments against are:

  • That PhD scholarships are expensive. Indeed, maybe this isn't the most effective conceivable funding object, but it seems about as effective as other recent projects.
  • Concern about adverse selection of unfunded students. But this could be mitigated by making funding conditional on entering a top-10 university, which would draw a pool of students stronger than the average within the field.
  • Concerns about the politics of the AI safety and AI field. But this could be mitigated by picking students who are supervised to work on topics that connect to the mainstream.
  • Concerns about the amount of time spent by evaluators. But the time evaluators would spend reading ~100 applications (in a year) is small compared with the research that the ~3 extra PhD students selected might do over five years.
  • Concerns that the volume of excellent students is too low, especially after applying some of the filters above. But more students from top schools are moving into AI safety than into (longtermist) econ/philosophy or GCBRs; perhaps about as many as the others put together. Even if half of these students are already funded with grants at the best possible school, with the best possible supervisor, and with adequate freedom, some will not be, so there should be room for multiple scholarships per year. Even if the number were lower, offering some grants would encourage the brightest up-and-comers.
  • Preference for funding supervisors directly, and having them choose the best PhD students. But interested PhD students are one of the best ways to get a professor interested in a new topic.

This seems like a strong case. Is something being missed?


Comments
  • I don't think it's reasonable to think of the FHI DPhil scholarships, and even less so RSP, as mainly funding programs (maybe ~15% of the impact comes from the funding).
  • If I understand the funding landscape correctly, both EA Funds and the LTFF are potentially able to fund a single-digit number of PhDs. Has anyone actually approached these funders with a request like "I want to work on safety with Marcus Hutter, and the only thing preventing me is funding"? Maybe I'm too optimistic, but I would expect such requests to have a decent chance of success.
Students able to bring funding would be best-equipped to negotiate the best possible supervision from the best possible school with the greatest possible research freedom.

This seems like the key premise, but I'm pretty uncertain about how much freedom this sort of scholarship would actually buy, especially in the US (people who've done PhDs in ML, please comment!). My understanding is that it's rare for good candidates not to get funding, and also that, even with funding, it's usually important to work on something your supervisor is excited about in order to get more support.

In most of the examples you give (with the possible exceptions of the FHI and GPI scholarships) buying research freedom for PhD students doesn't seem to be the main benefit. In particular:

OpenPhil has its fellowship for AI researchers who happen to be highly prestigious

This might be mostly trying to buy prestige for safety.

and has funded a couple of masters students on a one-off basis.
FHI has its... RSP, which funds early-career EAs with light supervision.
Paul even made grants to independent researchers for a while.

All of these groups are less likely to have other sources of funding compared with PhD students.

Having said all that, it does seem plausible that giving money to safety PhDs is very valuable, in particular via the mechanism of freeing up more of their time (e.g. if they can then afford shorter commutes, outsourcing of time-consuming tasks, etc).

it's usually important to work on something your supervisor is excited about, in order to get more support.

You would fund students who are picking supervisors interested in safety, like Hutter, Steinhardt, and so on.

All of these groups are less likely to have other sources of funding compared with PhD students.

The proposal would be merely to open up 0-3 scholarships per year. So the question here is not which group is less likely to have other sources of funding, but how effective it is to fund the marginal unfunded person. There are several points in favour of funding EA PhD students over masters students, early-career EAs, and independent researchers: they require less supervision; they output material that is more academically respectable (and publishable); they are more likely to stick with AI safety as a career, ...

Catherine here, I work for Open Phil on the technical AI program area. I’m not going to comment fully on our entire case for the Open Phil AI Fellows program, but I want to just address some things that seem wrong to me here:

“early-career AI safety researchers”

The OpenPhil AI PhD Fellows are mostly not early-career “AI safety” researchers. (see the fellowship description here)

The pool of AI safety-oriented PhD students across the world is a stronger cohort in total than any of these particular groups (because it includes them), and not much weaker on average.

I don’t think this would be true, even if the “it includes them” claim were true. I think you need much more evidence to justify a claim that “a larger set containing X is not much weaker on average than the set X itself”.

there are more students from top schools moving into AI safety than econ, philosophy, and GCBRs

? I think you’re claiming there are more grad-school-bound undergrads-from-top-schools, total, aspiring to be “AI safety researchers” than to be economists? This seems definitely false to me. Am I misunderstanding?

I think you need much more evidence to justify a claim that “a larger set containing X is not much weaker on average than the set X itself”.
  • If OpenPhil's fellows are not expected to do research on AI safety, then apparently the justification for funding them is quite different, so let's put them to one side.
  • The CS DPhil scholars at Oxford seem similar to EA CS PhDs at Toronto, ANU, and other rank 10-30 schools.
  • The RSP students also seem similar, with broader interests but fewer credentials.
  • Paul's grantees seem more aligned, though less qualified and less supervised, and there are only three of them.

Overall, rank-10-30 AI safety PhD students seem comparable to these three latter groups, and clearly not much weaker.

? I think you’re claiming there are more grad-school-bound undergrads-from-top-schools, total, aspiring to be “AI safety researchers” than to be economists? This seems definitely false to me. Am I misunderstanding?

Edited to clarify that this means researchers on longtermist econ issues.

But I am interested to know if this argument is wrong in any other respect!