Funders tend to think that specific subsets of early-career AI safety researchers are worth funding:
- OpenPhil has its fellowship for AI researchers who happen to be highly prestigious, and has funded a couple of master's students on a one-off basis.
- FHI has its DPhil scholarships for people who happen to be in Oxford, and its RSP, which funds early-career EAs with light supervision.
- Paul even made grants to independent researchers for a while.
The pool of AI safety-oriented PhD students across the world is a stronger cohort in total than any of these particular groups (because it includes them), and not much weaker on average. So on the face of it, if those groups are worth funding, then more general AI safety research scholarships should be too.
People also tend to think broad swathes of early-career x-risk researchers are worth funding:
- OpenPhil provided early-career funding for work on GCBRs and for AI policy careers.
- GPI has its DPhil scholarships for philosophy and econ students who happen to be in Oxford.
If AI safety is about as important as these other areas, and a comparable amount of talent and supervision is available, then AI safety PhD scholarships should be similarly worth supporting.
Indeed, at least as many AI safety students are entering good programs, and there are supervisors with some interest in safety, such as Marcus Hutter, Roger Grosse, David Duvenaud, and others.
On the face of it, students able to bring funding would be best-equipped to negotiate the best possible supervision from the best possible school with the greatest possible research freedom.
The strongest apparent arguments against are:
- That PhD scholarships are expensive. Indeed, maybe this isn't the most effective conceivable use of funds, but it seems about as effective as other recent projects.
- Concern about adverse selection among unfunded students. But this could be mitigated by making funding conditional on admission to a top-10 university, which would draw a pool of students stronger than the field's average.
- Concerns about the politics of the AI safety and AI field. But this could be mitigated by picking students who are supervised to work on topics that connect to the mainstream.
- Concerns about the amount of time spent by evaluators. But the time evaluators would spend reading ~100 applications (in a year) is small compared with the research that the ~3 PhD students selected might do over five years.
- Concerns that the volume of excellent students is too low, especially after applying some of the filters above. But more students from top schools are moving into AI safety than into (longtermist) econ/philosophy or GCBRs; perhaps about as many as the others put together. Even if half of these students are already funded at the best possible school, with the best possible supervisor, and with adequate freedom, some will not be, so there should be room for multiple scholarships per year. Even if the number were lower than that, offering some grants would encourage the brightest up-and-comers.
- Preference for funding supervisors directly, and having them choose the best PhD students. But interested PhD students are one of the best ways to get a professor interested in a new topic.
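The evaluator-time comparison above can be made concrete with rough arithmetic. The ~100 applications, ~3 scholarships, and five-year PhDs are from the argument above; the half-hour per application and the research hours per student-year are illustrative assumptions, not figures from the post:

```python
# Back-of-envelope: evaluator time spent vs. research hours enabled.
# applications, students, and PhD length come from the text above;
# review time and research hours per year are assumed for illustration.
applications_per_year = 100
review_hours_per_application = 0.5   # assumed: 30 minutes each
evaluator_hours = applications_per_year * review_hours_per_application

students_funded = 3
phd_years = 5
research_hours_per_year = 1500       # assumed full-time research year
research_hours = students_funded * phd_years * research_hours_per_year

print(f"Evaluator time: {evaluator_hours:.0f} hours")
print(f"Research enabled: {research_hours:,} hours")
print(f"Ratio: {research_hours / evaluator_hours:.0f}:1")
```

Under these assumptions the research enabled outweighs evaluation time by several hundred to one, so even large errors in the assumed numbers leave the conclusion intact.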
This seems like a strong case. Is something being missed?