I often hear people talk of an implicit ranking of the best places to do a PhD focused on AI Safety. Can anyone enumerate this for me?

Something like

  1. Berkeley
  2. Stanford
  3. MIT ….

I think the better question might be, "who are some of the best professors/academic research groups in AI Safety to work with?"

Two meta-points I feel might be important —

  • For PhDs, the term "best university" doesn't mean much (there are some cases in which infrastructure makes a difference, but R1 schools, private or public, generally seem to have good research infrastructure). Your output as a graduate student heavily depends on which research group/PI you work with.
  • Specifically for AI safety, the sample size of academics is really low, so I don't think we can rank them from best to eh. Ranking is made even harder because their research foci differ, so a one-to-one comparison would be unsound.

With that out of the way, three research groups in academia come to mind:

Others:

  • Center for Human Inspired AI (CHIA) is a new research center at Cambridge; I don't know if their research would focus on subdomains of Safety; someone could look into this more.
  • I remember meeting two lovely folks from Oregon State University working on Safety at EAGx Berkeley. I cannot find their research group, and I forget what exactly they were working on; again, someone who knows more about this could comment perhaps.
  • An interesting route for a Safety-focused PhD could be finding a really good professor at a university who agrees to bring on an outside researcher as a co-advisor. I am guessing that more and more academics will want to start working on the Safety problem, so such collaborations would be pretty welcome, especially if the academics are also new to the domain.
    • One thing to watch out for: which research groups get funded by this NSF proposal. There will soon be new research groups that PhD students interested in the Safety problem could gravitate towards!

Also Jacob Steinhardt, Vincent Conitzer, David Duvenaud, Roger Grosse, and in my field (causal inference), Victor Veitch.

Going beyond AI safety, you can get a general sense of strength from CSRankings (ML) and ShanghaiRankings (CS).

There's also CASMI at Northwestern, which I think not a lot of people know about. I find their research and agendas to be very aligned.

Link - https://casmi.northwestern.edu/

Nice, I didn't know! Their research goals seem quite broad, which is good. Within the context of AI existential risk, this project looks interesting.
