I am currently in the process of applying to graduate programmes at some US universities, e.g. Stanford. My main motivation is to set myself up for technical alignment work and to have closer access to the EA Bay Area community than I would have here in Europe. Given how rapidly EA has been expanding in many countries this year, though, this seems like a pretty generic motivation: after the extensive outreach for EA in general and alignment in particular, it seems like a reasonable lower bound that dozens, if not hundreds, of students are applying to Stanford grad programmes with precisely this motivation. Obviously I would phrase it in a more refined, specific way, but ultimately it's not as though I have an extensive track record of breakthroughs in technical alignment research yet.

Now I'm wondering: is interest in alignment from an EA angle even worth mentioning in the motivation letter, or has it become such a common thing that mentioning it is useless, or even a negative, for my application?


Agreed. I'm sure many people on this Forum will be a better fit to answer this question than I am, but in general, your best bet is probably to figure out whether the program(s) and advisor(s) you're applying to work with do work in technical alignment. Mention your interest in alignment if they do, and don't if they don't.

For example, at Berkeley, CHAI and Jacob Steinhardt's group do work in technical alignment. At Cambridge, David Krueger's lab. I believe there's a handful of others.

or has it become such a common thing that mentioning it is useless, or even a negative, for my application?

(Low confidence) I would not guess "basic" would be the main issue with mentioning alignment. Bigger problems may include:

  • many ML academics are probably skeptical that useful work can be done in alignment, and/or find x-risky arguments kooky.
  • I expect there's a negative correlation between interest in technical alignment and ML ability, conditional on applying to ML grad school.

Anecdote: My grad school personal statement mentioned "Concrete Problems in AI Safety" and Superintelligence, though at a fairly vague level about the risks of distributional shift or the like. I got into some pretty respectable programs. I wouldn't take this as strong evidence, of course.

This might be helpful context: https://www.lesswrong.com/posts/SqjQFhn5KTarfW8v7/lessons-learned-from-talking-to-greater-than-100-academics

In general, my sense is that most potential advisors would be put off by discussion of AGI or x-risk. Talking about safety in more mainstream terms might get a better reception; for example, Unsolved Problems in ML Safety and X-Risk Analysis for AI Research by Hendrycks et al. both present AI safety concerns in a way that might have broader appeal. Another approach would be to present specific technical challenges you want to work on, such as ELK, interpretability, or OOD robustness, which can interest people on technical grounds even if they don't share your motivations.

I don't mean to totally discourage x-risk discussion; I'm actually writing a thesis proposal right now and trying to figure out how to productively mention my real motivations. I think it's tough, but hopefully you can find a way to make it work.
