I am currently applying to graduate programmes at some US universities, e.g. Stanford. My main motivation is to set myself up for technical alignment work and to have closer access to the EA Bay Area community than I'd have here in Europe. But especially in light of EA's rapid expansion across many countries this year, this seems like a pretty generic motivation: after extensive outreach on EA in general and alignment in particular, it seems a reasonable lower bound that there are dozens, if not hundreds, of students applying to Stanford grad programmes with precisely this motivation. Obviously I am writing this in a more refined, specific way, but ultimately it's not as if I have an extensive track record of breakthroughs in technical alignment research yet.
Now I'm wondering: is interest in alignment from an EA angle even worth mentioning in the motivation letter, or has it become such a common thing that mentioning it is useless, or even a negative, for my application?
This might be helpful context: https://www.lesswrong.com/posts/SqjQFhn5KTarfW8v7/lessons-learned-from-talking-to-greater-than-100-academics
In general, my sense is that most potential advisors would be put off by discussion of AGI or x-risk. Talking about safety in more mainstream terms might get a better reception; for example, "Unsolved Problems in ML Safety" and "X-Risk Analysis for AI Research" by Hendrycks et al. both present AI safety concerns in a way that might have broader appeal. Another approach is to present specific technical challenges you want to work on, such as ELK, interpretability, or OOD robustness, which can interest people on technical grounds even if they don't share your motivations.
I don't mean to totally discourage x-risk discussion; I'm actually writing a thesis proposal right now and trying to figure out how to productively mention my real motivations. I think it's tough, but hopefully you can find a way to make it work.