We have contact details for, and can send emails to, 1,500 current and former students who've received hardcover copies of HPMOR (and possibly Human Compatible and/or The Precipice) for winning international or Russian olympiads in maths, computer science, physics, biology, or chemistry.
This includes over 60 IMO and IOI medalists.
This is a pool of potentially extremely talented people, many of whom have read HPMOR.
I don't have the time to do anything with these contacts, and people in the Russian-speaking EA community are all busy with other things.
The only thing that's ever happened was an email about the Atlas Fellowship sent to some of the kids still in high school; a couple of them became fellows.
I think it could be very valuable to alignment-pill these people. For most researchers who understand AI x-risk well and speak Russian, even manually going through the IMO and IOI medalists, sending the more promising ones a tl;dr of x-risk, and offering to schedule a call would be a marginally better use of their time than most of the technical alignment research they could otherwise be doing, because it plausibly adds highly capable researchers to the field.
If you understand AI x-risk, are smart, have good epistemics, speak Russian, and want to pick up that ball, please DM me on LW or elsewhere.
To everyone else, feel free to make suggestions.
Anecdotally, approximately everyone of Russian origin who's now working on AI safety got into it because of HPMOR. Just a couple of days ago, an IOI gold medalist reached out to me; they've been working through ARENA.
HPMOR tends to make people with that kind of background act more on trying to save the world. It also gives an intuitive sense of some related concepts (up to "oh, like the mirror from HPMOR?"), but this is a lot less central than giving people ~EA values and getting them to actually do stuff.
(Plus, at this point, the book is well-known enough in some circles that some fraction of future Russian ML researchers will be a lot easier to alignment-pill and to persuade not to work on something that might kill everyone, or that might prompt other countries to build something that kills everyone.
Like, the largest Russian broker decided to celebrate the New Year by advertising HPMOR and citing Yudkowsky.)
I'm not sure how universal this is (the kind of Russian kid who's into math or computer science is often the kind of kid who'd be into the HPMOR aesthetic), but it seems to work.
I think many past IMO/IOI medalists are very capable and could help. It's worth going through the list of them, reaching out to those who've read HPMOR (and possibly The Precipice or Human Compatible), and getting them to work on AI safety.