Long TL;DR: You’re an engineer, you want to work on AI Safety, and you’re not sure which org to apply to, so you’re going to apply to all of them. But - oh no - some of these orgs may actively be causing harm, and you don’t want that. What’s your alternative? Study AI Safety for 2 years before you apply? In this post I suggest you can collect the info you want quickly by going over a few specific posts.
Why do I think some orgs might be [not helping] or [actively causing harm]?
Example link. (Help me out in the comments with more?)
1. Open the org’s tag on LessWrong
How: Search for a post related to that org. You’ll see tags at the top of the post; click the tag with the org’s name.
2. Sort by “newest first”
3. Open 2-3 posts
(Don’t read the post yet!)
4. In each post, look at the top 2-3 most upvoted comments
What I expect you’ll find sometimes
A post by the org, with heavily upvoted comments politely trying to say “this is not safe”.
Bonus: Read the comments
Or even crazier: Read the post! [David Johnson thinks this is a must!]
Ok, less jokingly: this seems to me like a friendly way to start seeing the main arguments without having to read too much background material (unless you find, say, a term you don’t know).
Extra crazy optimization: Interview before research
Am I saying this idea for vetting AI Safety orgs is perfect?
No. I am saying it is better than the alternative of “apply to all of them (and do no research)”, assuming you resonate with my premises that “there’s a lot of variance in the effectiveness of orgs” and that “that matters”.
I also hope that by posting my idea, someone will comment with something even better.
However you choose to define “interested”: maybe research only the orgs that didn’t reject your CV, or only the ones that accepted you. Your call.
Consider sharing your thoughts with the org. Just remember, whoever is talking to you was chosen partly for their ability to convince candidates to join. They will, of course, think their org is great. Beware of reasons like “the people saying we are causing harm are wrong, but we never explained publicly why”. The whole point is to let the community help you with this complicated question.