TL;DR: we’re conducting a survey about attitudes towards AI risk, aimed at people in the EA and rationality communities. We're interested in responses whether or not you're involved in AI safety. Google Forms link here.
Esben Kran and I previously published an earlier version of this survey, which we've since revised based on feedback. If you’ve completed a previous version of this survey, you don’t need to fill it out again. See also Esben's question on LessWrong.
- Recently, there has been some discussion about how to present arguments about AI safety and existential risks more generally
- One disagreement is about how receptive people are to existing arguments, which may depend on personal knowledge and background, how the arguments are presented, etc.
- We hope to take a first step towards a more empirical approach by gathering information about existing opinions and using it to inform outreach
- We expect the survey to take 5–10 minutes to complete, and hope to receive around 100 responses in total
- Link to the survey
- We're hoping to receive responses whether or not you're interested in AI safety
- Through the survey, we hope to:
- Get a better understanding of how personal background and particular arguments contribute to perceptions of AI safety as a field, and use this as a rough guide for AI safety outreach
- Test the feasibility of similar projects
- We intend to publish the results and our analysis on LessWrong and the EA Forum
- Note that this is still quite experimental; we welcome questions and feedback!
- While we have done some user testing of the survey, we fully expect that some questions are ambiguous or that we've missed things
- If you’ve already filled out the survey or given us feedback, thank you!