
TL;DR: we’re conducting a survey about attitudes towards AI risks, targeted at people in the EA and rationality communities. We're interested in responses whether or not you're involved in AI safety. Google Forms link here.

Esben Kran and I previously published an older version of this survey, which we've since edited based on feedback. If you’ve completed a previous version, you don’t need to fill it out again. See also Esben's question on LessWrong.

Motivation

  • Recently, there has been some discussion about how to present arguments about AI safety and existential risks more generally
  • One disagreement is about how receptive people are to existing arguments, which may depend on personal knowledge/background, how the arguments are presented, etc.
  • We hope to take a first step towards a more empirical approach by gathering information about existing opinions and using it to inform outreach
    • While other surveys exist, our survey focuses more on perceptions within the EA and rationality communities (not just on researchers), and on AI risk arguments in particular
    • We also think of this as a cheap test for similar projects in the future

The Survey

  • We expect the survey to take 5-10 minutes to complete, and hope to receive around 100 responses in total
  • Link to the survey
  • We're hoping to receive responses whether or not you're interested in AI safety

Expected Output

  • Through the survey, we hope to:
    • Get a better understanding of how personal background and particular arguments contribute to perception of AI safety as a field, and to use this as a rough guide for AI safety outreach
    • Test the feasibility of similar projects
  • We intend to publish the results and our analysis on LessWrong and the EA Forum
  • Note that this is still quite experimental; we welcome questions and feedback!
    • While we have done some user tests for the survey, we fully expect there to be things that we missed or are ambiguous
    • If you’ve already filled out the survey or given us feedback, thank you!

Comments

When coming up with a similar project,* I thought the first step should be to conduct exploratory interviews with EAs that would reveal their hypotheses about the psychological factors that may go into one's decision to take AI safety seriously. My guess would be that ideological orientation would explain the most variance.

*which I most likely won't realize (98%)
Edit: My project has been accepted for the CHERI summer research program, so I'll keep you posted!

That's a very interesting project, and I'd be curious to see the finished product. This has become a frequently discussed aspect of AI safety. One member of my panel is a strong advocate of the importance of AI risk issues, while another is quite skeptical and reacts quite negatively to any discussion that approaches the A*I word ("quite" may be a weak way of putting it).

But concerning policy communication, I think those are important issues to understand and pinpoint. The variance is certainly strange. 

Side note: as a first-time poster, looking at your project I realized I failed to include a TL;DR and a summary of the expected output in mine. I'll try to edit it in, or do so in my next post, I suppose.
