I recently posted that I'd like AI researchers to reach a consensus (>=70% agreement) on this question: What properties would a hypothetical AI system need to demonstrate for you to agree that we should completely halt AI development?

So, in the spirit of proactivity, I've created a short Google Form to collect researchers' opinions: https://docs.google.com/forms/d/e/1FAIpQLScD2NbeWT7uF70irTagPsTEzYx7q5yCOy7Qtb0RcgNjX7JZng/viewform

I'd welcome feedback on how to make the form even better, and I'd also appreciate it if you'd forward it to an x-risk-skeptical AI researcher in your network. Thanks!

Comments (3)
Hmm, I'm imagining that someone who hasn't been exposed to AI-risk arguments could be fairly confused by this survey. You don't actually explain why such a proposal is being considered. I would advise adding a little context about what x-risk concerns are, and then maybe giving respondents a chance to say whether they agree or disagree with those concerns.

I am concerned that only people who are already very familiar with AI risk will answer the survey, biasing the results. This can be ameliorated with questions at the end about whether respondents were familiar with AI x-risk arguments, and whether they agreed with them, prior to the survey. You want to make sure that someone who thinks x-risk is completely ridiculous will still complete the survey and find the questions reasonable.

It's a good idea, though; keep it up.

I did write the survey assuming AI researchers have at least been exposed to these ideas, even if they were completely unconvinced by them, as that matches my personal experience with AI researchers who don't care about alignment. But if my experience doesn't generalize, I agree that more explanation is necessary.

I'd consider adding a few multiple-choice "demographic" questions, such as whether the respondent identifies alignment/safety as a significant focus of their work, the respondent's length of experience in an ML/AI role, etc. I'm not sure which questions would be most valuable, but having some would let you break the results down by subgroup.
