What does the public think about AGI? About AI alignment? Not much, probably. Or perhaps they have heard a speech by Elon Musk and concluded that AGI = God, or something similar. People are quite uninformed about AGI, even if they have some understanding of AI's potential for automation or advances in robotics. This is likely just rational ignorance: given the myriad ways our attention is directed and diverted today, it's unrealistic in my view to ask every citizen to become well informed about each potential x-risk.

Given the vast potential consequences of AGI, however, it would be wrong to assume that the public is uninterested in the outcomes. Furthermore, prominent researchers seem to want to engage more with the public on the topic; this is part of the stated mission of organizations such as the Partnership on AI, OpenAI and the Future of Humanity Institute. A poll of researchers at the Human-Level AI Conference found that they were interested in asking the public a range of questions, from "What responsibilities should we never transfer to machines?" to "What problem should humanity solve first using AI?", but that there was no reputable poll to reference. GoodAI survey: https://medium.com/goodai-news/shaping-a-global-survey-on-agi-562ee7baa983.

Public polling has been done by the Center for the Governance of AI on AI in general, rather than general AI. Link here: https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/executive-summary.html. The findings indicate a few interesting phenomena. First, about 38-40% of those polled, even after being given some information about AI and its implications, either didn't know what their opinion on the topic was, or neither supported nor opposed AI development. Second, there were large demographic discrepancies, especially along the dimensions of gender, education and income. Finally, of the 15 global risks participants were asked about, AI was rated lowest in both perceived likelihood and perceived impact.

I see potential for improvement on the FHI polling by using the Deliberative Polling methodology developed at Stanford's Center for Deliberative Democracy. More info at: https://cdd.stanford.edu/. Deliberative Polling selects a random, representative sample from the public (important for capturing the clear differences in opinion across demographics) and gives them a questionnaire about the topic. A subset of the respondents is then chosen to participate in deliberation sessions and briefed with materials on the topic. They engage in small-group discussions with trained experts, where participants can pose a set of questions to the experts. The session culminates in a second questionnaire to determine how opinion has changed (and historically, this change in opinion has been vast). Finally, the results of the polls are released to media outlets.
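To make the pre/post comparison at the heart of this methodology concrete, here is a minimal sketch of how the headline result of a Deliberative Poll — the shift in mean response between the initial and final questionnaires — could be computed. The item names, Likert scale, and data below are all illustrative assumptions, not taken from any actual CDD poll.

```python
# Hypothetical sketch: participants answer the same 1-5 Likert-scale items
# before and after deliberation; the per-item shift in mean response is the
# headline result. All item names and responses here are invented.
from statistics import mean

def opinion_shift(pre, post):
    """Mean change per item between pre- and post-deliberation responses.

    pre, post: dicts mapping item -> list of 1-5 Likert responses
    (same participants, same order in both dicts).
    """
    return {item: mean(post[item]) - mean(pre[item]) for item in pre}

# Illustrative data: 5 participants, two items from a hypothetical AGI questionnaire.
pre = {
    "support_agi_research": [3, 2, 4, 3, 2],
    "trust_in_oversight":   [2, 2, 3, 1, 2],
}
post = {
    "support_agi_research": [4, 3, 4, 4, 3],
    "trust_in_oversight":   [3, 3, 4, 2, 3],
}

shifts = opinion_shift(pre, post)
print(shifts)  # positive values mean opinion moved toward agreement
```

In a real poll the interesting analysis would go further — breaking shifts down by the same demographic dimensions (gender, education, income) where the GovAI report found discrepancies — but the pre/post delta is the core measurement.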

Deliberative Polling answers the question "How would the public deal with an issue if they were well educated about it?". I think this is key for AGI considerations, since giving participants only a brief introduction to the topic doesn't allow them to think through the full range of possible outcomes for themselves and others. Results from Deliberative Polling sessions could also guide further polling and research, and spark public debate through publication of the final questionnaire's results. Most importantly, perhaps, the experts designing the values of AI systems could learn where 'regular people' stand on those values once they understand the situation we are faced with. Win-win.


I'd never heard of this center and find this work really interesting! Do you think deliberative polling in the context of values could be a way of getting some idea of where coherent extrapolated volition would go?
