The Existential Risk Observatory has been interested in public awareness of AI existential risk since its inception over five years ago. We started surveying public awareness in December 2022, in part by asking the following open question:
"Please list three events, in order of probability (from most to least probable), that you believe could potentially cause human extinction within the next 100 years."
If respondents included AI or similar terms in their top-3 extinction risks ("robots" or "computers" count, "technology" doesn't), we counted them as aware; if not, as unaware. The aim of this methodology was to see how many people would spontaneously, without being led by the question, connect the concepts of human extinction and AI. We used Prolific to recruit participants (n=300), including only US residents who were over eighteen years old and fluent in English.
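As an illustration only, here is a minimal Python sketch of a keyword-based coding rule along these lines. The term list and whole-word matching are assumptions made for the example; the actual coding involved human judgment (for instance, "technology" alone did not count):

```python
import re

# Hypothetical term list for illustration; the survey's actual coding
# was a judgment call ("robots" and "computers" counted, "technology" didn't).
AWARE_WORDS = {"ai", "robot", "robots", "computer", "computers"}

def is_aware(top3_answers):
    """Return True if any of a respondent's top-3 extinction risks
    mentions AI or a similar term. Whole-word matching avoids false
    positives such as 'rain' containing 'ai'."""
    text = " ".join(top3_answers).lower()
    words = set(re.findall(r"[a-z]+", text))
    return bool(words & AWARE_WORDS) or "artificial intelligence" in text

# Hypothetical example responses:
print(is_aware(["nuclear war", "climate change", "rogue AI"]))   # True
print(is_aware(["pandemic", "asteroid impact", "technology"]))   # False
```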
Across the four surveys we ran, we obtained 7% (Dec '22), 12% (Apr '23), 15% (Apr '24), and, today, 24%. In a graph, that looks like this.
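As a minimal sketch, the trend can be plotted from the four data points above (matplotlib assumed; dates approximated to the survey month):

```python
import matplotlib.pyplot as plt

# The four awareness measurements reported above.
dates = ["Dec '22", "Apr '23", "Apr '24", "today"]
aware_pct = [7, 12, 15, 24]

plt.plot(dates, aware_pct, marker="o")
plt.ylabel("Respondents naming AI as a top-3 extinction risk (%)")
plt.title("AI existential risk awareness over time")
plt.ylim(0, 30)
plt.tight_layout()
plt.show()
```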
Frankly, I think ours is a rough measurement method. From participants' answers to our open questions, we can see that not every participant takes all our questions seriously, and that some answers are plainly self-inconsistent. Therefore, I don't think the 24% by itself is a very meaningful number. I do, however, think our results say a few things:
- AI existential risk awareness is robustly rising among the general public.
- An appreciable minority of the public is concerned about AI existential risk.
- We're not there yet: majority support for regulation specifically geared towards reducing AI extinction risk (as opposed to, for example, job loss) still seems unlikely.
In both this survey and past studies, we found that those who are aware, and even some of those who are unaware, often support far-reaching regulation such as a government-mandated pause.
If you are interested in the data, do reach out.
