
Here we briefly summarize the results so far from our U.S. nationally representative survey on Artificial Intelligence, Morality, and Sentience (AIMS), conducted in 2021 and 2023. The full reports are available on Sentience Institute’s website for the AIMS 2023 Supplemental Survey, the AIMS 2023 Main Survey, and the AIMS 2021 Main Survey. The raw data are available on Mendeley.

tl;dr: Results show that, from 2021 to 2023, there were increases in expectations of AI harm, moral concern for AIs, and mind perception of AIs. U.S. adults now expect sentient AI to be developed sooner, with a median estimate of only five years, and they strongly support AI regulation and slowdown.


Americans are significantly more concerned about AI in 2023 than they were in 2021 before ChatGPT. Only 23% of U.S. adults trust AI companies to put safety over profits, and 27% trust the creators of an AI to maintain control of current and future versions. This translates to widespread support for slowdowns and regulation, such as 63% support for banning artificial general intelligence that is smarter than humans, according to nationally representative surveys conducted by the nonprofit research organization Sentience Institute.

People expect advanced AI to arrive very soon. The median estimate for when AI will have “general intelligence” is only two years from now, and just five years for human-level AI, sentient AI, and superintelligence.

The prospect of sentient AI is particularly daunting: 20% of people think that some AIs are already sentient; 10% think ChatGPT is sentient; and 69% support a ban on the development of sentient AI. If AIs become sentient, a surprisingly large number of people think we should take at least some steps to protect their welfare: 71% agree that sentient AIs “deserve to be treated with respect,” and 38% favor granting them legal rights.

Based on preregistered predictions for multi-item measures in the survey, we found surprisingly high moral concern for sentient AI and surprisingly strong attribution of minds to them (i.e., “mind perception”). We also found significant increases from 2021 to 2023 in moral concern, mind perception, perceived threat, and support for banning sentience-related AI technologies. Two single-item measures also showed significantly shorter timelines for sentient AI from 2021 to 2023.

This provides landmark public opinion data from before and after 2022, a major year for AI in which figures such as Google engineer Blake Lemoine raised the possibility that current AIs may be sentient, and groundbreaking AI systems such as Stable Diffusion and ChatGPT were launched. Additional results from the most recent 2023 data include:

  • 71% support government regulation that slows AI development.
  • 39% support a “bill of rights” that protects the well-being of sentient robots/AIs.
  • 68% agree that we must not cause unnecessary suffering to large language models (LLMs), such as ChatGPT or Bard, if they develop the capacity to suffer.
  • 20% of people think that some AIs are already sentient; 37% are not sure; and 43% say they are not.
  • 10% of people say ChatGPT is sentient; 37% are not sure; and 53% say it is not.
  • 23% trust AI companies to put safety over profits; 29% are not sure; and 49% do not.
  • 27% trust the creators of an AI to maintain control of current and future versions; 27% are not sure; and 26% do not.
  • 49% of people say the pace of AI development is too fast; 30% say it’s fine; 19% say they’re not sure; and only 2% say it’s too slow.

The data were collected in three nationally representative survey waves. A set of 86 questions was asked of 1,232 U.S. adults from November to December 2021 and of an independent sample of 1,169 from April to June 2023. Another 1,099 respondents were asked 111 related questions in a supplemental survey from June to July 2023. Margins of error were approximately +/- 3%.
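As a sanity check, the quoted margins of error are consistent with the standard worst-case formula for a proportion in a simple random sample (a sketch under that assumption; the survey's actual weighted design could yield slightly different values):

```python
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case margin of error for a proportion at 95% confidence.

    p = 0.5 maximizes p * (1 - p), giving the conservative bound
    typically reported for surveys.
    """
    return z * sqrt(p * (1 - p) / n)

# Sample sizes from the three survey waves
for n in (1232, 1169, 1099):
    print(f"n={n}: +/-{margin_of_error(n):.1%}")
# n=1232: +/-2.8%
# n=1169: +/-2.9%
# n=1099: +/-3.0%
```

All three waves fall at or just under the stated +/- 3%.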


Figure: The proportion of survey responses to questions about the possibility of sentient AI.
Figure: From the 2023 AIMS main survey and supplement, answers to, “If you had to guess, how many years from now do you think that…?” for each type of AI: artificial general intelligence (AGI), superintelligence, human-level artificial intelligence (HLAI), and sentient AI. The weighted medians, excluding respondents who said it will never happen, were two years for AGI and five years for superintelligence, HLAI, and sentient AI.
Figure: Some of the most surprising results from the 2021 AIMS main survey based on the research team’s preregistered 80% credible intervals (i.e., intervals that we think the true value will lie within 80% of the time). Intervals are represented as black boxes, and actual results are blue circles. Note that these three groupings are for illustration and do not all correspond to specific indices.
Figure: Support for and opposition to policies related to sentient AI from the 2023 AIMS main and supplemental surveys.




