
Key insights

  1. The Survey Assessing Risks from AI (SARA), a representative online survey of 1,141 Australians conducted in January-February 2024, investigated public perceptions of AI risks and support for AI governance actions.
  2. Australians are most concerned about AI risks where AI acts unsafely (e.g., acting in conflict with human values, failure of critical infrastructure), is misused (e.g., cyber attacks, biological weapons), or displaces human jobs; they are least concerned about AI-assisted surveillance and about bias and discrimination in AI decision-making.
  3. Australians judge “preventing dangerous and catastrophic outcomes from AI” the #1 priority for the Australian Government in AI; 9 in 10 Australians support creating a new regulatory body for AI.
  4. To meet public expectations, the Australian Government must urgently increase its capacity to govern increasingly capable AI and address diverse risks from AI, including catastrophic risks.

Findings

Australians are concerned about diverse risks from AI

When asked about a diverse set of 14 possible negative outcomes from AI, Australians were most concerned about AI systems acting in ways that are not safe, not trustworthy, and not aligned with human values. Other high-priority risks include AI replacing human jobs, enabling cyber attacks, operating lethal autonomous weapons, and malfunctioning within critical infrastructure.

Australians are ambivalent about the promise of artificial intelligence: 4 in 10 support the development of AI, 3 in 10 oppose it, and opinions are divided about whether AI will be a net good (4 in 10) or a net harm (4 in 10).

Australians support regulatory and non-regulatory action to address risks from AI

When asked to choose the top 3 AI priorities for the Australian Government, the #1 selected priority was preventing dangerous and catastrophic outcomes from AI. Other actions prioritised by at least 1 in 4 Australians included (1) requiring audits of AI models to make sure they are safe before being released, (2) making sure that AI companies are liable for harms, (3) preventing AI from causing human extinction, (4) reducing job losses from AI, and (5) making sure that people know when content is produced using AI.

Almost all (9 in 10) Australians think that AI should be regulated by a national government body, similar to how the Therapeutic Goods Administration acts as a national regulator for drugs and medical devices. 8 in 10 Australians think that Australia should lead the international development and governance of AI.

Australians take catastrophic and extinction risks from AI seriously

Australians consider the prevention of dangerous and catastrophic outcomes from AI the #1 priority for the Australian Government. In addition, a clear majority (8 in 10) of Australians agree with AI experts, technology leaders, and world political leaders that preventing the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war 1.

Artificial intelligence was judged the third most likely cause of human extinction, after nuclear war and climate change, and more likely than a pandemic or an asteroid impact. About 1 in 3 Australians think it’s at least ‘moderately likely’ that AI will cause human extinction in the next 50 years.

Implications and actions supported by the research

Findings from SARA show that Australians are concerned about diverse risks from AI, especially catastrophic risks, and expect the Australian Government to address these through strong governance action.

Australians’ ambivalence about AI and their expectation of strong governance action to address risks are consistent themes of public opinion research in this area 2–4.

Australians are concerned about a broader range of AI risks than the Government is addressing

The Australian Government published an interim response to its Safe and Responsible AI consultation 5. As part of its interim response, the Government plans to address known risks and harms from AI by strengthening existing laws, especially in areas of privacy, online safety, and mis/disinformation. 

Findings from SARA show that some Australians are concerned about privacy, online safety, and mis/disinformation risks, so government action in these areas is a positive step. However, the risks that Australians are most concerned about are not a focus of the Government’s interim response. These priority risks include AI systems being misused or accidentally acting in ways that harm people, AI-enabled cyber attacks, and job loss due to AI. The Government must broaden its consideration of AI risks to include those identified as high priority by Australians.

Australians want Government to establish a national regulator for AI, require pre-release safety audits, and make companies liable for AI harms

The Government plans to develop a voluntary AI Safety Standard and voluntary watermarking of AI-generated materials. Findings from SARA show that Australians support stronger Government action, including mandatory audits to make sure AI is safe before release 6 and making AI companies liable for harms caused by AI 7. Australians show strong support for a national regulatory authority for AI; this support has been consistently high since at least 2020 4. To meet expectations, Government should establish a national regulator for AI and implement strong action to limit harms from AI.

Australians want Government action to prevent dangerous and catastrophic outcomes from frontier and general-purpose models

In its interim response, the Government described plans to establish mandatory safeguards for ‘legitimate, high-risk settings’ to ‘ensure AI systems are safe when harms are difficult or impossible to reverse’, as well as for ‘development, deployment and use of frontier or general-purpose models’. 

Findings from SARA indicate that Australians want the Government, as its #1 priority action, to prevent dangerous and catastrophic outcomes from AI. Frontier and general-purpose models carry the greatest risk of catastrophic outcomes 8, and are also advancing in capabilities without clear safety measures. Australians believe preventing the risk of extinction from AI should be a global priority. To meet Australians’ expectations, Government must ensure it can understand and respond to emergent and novel risks from these AI models.

Research context and motivation

The development and use of AI technologies is accelerating. Across 2022 and 2023, new large-scale models were announced monthly and achieved increasingly complex and general tasks 9; this trend has continued in 2024 with Google DeepMind Gemini, OpenAI Sora, and others. Experts in AI forecast that the development of powerful AI models could lead to radical changes in wealth, health, and power on a scale comparable to the nuclear and industrial revolutions 10,11.

Addressing the risks and harms from these changes requires effective AI governance: forming robust norms, policies, laws, processes and institutions to guide good decision-making about AI development, deployment and use 12. Effective governance is especially crucial for managing extreme or catastrophic risks from AI that are high impact and uncertain, such as harm from misuse, accident or loss of control 8.

Understanding public beliefs and expectations about AI risks, and about possible responses to those risks, is important for ensuring that the ethical, legal, and social implications of AI are addressed through effective governance. We conducted the Survey Assessing Risks from AI (SARA) to generate ‘evidence for action’ that helps public and private actors make the decisions needed for safer AI development and use.

About the Survey Assessing Risks from AI

Ready Research and The University of Queensland collaborated to design and conduct the Survey Assessing Risks from AI (SARA). This briefing presents topline findings. Visit the website or read the technical report for more information on the project, or contact Dr Alexander Saeri (a.saeri@uq.edu.au). 

Between 18 January and 5 February 2024, The University of Queensland surveyed 1,141 adults living in Australia online, using the Qualtrics survey platform. Participants were recruited through the Online Research Unit's panel, with nationally representative quota sampling by gender, age group, and state/territory. Multilevel regression with poststratification (MRP) was used to create Australian population estimates and confidence intervals, using 2021 Census information about sex, age, state/territory, and education. The research project was reviewed and approved by UQ Research Ethics (Project 2023/HE002257).
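
For readers unfamiliar with MRP, the sketch below illustrates the general approach: fit a multilevel model to the survey responses, then reweight the model's predictions by census cell counts to estimate population-level agreement. This is a minimal illustration only, using the lme4 package and hypothetical data frames and variable names (survey_data, census_cells, supports_regulator, n_people); the actual SARA analysis may differ, for example by using a Bayesian model to obtain confidence intervals.

```r
# Minimal MRP sketch (hypothetical data and variable names, not the SARA analysis code).
library(lme4)

# Step 1: fit a multilevel model of a survey outcome on demographic predictors,
# with random intercepts for the grouping variables used in poststratification.
fit <- glmer(
  supports_regulator ~ sex + (1 | age_group) + (1 | state) + (1 | education),
  data   = survey_data,    # individual-level survey responses (hypothetical)
  family = binomial
)

# Step 2: predict the outcome for every demographic cell in a census
# poststratification frame, then weight each cell's prediction by its population count.
census_cells$pred <- predict(
  fit, newdata = census_cells, type = "response", allow.new.levels = TRUE
)

population_estimate <- with(census_cells, sum(pred * n_people) / sum(n_people))
```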

This project was funded by the Effective Altruism Infrastructure Fund.

References

  1. Center for AI Safety. Statement on AI risk. https://www.safe.ai/statement-on-ai-risk (2023).
  2. Lockey, S., Gillespie, N. & Curtis, C. Trust in Artificial Intelligence: Australian Insights. https://espace.library.uq.edu.au/view/UQ:b32f129 (2020) doi:10.14264/b32f129.
  3. Gillespie, N., Lockey, S., Curtis, C., Pool, J. & Akbari, A. Trust in Artificial Intelligence: A Global Study. https://espace.library.uq.edu.au/view/UQ:00d3c94 (2023) doi:10.14264/00d3c94.
  4. Selwyn, N., Cordoba, B. G., Andrejevic, M. & Campbell, L. AI for Social Good - Australian Attitudes toward AI and Society. https://bridges.monash.edu/articles/report/AI_for_Social_Good_-_Australian_Attitudes_Toward_AI_and_Society_Report_pdf/13159781/1 (2020) doi:10.26180/13159781.V1.
  5. Safe and Responsible AI in Australia Consultation: Australian Government’s Interim Response. https://consult.industry.gov.au/supporting-responsible-ai (2024).
  6. Shevlane, T. et al. Model evaluation for extreme risks. arXiv [cs.AI] (2023) doi:10.48550/arXiv.2305.15324.
  7. Weil, G. Tort Law as a Tool for Mitigating Catastrophic Risk from Artificial Intelligence. (2024) doi:10.2139/ssrn.4694006.
  8. Anderljung, M. et al. Frontier AI Regulation: Managing Emerging Risks to Public Safety. arXiv [cs.CY] (2023) doi:10.48550/arXiv.2307.03718.
  9. Maslej, N. et al. Artificial Intelligence Index Report 2023. arXiv [cs.AI] (2023).
  10. Grace, K., Stein-Perlman, Z., Weinstein-Raun, B. & Salvatier, J. 2022 Expert Survey on Progress in AI. https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/ (2022).
  11. Davidson, T. What a Compute-Centric Framework Says about Takeoff Speeds. https://www.openphilanthropy.org/research/what-a-compute-centric-framework-says-about-takeoff-speeds/ (2023).
  12. Dafoe, A. AI governance: Opportunity and theory of impact. https://www.allandafoe.com/opportunity (2018).


 

Comments



Great that you got it into The Conversation! And appreciate the key takeaways box at the start here

Nice! I was surprised that more present-day harms were not more front of mind for respondents (e.g. job losses, AI pornography, and racial and gender bias were far below preventing catastrophic outcomes). Interesting.

Just FYI, the link at the top doesn't work for me.

Thanks Peter. Fixed!

Thanks for sharing. This is a very insightful piece. I'm surprised that folks were more concerned about larger-scale abstract risks compared to more well-defined and smaller-scale risks (like bias). I'm also surprised that they are this pro-regulation (including a six-month pause). Given this, I feel a bit confused that they mostly support the development of AI, and I wonder what has most shaped their views.

Overall, I mildly worry that the survey led people to express more concern than they feel, because this seems surprisingly close to my perception of the views of many existential risk "experts". What do you think?

Would love to see this for other countries too. How feasible do you think that would be?

Thanks Seb. I'm not that surprised—public surveys in the Existential Risk Persuasion tournament were pretty high (5% for AI). I don't think most people are good at calibrating probabilities between 0.001% and 10% (myself included).

I don't have strong hypotheses why people 'mostly support' something they also want treated with such care. My weak ones would be 'people like technology but when asked about what the government should do, want them to keep them safe (remove biggest threats).' For example, Australians support getting nuclear submarines but also support the ban on nuclear weapons. I don't necessarily see this as a contradiction—"keep me safe" priorities would lead to both. I don't know if our answers would have changed if we made the trade-offs more salient (e.g., here's what you'd lose if we took this policy action prioritising risks). Interested in suggestions for how we could do that better.

It'd be easy for us to run this in other countries. We'll put the data and code online soon. If someone's keen to run the 'get it in the hands of people who want to use it' piece, we could also do the 'run the survey and write a technical report' one. It's all in R, so the marginal cost of another country is low. We'd need access to census data to do the statistical adjustment to estimate population agreement (but it should be easy to check whether that's possible).

Thanks. Hmm. The vibe I'm getting from these answers is P(extinction) > 5% (which is higher than the XPT results you linked).

Ohh that's great. We're starting to do significant work in India and would be interested in knowing similar things there. Any idea of what it'd cost to run there?

I'll look into it. The census data part seems okay. Collecting a representative sample would be harder (e.g., literacy rates are lower, so I don't know how to estimate responses for those groups).

That makes sense. We might do some more strategic outreach later this year where a report like this would be relevant, but for now I don't have a clear use case in mind for this, so it's probably better to wait. Approximately how much time would you need to run this?

Our project took approximately 2 weeks FTE for 3 people (most was parallelisable). Probably the best reference class.

Very helpful. I'll keep it in mind if the use case/need emerges in the future.

Executive summary: A survey of Australians found high levels of concern about risks from AI, especially catastrophic risks, and strong support for government action to regulate AI and prevent dangerous outcomes.

Key points:

  1. Australians are most concerned about AI systems acting in unsafe, untrustworthy ways not aligned with human values. Other priority risks include job loss, cyber attacks, autonomous weapons, and infrastructure failures.
  2. Australians are skeptical of AI development overall, with opinions divided on whether it will be net positive or negative.
  3. Preventing dangerous and catastrophic outcomes from AI is seen as the #1 priority for Australian government action on AI. Other priorities include mandatory safety audits, corporate liability for harms, and preventing human extinction.
  4. 90% support a national government body to regulate AI, and 80% think Australia should lead international AI governance.
  5. AI is seen as a major existential risk, judged as the 3rd most likely cause of human extinction after nuclear war and climate change. 1 in 3 think AI-caused extinction is at least moderately likely in the next 50 years.
  6. The findings suggest the Australian government should broaden its AI risk considerations, establish a national AI regulator, require safety audits and corporate liability, and prioritize preventing catastrophic risks from frontier AI systems.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
