If you want to communicate AI risks in a way that increases concern,[1] our new study suggests you should probably use vivid stories, ideally with identifiable victims.

Tabi Ward led this project as her honours thesis. In the study, we wrote[2] short stories about different AI risks like facial recognition bias, deepfakes, harmful chatbots, and design of chemical weapons. For each risk, we created two versions: one focusing on an individual victim, the other describing the scope of the problem with statistics.

We had 1,794 participants from the US, UK and Australia[3] read one of the stories, and measured their concern about AI risks before and after. Reading any of the stories increased concern, but the ones with identifiable individual victims increased concern significantly more than the statistical ones.
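
To make the design concrete, here is a minimal sketch in Python of the pre/post comparison described above. It uses simulated stand-in data and illustrative column names (condition, concern_pre, concern_post); it is not the study's actual analysis code, which is on our OSF repository.

```python
# Minimal sketch of the pre/post, between-conditions comparison.
# All data are simulated and column names are illustrative.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)
n = 1794  # sample size reported in the post
df = pd.DataFrame({
    "condition": rng.choice(["identifiable", "statistical"], n),
    "concern_pre": rng.normal(4.0, 1.0, n),
})
# Simulated effect: both story types raise concern, identifiable a bit more
bump = np.where(df["condition"] == "identifiable", 0.6, 0.4)
df["concern_post"] = df["concern_pre"] + bump + rng.normal(0, 0.5, n)
df["change"] = df["concern_post"] - df["concern_pre"]

# Did reading any story increase concern? (within-participant, pre vs post)
print(stats.ttest_rel(df["concern_post"], df["concern_pre"]))

# Did identifiable-victim stories increase concern more than statistical ones?
print(stats.ttest_ind(df.loc[df["condition"] == "identifiable", "change"],
                      df.loc[df["condition"] == "statistical", "change"]))
```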

Why? The individual victim stories were rated as more vivid by participants. A mediation analysis found the effect of identifiable victims on concern was explained by the vividness of the stories:

The total effect of scenario victim on post-test concern about risk was significant, b = -.06, HC3-corrected SE = .03, p = .016. This indicated that risk concerns associated with statistical victims (coded as '2') were lower compared to identifiable victims (coded as '1'). The total indirect effect via both mediators was also significant, b = -.04, HC3-corrected SE = .01, LLCI -.06, ULCI -.02. However, this finding was driven almost entirely by the significant indirect effect of vividness (b = -.04, HC3-corrected SE = .01, LLCI -.06, ULCI -.03), while perceived control did not appear to contribute significantly (b = .002, HC3-corrected SE = .003, LLCI -.01, ULCI .01). These results show that stimuli using an identifiable victim appear to have led to increased concerns about risks because the risk appeared more vivid, rather than because it felt more controllable. The direct effect of scenario victim on post-test concern about risk, with vividness and perceived control partialled out, was not significant, b = -.02, HC3-corrected SE = .03, p = .422. The mediation model results suggested the effect of scenario victim on concern about risk was fully mediated by vividness.
Stories about AI risk with identifiable victims (Victim type = 0) were rated as more vivid than stories about groups of people (statistical victims, Victim type = 1).
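
For readers curious about the shape of this analysis, below is a minimal sketch in Python of a simple mediation model with HC3-robust path estimates and a percentile-bootstrap confidence interval for the indirect effect. It uses simulated stand-in data and a single mediator (vividness) for brevity; the actual analysis was run in SPSS with two mediators (vividness and perceived control), and the real data and code are on our OSF repository.

```python
# Sketch of a simple mediation analysis: condition -> vividness -> concern.
# Simulated data; coding matches the figure (0 = identifiable, 1 = statistical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
victim_type = rng.integers(0, 2, n)
vividness = 4.0 - 0.5 * victim_type + rng.normal(0, 1, n)    # identifiable -> more vivid
concern_post = 3.0 + 0.6 * vividness + rng.normal(0, 1, n)   # more vivid -> more concern
df = pd.DataFrame({"victim_type": victim_type,
                   "vividness": vividness,
                   "concern_post": concern_post})

# Path a: condition -> mediator; path b and direct effect c': mediator + condition -> outcome
a = smf.ols("vividness ~ victim_type", df).fit(cov_type="HC3").params["victim_type"]
model_b = smf.ols("concern_post ~ victim_type + vividness", df).fit(cov_type="HC3")
b = model_b.params["vividness"]
print("indirect effect (a*b):", a * b)
print("direct effect (c'):", model_b.params["victim_type"])

# Percentile-bootstrap 95% CI for the indirect effect
boot = []
for _ in range(1000):
    s = df.sample(len(df), replace=True)
    a_s = smf.ols("vividness ~ victim_type", s).fit().params["victim_type"]
    b_s = smf.ols("concern_post ~ victim_type + vividness", s).fit().params["vividness"]
    boot.append(a_s * b_s)
print("95% CI:", np.percentile(boot, [2.5, 97.5]))
```

The key quantity is the product of the a-path (condition to vividness) and b-path (vividness to concern), which corresponds to the indirect effect reported in the quoted results.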

This finding aligns with prior research on "compassion fade" and the "identifiable victim effect": people tend to have stronger emotional responses and helping intentions towards a single individual in need than towards larger numbers or statistics. Our study extends this to the domain of risk perception.

Communicating about the harms experienced by identifiable victims is a particular challenge for existential risks. These AI risks are defined by their scale and their 'statistical victim' nature: they could affect billions, but have not yet occurred. Nevertheless, those trying to draw attention to these concerns should try to make the risks vivid. In our study, the most compelling narrative was one with an identifiable victim of an AI-designed 'nerve agent', but it was a hypothetical future story (not a real one drawn from a news report, like the others).

This might influence the way people communicate about AI. For example, when trying to increase concern, it might be hard for a reader to imagine how the following request to an AI is dangerous:

Take this strawberry, and make me another strawberry that's identical to this strawberry down to the cellular level, but not necessarily the atomic level.

Instead, our results suggest it may be more effective to use compelling analogies that are easier to imagine:

Since these AI systems can do human-level economic work, they can probably be used to make more money and buy or rent more hardware, which could quickly lead to a "population" [of AIs] of billions or more.

The takeaway: if you're trying to highlight the potential risks of AI development, vivid stories may be an effective approach, particularly if they put a human face to the risks. This suggests that the behavioural economics of risk communication also applies to AI risks.

  1. ^

    Obviously this can go too far, as the mini-series Chernobyl may have done for nuclear power. We're not making judgements about whether or not increasing concern is 'good', but pointing to effects that influence perception.

  2. ^

    We used ChatGPT 4 (June 2023) to generate initial versions of stories from prompts, then edited for consistency. In social science research, it's important that these stories (called 'vignettes') are carefully controlled for length, tone, etc. - everything except the key variable under investigation. View all the stories on our Open Science Framework repository.

  3. ^

    These were members of the general public recruited through Prolific, not necessarily representative of key decision-makers for AI safety. However, we have found anecdotally that public beliefs about AI risks and expectations can influence decision-makers.

Comments



Great job!

Did you use causal mediation analysis, and can you share the data?

I want to note that the strawberry example wasn't used to increase concern; it was used to illustrate the difficulty of a technical problem deep into the conversation.

I encourage people to communicate in vivid ways while being technically valid and creating correct intuitions about the problem. The concern about risks might be a good proxy if you’re sure people understand something true about the world, but it’s not a good target without that constraint.

Yes, all data and code are on the OSF, but to my chagrin they're in SPSS. Yes, good observation re the strawberry, and the imprecision around the outcome.
