
Disclaimer:

This project came out of a Summer Research Project with the Existential Risk Alliance. I am fairly confident in my analysis of the history of GM (Genetically Modified) crops: it is based on 70-80 hours of research into the GM literature, and a survey of six expert historians. I have spent an additional 10 hours reviewing the social mobilization literature to draw lessons about AI. I am less confident about these lessons, in part because of the lack of research into AI protest groups. For disclosure: I have taken part in several AI protests, and have tried to limit the effects of personal bias on this piece.

I am particularly grateful to Luke Kemp for mentoring me during the ERA program, and to Joel Christoph for his help as research manager. I’d also like to thank Alistair Stewart and Javier Torre de Silva for comments on this piece, and Lara Mani for helping me with the academic survey.

1) Executive Summary

Main research questions: 

  1. Based on historical lessons from GM protests, can protests bring about a US-led unilateral pause to AI in the short-term? 
  2. What are the most desirable strategies & messaging for AI protest groups?

Research Significance:

  • Key uncertainties exist surrounding AI existential risk protests: efficacy is a crucial consideration for AI protests, yet little research has been done on them.
  • GMO protests are a useful and relevant analogue for AI: they faced powerful corporate opponents, and GMOs did not show clear ‘warning shots’ (50-60%). GM protests successfully brought about a de-facto moratorium in Europe in the late 1990s.

Findings

Based on historical analysis of GM protests and review of social mobilisation literature, this project finds that: 

There are several reasons to be optimistic about AI protests: 

  • The Public Worries about AI risks: Polls show substantial, stable public concern about AI existential risk, and support for a pause in AI development. 
  • Biotech Lobby > AI Lobby (for now): more resources were probably devoted to GM lobbying than are currently devoted to AI-policy lobbying; however, this may quickly change as Big Tech pivots towards AI. 
  • Small protests can work: Despite small numbers, AI protests have received outsized media coverage, like anti-GMO protests did. 

The key reason hindering AI protests is:

  • Lack of political allies: Few policymakers view AI as an existential risk, and it is unclear whether any currently support a pause.

Key uncertainties:

  • Will there be ‘trigger events’? GMOs saw key trigger events rapidly shift public opinion, which were crucial for Europe’s turn towards more regulation. Socially amplified risks from AI, or symbolic events of injustice (e.g., striking AI workers) might mobilize the public against AI. 
  • Will corporate campaigns be successful? Campaigns trying to directly influence AI firms are unlikely to stop AI development, as developers are less responsive to short-term profit than Biotech firms. There are several cases of public pressure influencing Big Tech decisions. 

Recommendations For Protest Groups: 

  • Emphasize Injustice: reasoned arguments about risk don’t mobilize the public.
  • Look for Allies: Alliances with artists should be explored further; alliances with ‘AI Ethics’ protests seem more challenging. 
  • Don’t Sabotage Chip Production, Consider Disrupting High-Profile AI Events: Expert surveys endorse strategic disruption, but the literature is mixed. Confrontational strategies risk alienating allies in AI labs. 

Conclusions:

  • Protests could increase perceptions of AI existential risk: by shifting media narratives, protests could raise public perceptions of existential risk. 
  • Pausing AI is likely to be challenging in the short-run: protests lack political allies, a key hindrance which outweighs reasons to be optimistic.  

2) Introduction 

In the early 1990s, “[t]he scientific profession, the media, venture capital, and Wall Street were abuzz with possibilities these new ‘recombinant DNA’ technologies held out for generating a whole new industrial frontier and for solving a host of agriculture- and health-related problems. For these enthusiasts, the new biotechnologies offered a novel way to … raise agricultural productivity, and to make better and cheaper medicines, all while representing a potentially enormous source of profit for the firms involved.”[1]

In 2023, we’ve seen the first public protests against the catastrophic risks from AI. Groups like the 'Campaign for AI Safety' and 'PauseAI' advocate for a moratorium on AI models more powerful than GPT-4. Efficacy is a crucial consideration for AI protests, yet no significant research has been done. In addition, from personal conversations, there is significant uncertainty around what strategies and messages AI protests should use. 

This research analyzes the success of GM protests in Europe in the late 1990s, to ask two main questions. Could protests bring about a US-led unilateral pause to AI in the short-term (i.e., the next 5 years)? And what do effective strategies & messaging look like for AI protest groups?

I briefly want to set out why I chose this case study. As the above quote suggests, GM technology resembles AI in some ways: it was seen as a revolutionary and highly profitable technology, which powerful companies were keen to deploy. Furthermore, GMOs did not have any clear ‘warning shots’: high-profile events which demonstrate the potential for large-scale harm. CFCs and nuclear power had such events: the discovery of the ozone hole, and the Fukushima meltdown (along with other nuclear incidents), respectively. AI development has not had a ‘Fukushima’ moment yet, and perhaps it never will, suggesting the GMO case is more relevant. 

GMOs and AI have several differences. First, GMOs had limited geopolitical significance. While different Western governments funded their respective biotechnology industries, this was not perceived as an ‘arms race’, a frame increasingly used to describe AI competition between the US and China. (Other technologies, including nuclear power and nuclear weapons, had clearer geopolitical drivers. I conduct shallow-dives into these cases, and others, here.)

Furthermore, AI regulation may require an international agreement, particularly in the longer term. In contrast, national governments of European member states could dictate biotechnology policy without needing to agree to an international treaty. However, given its strategic control over hardware, the US could impose a temporary, unilateral moratorium on AI development. This is the most plausible scenario that GMOs can shed light on. 

This piece is focused on the effectiveness of protest. It does not delve into whether a pause is desirable, the credibility of AI safety concerns (which appear more legitimate than GMO concerns), nor other reasons for protest such as free expression.

This piece is organized into three main sections: first, I give a brief history of GM crops in Europe (more detail can be found in the appendix); second, I analyze whether the important reasons why GM protests succeeded correspond to the AI case; finally, I set out recommendations for protest groups. 

3) Brief History of GM Crops

(For more detail about GM history, please consult the appendix, found here.)

Background: 1973-1996, Birth of Genetic Engineering technology, Public & Politicians Didn’t Care:

  • In 1973, scientists spliced genes into E. coli bacteria. By 1982, scientists produced the first transgenic plant, and Genetically Modified (GM) crops were first commercialized by 1992. 
  • From the 1980s, activist groups (e.g., the German Green Party, the UK Green Alliance) questioned the safety of GMOs, and tried to persuade policymakers to regulate more.
  • Public Didn’t Care: in the mid-1980s the public was largely unaware of what GM technology was.
  • Politicians Didn’t Care: there were essentially no regulations on GMOs in Europe until 1990.

In the late 1990s, this all changed dramatically – in three stages. 

Stage 1: Public Perceptions Shift Rapidly: 1996-1997  

  • Public opposition to GMOs in Europe rose dramatically from 1996-1999, with double-digit increases in many countries.
  • This coincided with key 'trigger events': the outbreak of Mad Cow Disease and the arrival of GM crops in March 1996, and the cloning of Dolly the Sheep in February 1997.
  • ‘Trigger events’ were not logically connected to GM food. This period coincided with increased anti-GM mobilization and expanded protest tactics by NGOs.
  • The quantity and negative tone of media coverage on GMOs increased during this period. Media coverage likely amplified public perceptions of risk.

Stage 2: National Policy Changes: 1997-1999

  • In 1997, many European countries enacted unilateral bans on GMOs which had been approved by the EU.
  • Corporate campaigns forced all major supermarkets to remove GMOs from their products between March 1998 and spring 1999.
  • Anti-GM protestors had elite allies (e.g., Green Parties) in national governments in many countries. This likely aided their policy success.
  • Decentralization of biotech policy was crucial - national governments could set their own rules.
  • There was a broad anti-GMO coalition of NGOs, consumer groups, religious groups, and farmers. There was not an ‘unholy alliance’ of environmentalists and protectionists (in farming and biotechnology). NGOs overwhelmed the vested interests who opposed them.

Stage 3: Strict Europe-Wide Regulation 1999-2001

  • Europe saw a de facto moratorium on new GMO approvals from 1999-2002 and tightened labeling laws, with long-lasting effects: by 2018, less than 0.1 million hectares of GM crops were grown annually in Europe, versus >70 million hectares in the US.
  • This 'ratcheting up' was driven by a patchwork of national regulations that led firms to support EU harmonization.
  • Decentralization allowed more stringent countries to block EU-level decisions, preventing downward harmonization.
  • Continued NGO campaigns increased issue salience and opposition.
  • Broader EU precautionary culture was relevant but differences from the US may be overstated.
  • Decentralization again stands out as a crucial factor enabling national policy shifts to drive Europe-wide ratcheting up.

4) Can Protests Pause AI? 

In this section, I analyse whether the factors which enabled GM protests to succeed are also present in the AI case. These factors are organized into ‘reasons for optimism’ (suggesting that AI protests might be effective now), ‘reasons for pessimism’ (hindrances for AI protests which may not change anytime soon), and ‘key uncertainties’. 

A) Reasons for Optimism

I) The public is worried 

Protests are more likely to influence policymakers if prior public opinion is supportive[2]. In the GMO case, many European countries had high levels of pre-existing opposition to GMOs before the ‘trigger events’ of 1996/1997. Opposition was particularly pronounced in the 5 countries which essentially held the European Commission hostage in June 1999 by refusing to approve any new GMOs: the “hardest blow” to GM-friendly regulation.

Similarly, the public is worried about AI development. Around 45% of Americans think that AI could cause human extinction.[3] Over 50% of Americans would support a pause to some kinds of AI development.[4]

However, the public is more worried about other hazards, including nuclear weapons, world war, climate change, and pandemics. In terms of perceived risk, AI ranked only slightly above natural disasters like ‘asteroids’ and ‘acts of God’.[5] Across multiple polls, other hazards tend to be viewed as riskier than AI. Opposition is not yet overwhelming compared to other issues.

Another concern is whether opposition to AI development is stable. Polls might be biased by different ‘framing effects’, as established in other areas (e.g. government spending). When reviewing different AI surveys, AI Impacts found “substantial differences between responses to similar questions”. For example, Morning Consult (2017) found that 50% of Americans believed that AI is humanity's greatest existential threat, contradicting results from the surveys mentioned above. Other inconsistencies are present on AI timelines, AI and jobs, and levels of positivity towards AI. However, unlike in other areas, support for a pause seems fairly resistant to different framings, as YouGov (April 2023) found. 

Another concern is how ‘deep’ opposition to AI development is: would the public’s concern about AI development make them write to their MP, change their voting behavior, or incur some personal costs? One survey suggests that AI is the most important political issue for less than 0.5% of Americans. Perhaps, then, opposition to AI should be disregarded as shallow?

No. Alongside AI, less than 0.5% of Americans view Medicare or Social Security as their top priority. Does the public care about Medicare or Social Security? Studies on willingness to pay for regulation would reveal how deep concern for AI is: at present, it is unclear. 

Overall, the public is pessimistic about AI development, and broadly supportive of more caution, even if we can’t be sure how deep this concern runs. This pre-existing support is beneficial for AI protests.

II) Media Narratives Matter

An increasingly hostile media environment was crucial to the rapid shift in European public perceptions against GMOs in the late 1990s. Nearly every EU country saw GMO opposition rise from 1996 to 1999, most by double digits: France went from 46% opposed to 65%, Greece from 51% to 81%, Britain from 33% to 51%. The ‘dramatization’ of GMOs in newspapers, emphasizing their risks and de-emphasizing their benefits, made readers more opposed. Media tone was likely more significant than media quantity, which, when considered alone, probably did not affect perceptions.

The same may be true for AI: media tone is crucial. Experimental research from the Existential Risk Observatory suggests that exposure to media coverage of AI existential risk increases support for a pause. 

In contrast, the sheer quantity of AI news, alone, does not make people more skeptical. While coverage of AI has increased substantially over the past decade, perceptions have not changed much. We might expect the release of ChatGPT to have changed this: anecdotally, existential risk seems to be in the news more, and people are speaking about it more. Yet this doesn’t appear in the data. One study, which uses the content of AI-related posts on Twitter/X as a proxy for public opinion, found that the post-ChatGPT period saw heightened awareness but no significant shift in sentiment towards AI. Similarly, YouGov’s ‘Robot Tracker’ has not shown significant shifts in public perceptions in any of its three areas: future robot intelligence levels, optimism regarding the ability to acquire new skills if automation leads to redundancy, and the perceived risks of lifetime unemployment due to robots.

Public perceptions have not substantially shifted, in part, because prominent narratives about AI in the media are generally positive. ‘Apocalyptic narratives’ – those emphasizing existential risk – made up around 5% of news articles until 2020. More common themes emphasized the everyday benefits of AI (39%). As with perceptions of nanotechnology in the 2000s and self-driving cars between 2017-2018[6], positive media tone outweighed the negative effect of increased attention. 

However, high profile AI protests could shift the media narrative towards existential risk, and thus push public perceptions further in their favor.

III) Messaging AI-risk isn’t hard

A reason for pessimism I considered was that the existential risk from AI might be significantly more difficult to message than the risks from GM crops: first, because pausing AI would impose higher costs on consumers; and second, because AI development is inevitably perceived as more natural than GMOs. 

The first claim is that, unlike restricting GMOs, pausing AI may impose high costs on consumers. A key factor explaining the public’s resistance to GM technology was that it failed to offer clear benefits to them. GM foods brought marginal cost reductions but offered few other benefits. Similar genetic technology, when used in a medical context, offered much greater personal benefits and enjoyed greater public support. In contrast, AI offers tangible benefits, and is used by over a quarter of the public on a daily basis. In an analysis of tweets, one study found that occupations with high AI exposure expressed more positivity about AI – with illustrators being the one exception. If consumers genuinely value AI – the more they use it, the more positive they are – then perhaps a pause would impose high costs on consumers. 

However, restrictions on training future models wouldn’t necessitate banning the current AI systems that consumers use. Further, just because occupational exposure correlates with positivity, this does not mean the same is true at an individual level. Polling from YouGov actually found the opposite: people who use AI tools more often have greater levels of concern about AI. I am skeptical, therefore, that pausing AI would entail great personal sacrifices from consumers.

Another worry about messaging AI risk is that AI development will be perceived as more ‘natural’ than GMOs. The development of GMOs was seen as a unique change in humanity’s relationship with nature: developing “Frankenfoods”, to take a common anti-GM slogan, disrupted the quasi-religious ‘natural’ order. In contrast, continued AI development is, well, a continuation: another step in a completely man-made chain of digital innovation.[7]

Finally, whilst activists believed that GM crops posed tangible near-term risks – toxicity to humans and animals, damage to ecosystems – AI protest groups tend to emphasize AI’s future risks.

These last two problems are notable and provide strong motivations for protest groups to pay particular attention to messaging effectively (see below). However, they are not insurmountable: if AI risk were impossible to communicate, why is such a large portion of the public concerned?

IV) Small Protests can Work

Small groups of anti-GM protestors, who took over Monsanto’s headquarters dressed as superheroes or who protested naked at the World Food Summit, gained significant media attention. I have not found any cases of large-scale mobilizations against GMOs in the late 1990s involving tens of thousands of people. 

Similarly, small-scale AI protests have received outsized media attention: for example, one protest in May 2023, when UCL hosted Sam Altman, CEO of OpenAI, was covered in many news outlets[8]. The strategic timing of this protest was key: Altman was on a ‘world tour’ to discuss AI, had met with Rishi Sunak the previous day, and media outlets wanted to cover AI policy anyway.  

While building larger protest movements is undoubtedly helpful, the GM case shows that large protests are not necessary for widespread impacts on public opinion and policy. This is not to suggest that small protests are always effective. It also indicates that other factors external to protest groups (e.g., trigger events, political allies, pre-existing public opinion, allies within media organizations) are important. However, if you believe that AI protests are doomed to fail simply because they are small-scale, the GM case shows a clear counterexample. 

V) Biotech Lobby > AI Lobby

The biotechnology industry organized to support GMOs in Europe, but it was overcome by activist campaigns. This holds an important lesson for AI policy today.

In the late 1990s, the GM seed market had suddenly become extremely lucrative, with global sales reaching $2.3 billion in 1999 – up from roughly $75 million in 1995. The GM sector was dominated by a few powerful firms: Monsanto, Syngenta, and Aventis held roughly 50% market share. Monsanto had a huge market capitalization of over $25 billion: over $47 billion in today's terms.
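
(A rough check on that inflation adjustment, assuming cumulative US CPI inflation of roughly 87% between 1998 and 2023 – my own assumption, not a figure from the sources above:

$\$25\text{ billion} \times 1.87 \approx \$47\text{ billion}$.)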

How does the current ‘AI market’ compare? It is difficult to find equivalent figures for global sales from ‘frontier AI’ systems. The market capitalization of Big Tech firms working on ‘frontier AI’ systems is huge: almost $5 trillion for Microsoft, Google, and Amazon combined. (Individual AI labs are much smaller: OpenAI and Anthropic are valued at around $30 billion and $5 billion respectively.) These figures suggest that Big Tech has a much larger lobbying potential than the biotechnology industry had in the 90s. Indeed, technology firms spend over €97 million annually on lobbying in Europe, and spent $70 million in the US in 2021. 

Lobbying has extended to AI regulation. Google, Microsoft and OpenAI have all pushed to water down the EU AI Act. Over half of the expert group advising the European Commission were industry representatives, and firms have held private meetings with regulators. OpenAI lobbied for its systems like GPT-3 and DALL-E not to be designated high-risk, and several of its proposed amendments made it into the final legislation. In July 2023, major AI firms formed a "Frontier Model Forum". Spokespeople have emphasized that this is not a lobbying organization – contrary to some commentators’ suggestions.

However, AI lobbying probably does not yet exceed the levels of Biotech spending in the late 1990s. In 1998, Monsanto spent $5 million on a single ad campaign in Europe. In the US, biotech companies spent over $140 million on lobbying between 1998 and 2003. I doubt Big Tech and AI companies are spending at these levels on AI lobbying. This may well change in the future. Corporate investment in AGI is booming. OpenAI has three open policy roles at the time of writing, and Sam Altman has suggested that OpenAI may try to raise as much as $100 billion, making OpenAI “the most capital-intensive startup in Silicon Valley history”. 

However, for now, AI lobbying is likely not more extensive than the powerful Biotechnology Lobby of the 1990s: a lobby which activists in Europe overwhelmed. 

B) Reasons for Pessimism

I) Who are the Political Allies?

Protests against GM crops were undoubtedly helped by elite political allies – MPs from different parties (Greens & Christian Democrats) who were both in government and sympathetic to protestors’ contentions.

Many elite allies take the existential implications of AI seriously, particularly in the UK. The UK’s Foundation Model Taskforce is chaired by Ian Hogarth, who wrote about AI existential risk in the FT, and is advised by Matt Clifford, who has warned that AI could “kill many humans” in only two years’ time. Two of the four partner organizations for the Taskforce (ARC Evals and the Centre for AI Safety) are explicitly concerned with existential risk. In the US, several of the leading Senators working on AI policy have staffers who are funded by Open Philanthropy, including the three top lieutenants of Senate Majority Leader Chuck Schumer, as well as Sen. Richard Blumenthal. It remains to be seen how much influence these staffers have over final legislation.

It is less clear how many elite allies exist in Europe. The European Commission has declared that “Mitigating the risk of extinction from AI should be a global priority”. The European AI Act includes some restrictions on ‘general-purpose AI’ (GPAI). Some MEPs from the ‘Left’ and ‘Green’ blocs proposed amendments to classify these models as “high risk”. However, it is unclear how much support these amendments gained. Classifying GPAI as ‘high-risk’ might have had support from both the ‘AI ethics’ and ‘AI safety’ communities[9]: it is unclear what motivations these MEPs had. 

However, believing AI is an existential risk might even be anti-correlated with supporting a pause to AI development: if you think that transformative AI systems are coming soon, you might be particularly concerned about China developing them first. Rishi Sunak has advocated for “dramatically speeding up UK AI capability”. Chuck Schumer’s AI framework, “SAFE Innovation”, does not try to slow down AI development: as its name suggests, it promotes ‘ethical’ US-led AI innovation.

Protests calling for a pause to AI development do not have political parties in power allied with their goals – as GM protests did in the late 1990s. This factor clearly impedes their likelihood of achieving a pause to AI development at present. 

II) Messaging and Strategy

AI protests additionally have problems messaging ‘AI existential risk’ in a salient way, lack allies, and lack a diversity of protest strategies. I don’t think these problems are insurmountable, as I set out in the ‘Recommendations’ section. 

C) Key Uncertainties

I) Will there be AI ‘trigger events’?

The GMO case also shows that ‘trigger events’ – in combination with strategic activism – are vitally important for changes in public opinion. Three high-profile events – the outbreak of Mad Cow Disease and the arrival of GM crops in Europe in March 1996, and the cloning of Dolly the Sheep in February 1997 – led to an exponential increase in media coverage, which was increasingly skeptical of GMOs.

The GMO case suggests that, conditional on ‘trigger events’ and protests, regulatory culture can rapidly become much more precautionary – in the space of a few years. More generally, there is no God-given “American” approach to regulation which favors innovation, as opposed to the “European” alternative which prioritizes precaution. From the 1960s to the mid-1980s, the US was more precautionary than Europe on various hazards. A study from 2013, averaging across different technologies, found no significant difference in overall levels of regulatory precaution.

This gives reasons for optimism for AI protests. At present, the regulatory approach to AI is more precautionary in Europe, with bans mandated by the EU AI Act on certain ‘high-risk’ systems. In contrast, the US has no federal AI regulation at present; instead, it has voluntary frameworks and self-assessment tools such as the AI Risk Management Framework and the Blueprint for an AI Bill of Rights. However, the US is not exactly the ‘Wild West’ of AI regulation. Some US states have acted unilaterally to regulate AI[10]. Recently, there was a bipartisan proposal co-sponsored by Josh Hawley which included federal licensing for frontier models. The US Senate seems split on both the timeline for AI regulation and how far it should go: some proposed frameworks have included third-party audits of AI systems, for example. 

In tandem with high-profile ‘trigger events’, AI protests might nudge US regulatory culture in a significantly more cautionary direction, as happened in Europe on GMOs.  

So, what would an AI trigger event look like? A ‘trigger event’, as defined by theorist Bill Moyer, is a “highly publicized, shocking incident... which dramatically reveals a critical social problem to the public in a vivid way.” (A closely related concept is a “moment of the whirlwind”.)

Some would point to the release of ChatGPT in November 2022. But it did not lead to widespread public mobilization. As mentioned above, public sentiment towards AI likely didn’t even shift substantially.

Perhaps there will be genuine ‘trigger events’ in the future. What might they look like?

It is important to distinguish here between a ‘trigger event’, in the context of protest movements, and a ‘warning shot’. The latter term, within AI safety circles, refers to an AI incident which causes or demonstrates the potential for ‘significant harm’ to humanity, short of causing extinction. These ‘warning shots’ might include an AI trying to autonomously engineer a pathogen (actually causing harm), or attempting to hack a data-center (demonstrating harmful potential).

Warning shots could indeed mobilize the public. Consider the meltdown of the Three Mile Island power station in 1979, which demonstrated the risks of nuclear power (a ‘warning shot’), and thus led to large public mobilizations (a ‘trigger event’). If the only plausible events which could lead to public mobilization were ‘warning shots’, that would be worrying: some have suggested that pausing AI seems "highly implausible" without a disaster. Perhaps AI won’t have any high-profile, shocking ‘warning shots’. And if it does, it might be too late.  

However, ‘trigger events’ do not have to be catastrophic disasters. Trigger events are often highly symbolic acts which protest an injustice. Consider the arrest of Rosa Parks in 1955, leading to a community-wide boycott, or the self-immolation of Mohamed Bouazizi in Tunisia, which catalyzed the Arab Spring protests in 2011. Corresponding ‘trigger events’ for AI might include AI safety researchers going on strike to protest existential risk, or large-scale redundancies from automation.

Additionally, ‘trigger events’ which don’t threaten harm can lead to a ‘social amplification’ of risk. The ‘trigger events’ in the GMO case – the cloning of Dolly the Sheep, the outbreak of an unrelated disease – fall squarely into this category. There is some evidence that the public’s perception of AI risk has already been ‘socially amplified’. Risk perceptions have not responded to evidence of harm from AI: e.g. accidents involving driverless cars, or algorithms leading to market crashes. Instead, the public positioning of AI experts (e.g. Stephen Hawking, Geoffrey Hinton) has been the significant mover of public perceptions. 

What might socially-amplified ‘trigger events’ look like for AI? One example might be a new LLM passing some symbolic benchmark, e.g., Mustafa Suleyman’s “Modern Turing Test”[11]. Another might be a breakthrough in ‘Whole Brain Emulation’: like the cloning of Dolly the Sheep, this might represent a ‘Frankenstein moment’ indicative of an ‘unnatural’ relationship with technology. 

Thus, while AI protests have not benefited from any ‘trigger events’ thus far, these might happen in the coming years, prior to any endgame ‘warning shots’. Activism could shift public perceptions of risk before it’s too late. But when these ‘trigger events’ might occur is highly uncertain and out of the control of protest groups.

II) Will Corporate Campaigns Work for AI?

The success of consumer campaigns run by NGOs in Europe was significant for eventual restrictions. Friends of the Earth pressured supermarkets to remove GMOs from their shelves, through leafleting and media campaigns. Iceland abandoned GMOs in March 1998, followed by other major European supermarkets and manufacturers in spring 1999. These victories signaled to policymakers that consumers preferred traditional crops. The importance of corporate campaigns should not be overestimated: they reinforced a pre-existing trend towards stricter regulation of GMOs which started in 1997, with unilateral GMO bans. However, corporate campaigns did, in part, help enable the de-facto GMO moratorium which began in June 1999. 

In contrast, corporate campaigns seem unlikely to stop AI development, for at least three reasons. First, firms are not building models like ChatGPT for near-term profit: OpenAI lost $540 million developing ChatGPT in 2022, and some estimates suggest it will lose over $1 billion in 2023. Second, AI is spread across many uses, making boycotts infeasible. Third, many ‘frontier AI’ models are offered freely, limiting pressure from consumer preference shifts.

However, corporate campaigns could change AI firms’ behaviour: e.g. in adopting additional safety policies. There are several examples of Big Tech firms bowing to public pressure. 

For instance, YouTube adjusted its algorithm in 2019 to prevent right-wing radicalisation, following building public pressure[12]. Preceding its decision, Caleb Cain released a personal testimonial about how he became radicalized via YouTube after dropping out of university. Robert Evans, an investigative journalist, found 15 fascist groups which credited their formation to experiences on YouTube. 

Another notable case is Google's involvement with Project Maven, a Department of Defense project which used AI to distinguish people and objects in drone video. After this project was disclosed in March 2018, 3,000 employees signed an open letter opposing the collaboration, prompting Google to announce in June that it would not renew the Project Maven contract. Thus, even if corporate campaigns cannot bring about a stop to AI development, they might change firms’ behaviour. This remains unclear, though: there have been no high-profile AI corporate campaigns to date.

III) A race-to-the-top in AI regulation?

Decentralization was an important feature of why GM crop protests succeeded in Europe. Decentralized, democratic decision-making gave more ‘entry points’ for activists, allowing protests to wield greater influence in Europe as compared to the centralized, bureaucratic decision-making in the US. Once national policy changed in a few European member states, there was a ‘ratcheting up’: the Commission defaulted to the highest common denominator.

How decentralized is AI regulation? While Italy was able to ban ChatGPT over privacy concerns in 2023, the EU AI Act will centralize AI regulation. Member countries won’t be able to implement more stringent rules on ‘frontier models’ than the provisions set out under the EU AIA[13]. (Perhaps they will be able to do so via the GDPR: I am unsure.) In contrast, AI regulation in the US is currently a patchwork of rules which differ by state and city[14].

Decentralization benefited GM protesters in Europe but may not aid AI protesters. Decentralized US policymaking gives more influence to corporate lobbyists – one study found similar success rates for both corporate and citizen group-run lobbying in Europe (around 60%), versus 40% for citizens and 89% for corporations in the US. 

There is unlikely to be a ‘race-to-the-top’ in AI regulation between different US states. Restrictions on the deployment of ‘frontier models’ at a state level would face many difficulties. For example, after Italy banned ChatGPT, there was a 400% increase in VPN downloads. Montana’s proposed ban on TikTok, due to start in 2024, would be similarly difficult to enforce. Restrictions or bans on the development of ‘frontier models’ are possible but would also have to come from the federal government. If California decided to introduce sweeping restrictions on how ‘frontier models’ are trained, Big Tech companies could simply move their headquarters to Texas, as Oracle and Tesla have done. AI executives have already threatened restrictive jurisdictions: in 2023, Sam Altman threatened to "cease operating" in Europe if the EU's AI Act overregulated the industry.

While decentralization aided GM crop protesters in Europe, it may not benefit AI protesters. Centralized EU policymaking may reduce corporate lobbying power, and ‘ratcheting-up’ of regulation between US states is unlikely. 

5) Recommendations for Protest Groups

There are several areas within messaging and strategy in which AI protests could be more effective. 

A) Rage Against the Machine

Intellectualizing about future risks does not mobilize the public. Unlike GMO opponents, who thought principally in terms of moral acceptability, AI protestors currently focus on risks, using slogans like “10% chance of extinction is too high”. Instead, they should focus on injustice. 

Intellectual arguments do not inspire collective action – they make bad ‘collective action frames’ – because emotions, not arguments, are the ‘moral battery’ which powers collective action. 'Injustice' is ubiquitous across almost all social movements, except religious or self-help groups. The 'collective action frames' of basically all political groups present some wrong, perpetrated by some agent, which requires a solution.

It is fairly obvious that PauseAI is protesting against existential risk from AI, and that their desired solution is a global moratorium. However, injustice also identifies a cause. While heavy rain might be an inconvenient problem with a clear solution (e.g. bring an umbrella), it is not an injustice. In the GM case, protestors targeted firms like Monsanto with phrases like ‘Monsatan’. In contrast, AI protests have been reluctant to target Big Tech, arguing the real fault lies with ‘capitalism’, ‘the AI arms race’, or “Moloch”, a monster symbolic of humanity’s endemic collective action problems. 

However, this is both conceptually wrong and rhetorically ineffective. Blaming ‘Moloch’ ignores the agency that humanity and protestors have over corporations (consider fossil fuels, CFCs, GMOs) and geopolitical incentives (consider nukes and nuclear power). Blaming the ‘AI arms race’ is rhetorically ineffective because it fails to identify a blameworthy agent. By identifying the ‘villains’ of AI development – Big Tech firms – AI protestors could build a stronger shared collective identity and appear more agentic. 

What might an ‘injustice framing’ look like? It might emphasize the hypocrisy of Tech CEOs and politicians who recognize existential risks from AI but want to continue development anyway. Or it might emphasize the unfair and anti-democratic nature of a group of elite CEOs unilaterally deciding to build AGI, on behalf of all of humanity. 

More sympathetic narratives might pose less risk of alienating allies within Big Tech. However, if AI protests are looking to broaden their appeal to the public, they should focus on injustice. As William Gamson wrote: “Abstract arguments about complex indirect and future effects will not forge the emotional linkage even if people are convinced intellectually.”

B) Find Allies 

I) Artists are the Bootleggers

Alliances between activists and vested interests are particularly powerful. In the era of Prohibition, ‘baptists’, who thought that alcohol was immoral, allied with ‘bootleggers’, illicit alcohol vendors. Alliances of ‘baptists and bootleggers’, or moralists and vested interests, have helped enact regulation in countless areas: from recycling in Denmark and Canada, to car emissions control in Germany, to digital policy in Italy. The allies of GM protestors (e.g. farmers) had other important non-economic motivations (see appendix). However, the diversity of the anti-GM coalition, which included consumer groups, concerned scientists, and religious groups, was undoubtedly helpful: diverse protests are more likely to succeed.

Artists could be natural allies for AI protests. Both PauseAI and artists are most concerned about ‘frontier models’, which scrape data from the internet and can be used to generate ‘artistic’ content, thus threatening jobs. Writers and actors, represented by the Writers Guild of America (WGA) and SAG-AFTRA respectively, recently went on strike, with the WGA leader citing this as part of labor's wider struggle against technological change. Artists have started lobbying against generative AI art, forming the European Guild for Artificial Intelligence Regulation in February 2023, and have filed six copyright lawsuits in the US. If these lawsuits are successful, it might become increasingly difficult to train ‘frontier models’: technology firms might have to manually remove artistic content from their training data, as the artists demand[15]. AI existential risk protests could play a pivotal role by offering public support and fundraising.

Allying with artists would help AI protests develop a more diverse coalition, and could achieve significant legal outcomes. 

II) Fighting a Shared Battle?

Perhaps alliances with artists could enable broader coalitions with the ‘AI Ethics’ community – those concerned about the short-term, non-existential risks from AI. These groups might include: the Campaign Against Killer Robots, who protest against autonomous weapons systems; Privacy International, who campaign against the use of AI in facial recognition, targeted advertising, and immigration enforcement; and groups concerned about algorithmic discrimination, such as the Algorithmic Justice League (AJL) and Data for Black Lives (D4BL). 

I am skeptical. Stopping corporations from developing ‘frontier AI’ would not stop the development of autonomous weapons programs, facial recognition technology, or other ‘narrow AI’ systems. Unlike with artists, PauseAI’s goals seem orthogonal to those of the Campaign Against Killer Robots, who demand international regulation of AWSs, and Privacy International, who want AI to be subject to international rights standards. 

Perhaps there is more hope for alliances with algorithmic discrimination groups. In theory, the actions required to reduce both existential and non-existential harms from algorithms are similar: a pause would avert the immediate risks from biased AI systems, could spur greater government scrutiny, and would mitigate future existential risk. The sources of both types of harm are similar: a small group of AI companies. This might suggest that we are fighting a shared battle.

In practice, however, groups like the AJL and D4BL do not advocate for a pause. This disconnect is motivated, in part, by a broader distrust of AI existential risk concerns. The AJL has described concerns about existential risk as “corporate-funded junk science”. The founder of D4BL recently debated against the motion “AI Is an Existential Threat” at the Oxford Union. Trying to create a ‘Big Tent’ in which everyone is squabbling isn’t helpful. Whilst large, diverse protests are more likely to succeed, disunified ones are not. Even if we should be fighting a shared battle, this doesn’t mean coalitions are practically feasible. 

Thus, while alliances with artists might be tractable and strategic, alliances with other AI ethics communities seem more challenging. 

C) Diversify Our Tactics?

An important reason why GM protests succeeded was that they diversified their tactics in the mid-90s, embracing public protests (e.g. protesting international conferences), disruption (e.g. blocking ships), and targeted property destruction (destroying fields of GM crops). 

The AI safety community has expanded its repertoire of tactics to include public protests. This is a welcome step. Should they go further, and start using disruptive tactics, or even sabotage property?

I) Don’t Blow Up a Data Centre 

Violent tactics for AI protests might include sabotaging chip supply chains. AI firms rely heavily on advanced chips from just a few key suppliers like Nvidia, Intel, Qualcomm, and TSMC. Chip production is incredibly delicate: a single hair or a dust particle can ruin an entire chip. Whilst breaking into a chip production facility would be very difficult, blocking ships might be feasible. 

But would they be effective?

A thorough review of the literature from the Social Change Lab suggests that violent tactics are less successful – in shaping policy and public opinion – than non-violent ones. Yet the definition of violence is contested. Some believe that property destruction isn’t ‘violent’. The literature review mentioned above includes protests which are “clearly violent” (e.g. physical harm, rioting)[16]. Targeted property destruction doesn’t seem to fall into this category.

However, the two main reasons why violent tactics are less successful seem particularly prominent in the AI case. First, violence limits levels of participation in protests: movements often have to choose between fringe violence and broad support[17]. This wasn’t true in the GMO case – radicals maintained ties with mainstream NGOs despite crop trashing. But since AI protests are so new, violent tactics could quickly turn away potential activists. Secondly, government repression of a violent movement is less likely to lead to public backlash[18]. Again, this mechanism was weak for GMO crop slashing: many activists were acquitted, and some repeated crop destruction post-release. However, the high strategic importance of chips could lead to a significant clampdown. If the government imprisoned activists for trying to blow up a data center, I doubt there would be much public backlash. 

This suggests that sabotaging chip supply chains would neither be popular nor increase the likelihood of favorable policy. But could it constitute effective ‘direct action’: directly influencing firms’ behavior? 

The destruction of GM crops used in field experiments is a clear example of successful ‘direct action’. The aim was not to gain media attention – activists often destroyed fields in secret[19] – but instead to make future field experiments more expensive. For example, extra security for a GM wheat trial in the UK cost an extra £180,000. These actions contributed to a sudden decline in field trials in Europe, from a peak of 264 in 1997 to 56 in 2002.

I am unconvinced that sabotaging chip supply chains would constitute effective ‘direct action’. Even if chips became scarcer, I doubt this would change firms’ behavior because, as mentioned above, AI firms are not training ‘frontier models’ for short-run profit. 

Thus, I am skeptical that sabotaging chip supply chains would either constitute effective ‘direct action’ or lead to greater levels of public support for AI protests – at least while AI protests are not in the political mainstream. A more effective form of ‘direct action’ might come from within the AI community: e.g. inserting worms to destroy frontier AI models. 

II) Shout at a Conference?

How about disruptive tactics? These might include disrupting major AI conferences (e.g. NeurIPS) or AI summits (e.g. the UK AI Summit). 

One knee-jerk argument against disruptive protests is that they lead to negative press coverage. Haven’t I just said that media narratives are important? However, even if AI protestors are covered negatively, they still might draw attention to AI and existential risk, a neglected narrative in the media.[6] 

Rather than relying on knee-jerk reactions, perhaps we should consult experts. In surveys, experts ranked “strategic non-violent, disruptive tactics” as the most important organizational factor for protest groups. Further, experts believed that disruptive tactics are more effective for issues which already have broad support. As shown above, AI safety has high levels of public support. Additionally, public awareness is fairly high: over 70% of UK adults could give at least a partial explanation of what AI is. 

However, this survey alone does not constitute a formidable endorsement of all disruptive strategies. The framing of “strategic” disruptive protests might imply only the successful, or well-thought-through, use of these tactics. 

The literature on ‘disruptive protests’ is mixed: some studies suggest they can increase public support; others suggest the opposite[20]. Sometimes the literature implies that disruptive tactics are, by definition, unpopular or inconvenient for the public[21]. In contrast, blocking ships containing GMOs was undoubtedly disruptive, but it may not have been unpopular. These tactics led to little backlash from policymakers and seem to have benefitted the GM protests. The same could be true of blocking the entrance to an AI lab. 

In addition, there is emerging empirical evidence that disruptive protests can increase public support for mainstream groups. While this empirical literature is, at present, limited to protests about climate change and animal rights[22], it suggests that the careful use of more disruptive tactics could benefit the mainstream ‘flank’ of the AI safety movement. 

Conversely, more disruptive tactics could alienate AI safety researchers who work at Big Tech companies. This problem makes the AI case unique. GM protestors didn’t have allies within Monsanto; Greta Thunberg does not have many friends at ExxonMobil. Indeed, I doubt any group who have protested multi-billion-dollar companies have prioritized their relationship with corporations over mobilizing public opinion. And given that protest groups are desperate to succeed and adapt their tactics to suit their political environment[23], most strategies are quasi-rational. Is the AI case so unique to justify a more corporate-friendly protest strategy?

I am unsure. More research should be done into the outcomes of non-inconvenient disruptive protests, and into the strategic importance of allies within industry. 

6) Conclusion

In this project, I hoped to answer the following question: Can AI protests help enable a unilateral American pause to AI development in the short-run? 

The experience of GM crops suggests it is possible. Like AI, GMOs were a novel, highly profitable technology, driven by powerful companies. Yet, within the space of only a few years, between 1996 and 1999, public opinion shifted rapidly against GMOs, and Europe enforced a de-facto moratorium on them. Today, only one GM crop is planted in Europe, and the region accounts for 0.05% of total GMOs grown worldwide.

There are several reasons to be optimistic about AI protests. First, as with GM protests, pre-existing public opinion is favorable: the public is worried about AI development. Secondly, AI existential risk narratives are highly neglected in the media, and more coverage could make the public more supportive of a pause; so, as in the GM case, activists could shape the public discourse. Thirdly, small PauseAI protests have already received media attention, and the GM case suggests that mass mobilization is not a necessary condition for a successful protest. Lastly, GMO activists were able to overcome the biotechnology lobby, which likely had more lobbying power than the AI lobby (currently) has. 

To increase their chances of success, AI protests should adapt their messaging and strategies. First, they should de-prioritize esoteric p(doom) arguments and instead emphasize injustice. Second, they should look to build a broader and more diverse coalition by allying with the artists who are filing lawsuits against Generative AI. Thirdly, they should consider carefully whether to adopt more disruptive protests. 

The key factor preventing AI protests from being effective is the lack of political allies for a pause. The current absence of politicians or parties advocating for a pause outweighs other reasons for optimism. A sympathetic government in power is important for protest outcomes[24].

In addition, there are key uncertainties surrounding AI protests. Another crucial factor for GM protests was a series of high-profile trigger events, such as the Mad Cow Disease scandal and the cloning of Dolly the Sheep. We may not need to wait for a ‘warning shot’, like an AI-engineered pathogen, for the public to mobilize: the release of a more capable AI, or symbolic acts like AI workers going on strike, could mobilize the public. Building up organizations in anticipation of future ‘trigger events’ is vital for protests, so that they can mobilize and scale in response – the organizational factor which experts thought was most important for protests. However, whether such ‘trigger events’ will occur anytime soon is uncertain. 

Bringing about a unilateral pause to AI development is going to be challenging in the short-term. Where does this leave us? Does this mean that AI protests are doomed to fail? No. Protest movements can have various other ‘theories of change’, aside from achieving their ultimate goals. 

Firstly, AI protests can shift the ‘Overton Window’ of politically acceptable policies. If the public sees more stories about AI existential risk in the media, they will likely become more supportive of a pause. Other protests, such as the Civil Rights movement and Black Lives Matter, have shifted public discourse[25]. AI protests could do the same, directing public attention towards existential risk. If concern about existential risk rises, a global moratorium on AI development might become a more mainstream political demand, and additional policy asks (e.g., licenses for ‘frontier models’) become more likely to succeed. For example, after Extinction Rebellion called for a ‘net-zero’ target of 2025, the UK government adopted a 2050 pledge. Radical demands bring additional asks into the political mainstream. 

Secondly, greater salience of AI risks could benefit groups demanding companies to adopt specific safety policies. There are several examples of Big Tech firms bowing to public pressure, including YouTube’s changes to its algorithm in 2019 and Google’s cancellation of its ties with Project Maven in 2018. 

Thirdly, greater salience of AI existential risk would help give legitimacy to existing safety initiatives led by governments. Recently, there has been media focus on the influence of billionaires linked to Effective Altruism on AI policy, both in the UK and in the US. If the extensive resources spent by governments on AI safety, like the £100 million set aside for the UK’s Frontier AI Taskforce, are increasingly seen as illegitimate, there might be political pressure to cut them. 

Fourthly, greater public salience would help avoid the “quiet politics” which favors corporate lobbying. OpenAI’s watering down of the EU AI Act would have been significantly less likely if AI existential risk was a more salient topic. 

There is much future work to be done on AI protests, including on fleshing out these four theories of change – ‘Overton Window’ shifts enabling additional safety policies; legitimating existing safety initiatives; changing corporate behaviour; and reducing the power of corporate lobbying. 

Another unanswered question is whether and how protest groups like PauseAI can win over political allies[26].

Further ‘deep dives’ into other protest groups would be particularly interesting. The GM case is limited as an analogue because of its limited geopolitical importance and lack of international governance: research into protests against nuclear power, CFCs, and nuclear weapons would be particularly useful here. 

Finally, a thorough literature review differentiating between publicly inconvenient and non-inconvenient tactics would be useful for answering whether AI protests should adopt more disruptive tactics. 

I hope that this piece can form the start of a serious conversation about the efficacy of AI protests. 


[1] Found in Mohorich (2018). Originally from Rachel Schurman and William Munro, Fighting for the Future of Food (Minneapolis: University of Minnesota Press, 2010). 

[2] See https://www.socialchangelab.org/_files/ugd/503ba4_e21c47302af942878411eab654fe7780.pdf for a literature review. 

See https://www.apollosurveys.org/social-change-and-protests/ for a survey of 120 experts: 64% said that the ‘state of public opinion’ was at least ‘quite important’ for protest movements to influence policymakers. 

[3] YouGov (April 2023) found 47% of US adults were "very concerned" or "somewhat concerned"; Campaign for AI Safety (April 2023) found roughly 44%; YouGov (July 2023) found 46%.

[4] YouGov (April 2023) found 69% either strongly supported or somewhat supported a six-month pause in "some kinds of AI development". Using the same framing, YouGov (July 2023) found 61%; with a slightly different framing, Rethink Priorities (April 2023) found 51%. 

[5] YouGov (July 2023): 46% of people were at least somewhat concerned about AI, versus nuclear weapons (at 66%), world war (65%), climate change (52%), and a pandemic (52%), an act of God (42%) and an asteroid impact (37%). From AI Impacts: Rethink Priorities’ US public opinion of AI policy and risk (2023) asked a similar question, giving respondents a list of 5 possible threats plus “Some other cause.” AI ranked last, with just 4% choosing it as most likely to cause human extinction. Public First asked about 7 potential dangers over the next 50 years; AI was 6th on worry (56% worried) and 7th on “risk that it could lead to a breakdown in human civilization” (28% think there is a real risk). Fox News asked about concern about 16 issues; AI was 13th (56% concerned).

[6] From the Sentience Institute: https://www.sentienceinstitute.org/gm-foods#ftnt84: “For example, self-driving cars received a great deal of press attention from 2017 to 2018, much of it positive (this period predates the 2018 self-driving deaths associated with Uber and Tesla). American Automobile Association polling spanning that period indicated that “63 percent of U.S. drivers report feeling afraid to ride in a fully self-driving vehicle [in early 2018], a significant decrease from 78 percent in early 2017.”

[7] Thank you to Franz Seifert for this phrase, and for his comments on this section. 

[8] BBC, Time Magazine, The Verge, US Today News, Fortune, among others. Author disclosure: I was at these protests.

[9] See signatories to the policy brief from AI Now Institute

[10] Illinois bans uncontrolled use of AI video interviews, Vermont banned police use of facial recognition, and New York requires impact assessments (Nature, 2023). 

[11] https://www.technologyreview.com/2023/07/14/1076296/mustafa-suleyman-my-new-turing-test-would-see-if-ai-can-make-1-million/: Go make $1 million on a retail web platform in a few months with just a $100,000 investment.

[12] See Acemoglu and Johnson, Power and Progress (2023): p362, p378

[13] Member states will only be able to enforce more stringent regulation than the AIA in biometric identification. 

[14] For example, Illinois requires firms to announce and explain the use of AI to analyze employment interviews, and Massachusetts has introduced privacy and transparency restrictions on Generative AI. This may change in the future with federal regulation. 

[15] https://www.egair.eu/: “Any data related to people or works, in any form, be it digital data – such as text files, audios, videos or images – or captured from reality by camera, microphones or any other mean of registration, shall not be used to train AI model without the explicit and informed consent of its owner.”

[16] From a literature review by Social Change Lab (2022): many studies only focus on 'violence' defined as clashes between protestors and police, general property destruction like broken windows (Wouters, 2019; Budgen, 2020), or clashes between groups of protestors (Simpson et al., 2018; Feinberg et al., 2017). Other studies suggest violent tactics worsen voting outcomes compared to peaceful protests, but focus on rioting (Wasow, 2020; Munoz and Anduiza, 2019; Huet-Vaughn, 2013) or violent uprisings (Chenoweth & Stephan, 2011) rather than targeted property destruction.

[17] Erica Chenoweth, a key social mobilization scholar, is quoted as saying, "[G]enerally movements have to choose between fringe violence and diverse participation. It's hard to have both" (2021: 162).

[18] Social Change Lab, 2022: “government repression towards a nonviolent movement is much more likely to lead to a backfire effect, where the public tends to sympathize much more with the nonviolent movement rather than a violent movement”

[19] O’Brien (2021) found that 33 out of 50 direct actions against field tests in the UK were covert. 

[20] See https://www.socialchangelab.org/_files/ugd/503ba4_9ab7ad4e86f443b9bdef66b431c277f3.pdf, pp. 4-5. 

[21] E.g. https://www.socialchangelab.org/_files/ugd/503ba4_9ab7ad4e86f443b9bdef66b431c277f3.pdf: “By its nature, disruptive protest is unpopular”

[22] Experimental evidence suggests that radical tactics in animal rights and climate contexts made the moderate faction appear less extreme, and helped it gain more support (Simpson, 2022). Research from the Social Change Lab – the first research into the radical flank effect using national polling – suggests disruptive climate protests increased support for mainstream climate groups. 

[23] E.g. for the GM Case: https://onlinelibrary.wiley.com/doi/full/10.1111/j.1467-9523.2008.00473.x

For the anti-nuclear protests: https://www.cambridge.org/core/journals/british-journal-of-political-science/article/political-opportunity-structures-and-political-protest-antinuclear-movements-in-four-democracies/CD35E132C21E7AD3BB031BC58BD5710A

[24] https://www.apollosurveys.org/social-change-and-protests/: 67% of experts thought that having a sympathetic government in power was at least quite important. 

[25] https://www.socialchangelab.org/_files/ugd/503ba4_052959e2ee8d4924934b7efe3916981e.pdf. See '6. Public Discourse and Media'. 

[26] In surveys, experts suggested ‘winning over political allies’ was the most important immediate goal for protest groups, but how this process occurs is under-researched. I have not found any literature on this topic. Another researcher I spoke with also knew of none. 

 


Comments

Interesting post: certainly an interesting comparison, and an existence proof that a technology that's somewhat difficult to create but trivial to distribute and reproduce can be regulated to oblivion in large sections of the world for decades.

A point conspicuous by its absence: the overregulation of GM crops was (and remains) a mistake, or at the least is nearly universally agreed to be so by the people with the most technical knowledge (e.g. people with PhDs in plant biology). 

I understand that whether it was wise to grossly curb deployment of GM crops was not the point of the post; the point was merely whether it was politically feasible starting with a relatively small contingent of protesters. I'm still miffed that the anti-science and overall almost certainly negative-EV nature of the GM overregulation wasn't mentioned, especially given quotes like "This coincided with key 'trigger events' like Mad Cow Disease and the arrival of GM crops in March 1996", which would suggest a causal connection between the two. 

Thank you for your comments Kasey! Glad you think it's an interesting comparison. I agree with you that GMOs were over-regulated in Europe. Perhaps I should have said explicitly that the scientific consensus is that GMOs are safe. I do make a brief caveat in the Intro that I'm not comparing the "credibility of AI safety concerns (which appear more legitimate than GMO concerns)", though this deserves more detail.

I suppose an interesting exercise for another research project could be to try to tally up in hindsight how many activist/protest movements seem directionally correct or mistaken in retrospect (e.g. anti-GM seems wrong, anti-nuclear seems wrong, anti-fossil-fuels seems right). I think even if the data came in showing that activists are usually wrong, this wouldn't actually move me very much, as the inside-view arguments for AI risk are, I think, quite strong.

Sounds interesting Oscar, though I wonder what reference class you'd use ... all protests? A unique feature of AI protests is that many AI researchers are themselves protesting. If we are comparing groups on epistemics, the Bulletin of the Atomic Scientists (founded by Manhattan Project scientists) might be a closer comparison than GM protestors (who were led by Greenpeace, farmers, etc., not people working in biotech). I also agree that inside-view arguments about AI risk are important to consider. 

How epistemically and optically dangerous do you think the communication and allyship tactics you propose are? Reasons to worry are that:

  • If we start talking about injustices and near-term non-existential risks that are more 'sexy' and easy to grasp, maybe this starts shaping our own thinking as well, which seems bad.
  • Conversely, if we maintain an x-risk focus while espousing other issues and allying ourselves with other groups, this is (or may be perceived as) a bit deceptive and manipulative.

I think you may well still be right regardless of these risks, but they seem important to consider.

Thanks for these questions Oscar! To be clear, I was suggesting that effective messaging would emphasise the injustice of continued AI development in an emotionally compelling way: e.g. the lack of democratic input into corporate attempts to build AGI. I wasn't talking so much about communicating near-term injustices. Though I take your point that allying with other groups suffering from near-term harms would imply a combined near-term and long-term message. 

On your first question: would thinking about near-term and long-term harms lead to worse thinking? Do you mean this would make us care about AI x-risk less? 

And on your second point, whether it would be perceived as manipulative: I don't think so. If AI protest can effectively communicate a 'We are fighting a shared battle' message, as @Gideon Futerman has written about, this could make AI protests seem less niche/esoteric. Identifying concrete examples of harms to specific people/groups is an important part of 'injustice frames', and could make AI risk more salient. In addition, broad 'coalitions of the willing' (i.e. Baptists and Bootleggers) are very common in politics. What do you think?

I suppose I meant something similar to what Chris has also written. I think being single-minded can be valuable. Hopefully it is possible to engage productively with non-x-risk-focused communities without being either deceptive or manipulative; I think it is doable, it just requires some care, I imagine.

Regarding allies:

• I agree that working with other groups is great when we have a common interest. Take, for example, the FLI letter. This was a highly successful example of a collaboration with some AI ethics people.

• At the same time, I'm less optimistic about any plans that involve developing our strategy in broad-tent groups, which would possibly dilute our focus. This doesn't just apply to the AI ethics community, with whom we have an unfortunately fractious relationship, but would also apply to artists as well. Of course, I think it makes sense to collaborate with them when our interests align.

• I'm less a fan of disruptive tactics, especially since we have allies within these firms. There's a sense in which these are a cheap way to get attention and I suspect that if we're strategic we can find other ways to draw attention to our concerns without risking turning the public against us. For example, persuading a large number of people to wear the same t-shirt at a conference might actually be more effective.

Hi Chris, thank you for this. 

1) Nice! Agreed

2) It really depends on what form the alliance takes. It could be implicit: fundraising for artists' lawsuits, for example, without any major change to public messaging. I don't think this would dilute the focus on existential risk. When Baptists allied with Bootleggers in the prohibition era, this did not dilute their focus away from Christianity! I also think that there are indeed common interests here: restrictions on GAI models (https://forum.effectivealtruism.org/posts/q8jxedwSKBdWA3nH7/we-are-not-alone-many-communities-want-to-stop-big-tech-from). 

That being said, if PauseAI did try to become a broad 'AI protest group', including via its messaging, this would dilute the focus on x-risk. Though, a mixture of near-term and long-term messaging may be more effective in reaching a broader audience. As mentioned in another comment, identifying concrete examples of harms to specific people/groups is an important part of 'injustice frames'. (I am more unsure about this, though.)

3) I am also hesitant about more disruptive protest tactics, in particular because of allies within firms. But I don't think that disruptive protests necessarily have to turn the public against us... no more than blocking ships made GMO protestors unpopular. The efficacy of disruptive tactics is quite issue-dependent... I think it would be useful if someone did a thorough lit review of disruptive protests.

I hadn’t made the GMO protests - AI protests connection.

This reads as a well-researched piece.

The analysis makes sense to me – with the exception of seeing efforts to restrict facial recognition, the Kill Cloud, etc., as orthogonal. I would also focus more on preventing increasing AI harms and Big Tech power consolidation, which most AI-concerned communities agree on.

Appreciate that @Remmelt Ellen! In theory, I think these messages could work together. Though, given the animosity between these communities, I think alliances are more challenging. Also, I'm curious: what sort of policies would be mutually beneficial for people concerned about facial recognition and x-risk? 
