Recently a tweet from Nigella Lawson popped up on my Twitter feed with the caption "Sorry to do this to you on a Monday morning, but the end of the world is nigh", linking to a Financial Times article on the inherent risks of uncontrollable Artificial Intelligence. Nigella, a British food writer and television chef, isn't known for her opinions on technology matters, which can only mean that the relatively fringe "AI Cautionist" view is starting to gain traction with the wider public, featuring on the BBC, Vox, and The New York Times. Eliezer Yudkowsky, an AI researcher and arguably the leading voice pushing back against AI advancement, recently published a controversial article in Time Magazine stating: "Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined)… If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike."

Now, for those in the climate movement, this sequence of events is nothing new. It usually starts with researchers within a field highlighting the dangers of a technology deployed without safeguards. This slowly catches on with a determined group of followers, who create movements and organisations to raise the alarm, all in the hope of ultimately getting the people with power (world governments and intergovernmental organisations such as the UN) to address it. The problem is that this sequence is not linear: when a movement goes up directly against powerful interest groups who profit from the bounty the technology brings, the call to accountability gets stuck in a state of purgatory, with researchers and campaigners constantly sounding the alarm only to receive a shrug of the shoulders. Campaigners then escalate to more radical action such as demonstrations and protests. When things still don't gain enough traction, we see what is known in the political activist world as "direct action": actively targeting the centres, groups, or property causing the harm or destruction, usually taking the form of strikes, sit-ins, hacktivism, or blockades (see Germany's Ende Gelände movement, which targets the development of coal mines by occupying the sites), not too dissimilar from what Yudkowsky suggested.

Now, isn't this getting a bit too far ahead of itself? How can we compare the hypothetical scenario of an as-yet-uncreated intelligent system with something like the climate crisis, which is, at this very moment, causing record-breaking temperature events along with all the weather-related disasters they bring? According to a 2021 survey of 44 researchers working on reducing existential risks from Artificial Intelligence, the median estimate of risk was 32.5%, with individual answers ranging from 2% to 98%. This is a large variation, which shows just how unpredictable the field is, but it broadly aligns with other estimates of 10% and upwards. In comparison, if there was a 10% chance of a meteor careering toward Earth and destroying all life, you can be pretty sure that world governments will crack heads together to generate a robust plan addressing this issue. Unfortunately, it would appear that only roughly 400 people around the world are working directly on reducing the chances of AI-related existential catastrophe, which sounds to me like something we should be concerned about.

So, what does that mean for AI Cautionists? For a start, signing petitions and open letters, such as the recent call for a pause in development with Elon Musk among the signatories, might not have enough impact in generating broader awareness and action. If an existential threat does appear more and more likely, then further tactics such as civil disobedience and direct action will need to be considered. What does that mean in practice, though? We can't be sure, because, as we've painstakingly learned in the climate movement, a cookie-cutter, copy-and-paste approach from one movement to another just doesn't work: cultural, organisational, and historical contexts all need to be considered before launching an effective movement. What we do know is that some of the biggest responses from governments to climate change have come within the last few years, due in large part to the monumental, near-tireless civil disobedience of organisations such as Extinction Rebellion in the UK, Fridays for Future in Europe, and the Sunrise Movement in the US. Shifting the Overton window (the range of policies and ideas that the mainstream population finds acceptable) by disrupting business as usual has been shown to work. As a recent Twitter thread on the Just Stop Oil action at the Snooker World Championship put it: "if you have people willing to do very outlandish things in public space… well maybe there is a crisis".

The people within the rationalist and effective altruist communities are by nature utilitarian and take notice when certain tactics work and when they don't. The problem is the lack of urgent action so far on what effective altruist organisations deem "one of the world's most pressing problems", beyond highly technical LessWrong posts debating its finer points. The tension lies in the difficulty of measuring whether something as complex as a social movement does more harm or good to a specific cause. There is also a general lack of research and interest from the effective altruist community into social movement tactics and their effects on policy change and wider public interest. Notably, recent work by the Social Change Lab finds evidence that nonviolent direct action does have important effects on public opinion, behaviour, and policy. Even setting the wider research aside, we can see real-world successful demonstrations within the tech industry itself, notably when employees from Amazon, Google, Microsoft, Facebook, and Twitter organised a climate walkout across 25 cities, resulting in Amazon committing to zero carbon by 2040 and Google announcing a $2 billion investment in renewable energy infrastructure.

Existentially threatening AI might never come to be, but given the stakes involved, as well as the rapid increase in capabilities, this might be the best moment for AI Cautionists to step out from behind their computers and onto the streets, demanding greater accountability and research into safer AI. Until then, the urgency just isn't showing.

Comments

I think the EA/LW community still has a lot of updating to do in this new post-GPT-4+plugins+AutoGPT era. There simply isn't time for alignment to be solved with business as usual (even if a Manhattan Project for Alignment was started tomorrow). The world is charging full steam ahead toward global catastrophe. We need a global moratorium on AGI asap. DM me if you want to get involved with this.

we can even see real-world successful demonstrations within the tech industry itself, notably when employees from Amazon, Google, Microsoft, Facebook and Twitter organised a climate walk-out across 25 cities resulting in Amazon committing to zero carbon by 2040 and Google announcing a $2 billion investment in renewable energy infrastructure.

This is an encouraging precedent! I hope there are people within these companies organising similar over uncontrollable AI now.

if there was a 10% chance of a meteor careering toward Earth and destroying all life, you can be pretty sure that world governments will crack heads together to generate a robust plan addressing this issue.

I'm really not sure if this is necessarily the case. This is like something straight out of the film Don't Look Up. (And we already had ~half of the experts saying there is a 10% risk of doom from AI, before GPT-4+plugins+AutoGPT!).
