
I've been worried about x-risk for a very long time. But for some reason AI x-risk has never felt intuitive to me.

On an intellectual level AI risk makes sense to me, but I would like to be able to visualize what a few plausible AI takeover scenarios could concretely look like. I would really appreciate it if anyone could share books and movies with realistic AI elements and great storytelling that helped you connect emotionally to the scariness of AI x-risk.

Alternatively, I would appreciate recommendations for any content, posts, ideas, revelations, experiences, "a-ha" or "oh-shit" moments, etc. - anything that helped you realize how seriously scary AI is and motivated you to work on it.

Thank you for the advice!


7 Answers

My favorite is probably the movie Colossus: The Forbin Project. I would also weakly recommend the first section of Life 3.0.

Ex Machina does a great job of dramatizing the AI-box experiment.

I think the series Next did a pretty good job of making me scared. It's not an amazing production in itself, but worth watching.

Besides the typical (over-dramatised) killer-robot scenarios, I would add the perspective of infrastructure breakdowns and societal chaos. Imagine a book like Blackout, but where the disruptive national blackout was caused by powerful intelligent (control) systems.
Or a movie like Don't Look Up, where AI is used to spread, or actively spreads, misinformation that severely impacts public opinion and effective action.
In movies and books this might commonly be portrayed (in part) as human failure, but it could just as well be the result of correlated automation failure, goal mis-specification, or a power-seeking AI system.

The consequences that individual humans and society at large suffer when critical infrastructure breaks down can be depicted quite realistically and viscerally.

Some examples not mentioned already (and of course YMMV for each):

That Alien Message by Yudkowsky is very short, and might get at the intuition.

It Looks Like You're Trying To Take Over The World by Gwern is full of annotated links, if you want to get very technical and concrete about things.

For a full-length book, I think the Crystal Trilogy by Max Harms does a very good job, and I would recommend it if you like similar sci-fi.

Finally, I would actually recommend the film Upgrade. While nowhere near as good a film as Ex Machina, it's a good illustration that AI risk is about malicious software agents, not "evil robots" (i.e. Terminator), which I still think is the dominant public image when people are prompted about AI risk. Also, really cool fight choreography.

What this post and its answers show me is that we probably need more material if we want to "scare people" about AI and make more people aware of it as an issue.

We've got lots of great (and dense) YouTube and blog content on the technical aspects. We could do with more "Holy Shit, Imagine an AI That Does This!"-type content.

For me, the easiest-to-imagine model of what an AI takeover could look like is depicted in the Black Mirror episode "Shut Up and Dance" (the episodes are fully independent stories). It's probably just meant to show scary things humans can do with current technology, but such schemes could be trivial for a superintelligence with future technology.
