TL;DR: Someone should probably [write a grant to] produce a spreadsheet/dataset of past instances where people claimed a new technology would lead to societal catastrophe, but with informative variables such as “multiple people working on the tech believed it was dangerous.” This could help address the common objection from AI risk skeptics that panic about new technology is omnipresent and basically always wrong, "so this time with AI isn't different."

———

I have asked multiple people in the AI safety space if they knew of any kind of "dataset for past predictions of doom (from new technology)", but I still have not encountered such a project. There have been some articles and arguments floating around such as "Tech Panics, Generative AI, and the Need for Regulatory Caution", in which skeptics say we shouldn't worry about AI x-risk because there are many past cases where people in society made overblown claims that some new technology (e.g., bicycles, electricity) would be disastrous for society.

While I think it's right to consider the "outside view" on these kinds of things, I think that most of these claims 1) ignore examples where there were legitimate reasons to fear the technology (e.g., nuclear weapons, maybe synthetic biology?), and 2) imply the current worries about AI are about as baseless as claims like "electricity will destroy society," whereas I would argue that the claim "AI x-risk is >1%" stands up quite well against most current scrutiny.

(These claims also ignore the anthropic argument/survivor bias—that if they ever were right about doom we wouldn't be around to observe it—but this is less important.)

I especially would like to see a dataset that tracks things like "were the people warning of the risks also the people who were building the technology?" More generally, some measurement of "analytical rigor" also seems really important, e.g., "could the claims have stood up to an ounce of contemporary scrutiny (i.e., without the benefit of hindsight)?"

It really seems worth spending up to $20K to hire researchers to produce such a spreadsheet within the next two-ish months… This could be a critical time period where people are more receptive to new arguments/responses.

———

[Note on this post: this is just a minimally edited shortform I wrote about 5 months ago, and it became my highest upvoted shortform (in fact, one of my highest upvoted posts). At the time I didn't have the motivation or time to make it into a full post. I currently lack the time to make it a formal/detailed post (let alone work on the project myself), but after a string of discussions both in person and online where someone objected to AI risks along the lines I described, I decided I should probably just convert the shortform into a normal post rather than continually wait in silence in the hopes that someone else would say or do something...]

———

I think the most similar past false panic is the concern over "grey goo" and molecular nanotech in the early 2000s.

I mean, take a look at the concerns of the "Center for Responsible Nanotechnology", which talks about "nanotech arms races" and preventing "rogue use of nanotech", because "grey goo and military nanobots will not respect borders". They proposed severe international restrictions to keep nanotech development to a single international entity, and predicted nanotech would arrive within ten years. (This was written circa 2005.)

I hope the parallels to current-day AI fears are obvious. Some of the people who bought into the hype (like Drexler and Yudkowsky) are now in the AI risk movement, using the exact same language.

In reality, actual efforts to create molecular nanotech stalled out because the physics and engineering barriers turned out to be utterly, ridiculously difficult, and beyond the reach of available technology. It is now generally accepted that (absent speedup from an AGI) molecular nanotech is decades or even centuries away, if it's even possible at all. And pretty much nobody believes that "accidental grey goo" is remotely feasible, given the engineering challenges involved.

This doesn't mean that AGI fears will turn out the same way, of course, just that a similar panic has occurred before and turned out to be fine. 

Good comment, but Drexler actually strikes me as both more moderate and more interesting on AI than just "same as Yudkowsky". He thinks really intelligent AIs probably won't be agents with goals at all (at least the first ones we build), and that this means takeover worries of the Bostrom/Yudkowsky kind are overrated. It's true that he doesn't think the risks are zero, but if you look at the section titles of his FHI report, a lot of it is actually devoted to debunking various claims Bostrom/Yudkowsky make in support of the view that takeover risk is high: https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf

I don't think this affects the point you're making; it just seemed a bit unfair to Drexler if I didn't mention this.