TL;DR: Someone should probably [write a grant to] produce a spreadsheet/dataset of past instances where people claimed a new technology would lead to societal catastrophe, but with informative variables such as “multiple people working on the tech believed it was dangerous.” This could help address the common objection from AI risk skeptics that panic about new technology is omnipresent and basically always wrong, "so this time with AI isn't different."
I have asked multiple people in the AI safety space whether they knew of any kind of "dataset of past predictions of doom (from new technology)," but I have yet to encounter such a project. There are some articles and arguments floating around, such as "Tech Panics, Generative AI, and the Need for Regulatory Caution", in which skeptics argue we shouldn't worry about AI x-risk because there are many past cases where people made overblown claims that some new technology (e.g., bicycles, electricity) would be disastrous for society.
While I think it's right to consider the "outside view" on these kinds of questions, most of these claims 1) ignore cases where there were legitimate reasons to fear the technology (e.g., nuclear weapons, perhaps synthetic biology), and 2) imply that current worries about AI are about as baseless as claims like "electricity will destroy society," whereas I would argue that the claim "AI x-risk is >1%" stands up quite well to most current scrutiny.
(These claims also ignore the anthropic argument/survivor bias—that if they ever were right about doom we wouldn't be around to observe it—but this is less important.)
I would especially like to see a dataset that tracks variables like "were the people warning of the risks also the people building the technology?" More generally, some measure of analytical rigor also seems really important, e.g., "could the claims have withstood an ounce of contemporary scrutiny (i.e., without the benefit of hindsight)?"
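To make the dataset idea concrete, here is a minimal sketch of what the spreadsheet's schema might look like. All column names and the sample row are hypothetical placeholders of my own invention, not proposed final structure or actual researched entries:

```python
import csv
import io

# Hypothetical columns for the proposed dataset (illustrative only).
FIELDS = [
    "technology",          # e.g., "electricity", "bicycles"
    "claim_summary",       # the catastrophe that was predicted
    "year_of_claim",       # roughly when the warning was made
    "insiders_warned",     # were people building the tech among the warners? (yes/no/mixed)
    "contemporary_rigor",  # could the claim survive scrutiny at the time? (low/med/high)
    "outcome",             # what actually happened
]

# One placeholder row showing the intended shape of an entry.
rows = [
    {
        "technology": "example technology",
        "claim_summary": "placeholder claim of societal catastrophe",
        "year_of_claim": "19XX",
        "insiders_warned": "no",
        "contemporary_rigor": "low",
        "outcome": "placeholder outcome",
    }
]

# Write the sketch out as CSV, the natural interchange format for a spreadsheet.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

The point of columns like `insiders_warned` and `contemporary_rigor` is that they let a skeptic's "panics are always wrong" claim be tested against subgroups, rather than against the undifferentiated pile of historical warnings.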
It really seems worth spending up to $20K to hire researchers to produce such a spreadsheet within the next two-ish months… This could be a critical time period where people are more receptive to new arguments/responses.
[Note on this post: this is a minimally edited shortform I wrote about five months ago, which became my highest-upvoted shortform (in fact, one of my highest-upvoted posts). At the time I didn't have the motivation or time to turn it into a full post, and I still lack the time to write a formal/detailed version (let alone work on the project myself). But after a string of discussions, both in person and online, in which someone objected to AI risk along the lines I described, I decided I should just convert the shortform into a normal post rather than continue waiting in silence in the hope that someone else would say or do something...]