I tend to disagree with most EAs about existential risk from AI. Unfortunately, my disagreements are all over the place. It's not that I disagree with one or two key points: there are many elements of the standard argument that I diverge from, and depending on the audience, I don't know which points of disagreement people think are most important.
I want to write a post highlighting all the important areas where I disagree and offering my own counterarguments as an alternative. The post would benefit from responding to an existing piece, along the same lines as Quintin Pope's article "My Objections to "We’re All Gonna Die with Eliezer Yudkowsky"". Unlike Pope's article, however, mine would be addressed to the EA community as a whole, since many EAs already disagree with Yudkowsky even while accepting the basic arguments for AI x-risk.
My question is: what is the current best single article (or set of articles) that provides a well-reasoned and comprehensive case for believing that there is a substantial (>10%) probability of an AI catastrophe this century?
I was considering replying to Joseph Carlsmith's article, "Is Power-Seeking AI an Existential Risk?", since it seemed reasonably comprehensive and representative of the concerns EAs have about AI x-risk. However, I'm a bit worried the article is not very representative of EAs who assign substantial probabilities to doom, since Carlsmith originally estimated the total risk of catastrophe before 2070 at only 5%. In May 2022 he reported a higher probability, but I'm not sure whether that's because he was exposed to new arguments or because he simply came to think the stated arguments were stronger than he originally judged.
I suspect I have both significant moral disagreements and significant empirical disagreements with EAs, and I want to cover both in such an article, while focusing mainly on the empirical points. For example, my sense is that I disagree with most EAs about:
- How bad human disempowerment would likely be from a utilitarian perspective, and what "human disempowerment" even means in the first place
- Whether there will be a treacherous turn event, during which AIs violently take over the world after previously having been behaviorally aligned with humans
- How likely AIs are to coordinate near-perfectly with each other as a unified front, leaving humans out of their coalition
- Whether we should expect AI values to be "alien" (like paperclip maximizers) in the absence of extraordinary efforts to align them with humans
- Whether the AIs themselves will be significant moral patients, on par with humans
- Whether there will be a qualitative moment when "the AGI" is created, rather than systems incrementally getting more advanced, with no clear finish line
- Whether we get only "one critical try" to align AGI
- Whether "AI lab leaks" are an important source of AI risk
- How likely AIs are to kill every single human if they are unaligned with humans
- Whether there will be a "value lock-in" event soon after we create powerful AI that causes values to cease their evolution over the coming billions of years
- How bad problems related to "specification gaming" will be in the future
- How society is likely to respond to AI risks, and whether it will sleepwalk into a catastrophe
However, I also disagree with points made by many other EAs who have argued against the standard AI risk case. For example, I think that:
- AIs will eventually become vastly more powerful and smarter than humans, so they will eventually be able to "defeat all of us combined"
- A benign "AI takeover" event is very likely even if we align AIs successfully
- AIs will likely be goal-directed in the future; we can't, for instance, just "not give the AIs goals" and expect everything to be OK
- It's highly plausible that AIs will end up with substantially different values from humans (although I don't think this will necessarily cause a catastrophe)
- We don't currently have strong evidence that deceptive alignment will be an easy problem to solve
- It's plausible that AI takeoff will be relatively fast, with the world dramatically transformed over a period of several months or a few years
- Short timelines, meaning a dramatic transformation of the world within the next 10 years, are pretty plausible
I'd like to elaborate on as many of these points as possible, preferably by responding to direct quotes from the representative article arguing for the alternative, more standard EA perspective.
Ah, sorry. I had indeed interpreted you as saying that we would reduce p(doom) to 0.01-0.1% per year, rather than that each year of delay reduces p(doom) by that amount. I think the latter view is more reasonable, but I'd still likely put the go-ahead number higher.
Apologies again for misinterpreting. I didn't know how much weight to put on the word "potentially" in your comment. Note, though, that I said, "Even when an EA insists their concern isn't about the human species per se I typically end up disagreeing on some other fundamental point here that seems like roughly the same thing I'm pointing at." I don't think the problem is literally that EAs are anthropocentric, but I think they often have anthropocentric intuitions that influence these estimates.
Maybe a more accurate summary is that people have a bias towards "evolved" or "biological" beings, which I think might explain why you'd be a little happier to hand over the universe to aliens or dogs, but not AIs.
I guess I mostly think that's a pretty bizarre view, with some obvious reasons for doubt, and I don't know what would be driving it. The process through which aliens would come to have values like ours seems much less robust than the process through which AIs get our values: AIs are trained on our data, and humans will presumably care a lot about aligning them (at least at first).
From my perspective this is a bit like saying you'd prefer aliens to take over the universe rather than handing control over to our genetically engineered human descendants. I'd be very skeptical of that view too for some basic reasons.
Overall, upon learning your view here, I don't think I'd necessarily diagnose you as having the intuitions I alluded to in my original comment, but there's likely something underlying your views that I would strongly disagree with if I understood them better. I find it highly unlikely that AGIs will be even more "alien", from the perspective of our values, than literal aliens (especially aliens who themselves build their own AIs, genetically engineer themselves, and so on).