AI is getting more powerful, and at some point could cause significant damage.
There's a certain amount of damage an AI could do that would scare the whole world -- with effects on government and public psychology similar to the coronavirus, in that both would become willing to make sacrifices. The AI that could cause this damage (I naively expect) could be well short of the sophistication / power needed to really "rule the world" or be unstoppable by humans.
So it seems likely to me (but not certain) that we will get a rude (coronavirus-like) awakening as a species before things reach the point where we are totally helpless to suppress AI. This awakening would give us the political will / sense of urgency to actually do something to limit AI.
(Limiting / suppressing AI: making it hard to build supercomputers or concentrate compute, whether technologically or legally. Also, maybe, making the world less computer-legible, so that any AI that do get made have less data to work with / less connection to the world. Making it so that the inevitable non-aligned AIs are stoppable. Or anything else along those lines.)
It seems like if suppressing AI were easy / safe, it would have been the first choice of AI safety people, at least until alignment is thoroughly solved. But most discussion is about alignment, not suppression -- presumably because suppression is not actually seen as a viable option. However, given the possibility that governments may all be scrambling to do something in the wake of a "coronavirus AI", what kinds of AI suppression techniques would they be likely to try? What problems could come from them? (One obvious fear being that a government powerful enough to suppress AI could itself cause a persistent dystopia.) Is there a good, or at least better, way to deal with this situation, which EAs might work toward?
(Minor, tangential point)
I don't think it's inevitable that there'll ever be a "significantly" non-aligned AI that's "significantly" powerful, let alone "unstoppable by default". (I'm aware that that's not a well-defined sentence.)
In a trivial sense, there are already non-aligned AIs, as shown e.g. by the OpenAI boat game example. But those AIs are already "stoppable".
If you mean to imply that it's inevitable that there'll be an AI that (a) is non-aligned in a way that's quite bad (rather than perhaps slightly imperfect alignment that never really matters much), and (b) would be unstoppable if not for some effort by longtermist-type people to change that situation, then I'd disagree. I'm not sure how likely that is, but it doesn't seem inevitable.
(It's also possible you didn't mean "inevitable" to be interpreted literally, and/or that you didn't think much about the precise phrasing you used in that particular sentence.)
Yes, I agree that there's a difference.
I wrote up a longer reply to your first comment (the one marked "Answer"), but then I looked up your AI safety doc and realized that I should probably read through the readings in it first.