anson

260 karma · Joined Jul 2021

Participation

    Sequences (1)

        A Tour of AI Timelines

    Comments (5)

    Hi! I noticed that many footnote links don't work on my laptop's Chrome. For example, both of the following links in the semi-informative priors report link to the same thing.

    Also:

    Not sure if this is just a problem on my end, though. Is anyone else encountering the same issue?

    I appreciate the post, although I’m still worried that comparisons between AI risk and The Terminator are more harmful than helpful.

    One major reservation I have is with the whole framing of the argument, which is about “AI risk”. I guess you’re implicitly talking about AI catastrophic risk, which IMO is much more specific than AI risk in general. I would be very uncomfortable saying that near-term AI risks (e.g. due to algorithmic bias) are “like The Terminator”.

    Even if we solely consider catastrophic risks due to AI, I think catastrophes don’t necessarily need to look anything like The Terminator. What about risks from AI-enabled mass surveillance? Or the difficulty of navigating the transition to a world where transformative AI plays a large role in the global economy?

    If we restrict ourselves to AI existential risks (arguably some of the previous examples fall into this category), I’m still hesitant to compare these risks to The Terminator. This depends on what exactly we mean by “like The Terminator”, because the two share some similarities (as you point out) and many differences.

    In general, I worry that too much is being shoved into the term “AI risk”, which could mean a whole host of different things, and I feel that drawing an analogy to The Terminator for all of these risks is a harmful conflation.

    “1. We may eventually create artificial intelligence more powerful than human beings; and
    2. That artificial intelligence may not necessarily share our goals.

    Those two statements are obviously at least plausible, which is why there are so many popular stories about rogue AI.”

    I don’t think it’s immediately obvious to a person who hasn’t heard AI safety arguments why these should be plausible. In my experience, a common reaction to (1) is “Seriously? We don’t even have reliable self-driving cars!”, and to (2) is “Why would anybody build such a thing?”. I doubt that the Terminator movies answer these questions appropriately.

    “People think the plot of Terminator is silly in large part because it involves an AI exterminating humanity.” 

    I feel that this is too superficial: if you then ask people why they think AI-induced human extinction is unlikely, I expect the answer would be along the lines of “we would never do something so silly”. So I claim that a bigger reason people find the plot silly is that it seems implausible, not that it involves “an AI exterminating humanity” per se. Establishing plausibility is a very large part of AI safety arguments, and it is left completely unaddressed by the Terminator movies.

    Maybe comparing AI risk to the Terminator movies can convince people who are already sympathetic to ideas that are “out there”, but I think it would have a negative effect on most other people. More generally, I suspect that those who make this comparison underestimate the importance of broader public acceptance and of credibility within government.

    It might make sense to say “certain AI existential risk scenarios and The Terminator are superficially similar, in the sense that both involve superintelligent AI that may not be beneficial by default”. At least currently, I’m much more hesitant to say “AI risk is like The Terminator”.

    (Edited because the above no longer matches my views or experiences)

    I think these are all good points, thanks for sharing! 

    To push back a bit on the point about laypeople's innumeracy: doesn't expected value also need a somewhat lengthy explanation? In addition, I think a common mistake is to conflate EV with a simple average, so should we have similar concerns about EV as well?

    Maybe a counterargument to this would be that "nines of safety" has obvious alternatives (e.g. ratios, as you point out), but perhaps it's harder to find such alternatives for EV?
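
    To make this concrete, here is a minimal Python sketch (my own illustration, not from the thread) of the two framings: converting a failure probability into "nines of safety" or into a plain ratio, and computing an expected value. All probabilities and payoffs below are hypothetical.

        import math

        def nines_of_safety(p_failure):
            # Convert a failure probability into "nines of safety",
            # e.g. p_failure = 0.001 -> 3.0 nines (i.e. 99.9% safe).
            return -math.log10(p_failure)

        def expected_value(outcomes):
            # Probability-weighted sum over (probability, payoff) pairs.
            return sum(p * v for p, v in outcomes)

        p = 0.001
        print(f"{nines_of_safety(p):.1f} nines of safety")  # 3.0 nines of safety
        print(f"or: 1 failure in {1 / p:,.0f}")             # or: 1 failure in 1,000

        # EV is a probability-weighted average, which is easy to conflate
        # with a simple unweighted average of the payoffs:
        gamble = [(0.999, 10.0), (0.001, -5000.0)]
        print(f"EV = {expected_value(gamble):.2f}")  # EV = 4.99
        print(f"unweighted average = {sum(v for _, v in gamble) / len(gamble):.2f}")  # -2495.00

    The last two lines show how different the probability-weighted and unweighted answers can be, which is the conflation I was worried about above.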