This just appeared in this week’s MIT Technology Review: Oren Etzioni, “How to know if AI is about to destroy civilization.” Etzioni is a noted skeptic of AI risk. Here are some things I jotted down:

Etzioni’s key points / arguments:

  • Warning signs that AGI is coming soon (like canaries in a coal mine, where if they start dying we should get worried)
    • Automatic formulation of learning problems
    • Fully self-driving cars
    • AI doctors
    • Limited versions of the Turing test (like Winograd Schemas)
      • If we get to the Turing test itself then it'll be too late
    • [Note: I think if we get to practically deployed fully self-driving cars and AI doctors, then we will have already had to solve more limited versions of AI safety. It’s a separate debate whether those solutions would scale up to AGI safety though. We might also get the capabilities without actually being able to deploy them due to safety concerns.]
  • We are decades away from the versatile abilities of a 5-year-old
  • Preparing anyway, even though the probability is very low, because the consequences would be extreme is Pascal's Wager
    • [Note: This is a decision theory question, and I don’t think that's his area of expertise. I’ve researched Pascal's Wager extensively, and it’s not at all clear to me where to draw the line between low-probability, high-consequence scenarios that we should factor into our decisions and very-low-probability, very-high-consequence scenarios that we should not. I’m not sure there is any principled way of drawing that line, which might be a problem if it turns out that AI risk is a borderline case. (See the rough sketch after this list.)]
  • If and when a canary "collapses" we will have ample time to design off switches and identify red lines we don't want AI to cross
  • "AI eschatology without empirical canaries is a distraction from addressing existing issues like how to regulate AI’s impact on employment or ensure that its use in criminal sentencing or credit scoring doesn’t discriminate against certain groups."
  • Agrees with Andrew Ng that it's too far off to worry about now
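
A rough sketch of the "where to draw the line" problem from the Pascal's Wager note above. This is my own illustration, not anything from Etzioni's article, and the cutoff ε is a label I'm introducing purely for exposition. Under a plain expected-value rule, only the product of probability and consequence matters:

  % Expected loss from a scenario with probability p and loss of size C:
  \[ \mathbb{E}[\text{loss}] = p \cdot C \]
  % A rule that excludes "very low probability" scenarios instead needs some cutoff:
  \[ \text{factor the scenario into the decision} \iff p \ge \varepsilon \]

Nothing in the expected-value calculation itself tells you where to set ε, which is exactly why I'm not sure there is a principled place to draw the line, and why it matters whether AI risk falls just above or just below wherever it gets drawn.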

But he seems to agree with the following:

  • If we don’t end up doing anything about it then yes, superintelligence would be incredibly dangerous
  • If we get to human-level AI, then superintelligence will follow very soon afterwards, so it'll be too late at that point
  • If it were a lot sooner (as other experts expect) then it sounds like he would agree with the alarmists
  • Even if it were more than a tiny probability, then again it sounds like he'd agree, because he wouldn't consider it Pascal's Wager
  • If there's not ample time between "canaries collapsing" and AGI (as I think other experts expect) then we should be worried a lot sooner
  • If it wouldn't distract from other issues like regulating AI's impact on employment, it sounds like he might agree that it's reasonable to put some effort into it (although this point is a little less clear)

See also Eliezer Yudkowsky, “There's No Fire Alarm for Artificial General Intelligence.”

Comments

It feels like Etzioni is misunderstanding Bostrom in this article, but I'm not sure. His point about Pascal's Wager confuses me:

"Some theorists, like Bostrom, argue that we must nonetheless plan for very low-probability but high-consequence events as though they were inevitable"

Etzioni seems to be saying that Bostrom argues that we must prepare for short AI timelines even though developing HLMI on a short timeline is (in Etzioni's view) a very low-probability event?

I don't know whether Bostrom actually holds that view, but isn't his main point that even if AI systems powerful enough to cause an existential catastrophe are not coming for at least a few decades (or even a century or longer), we should still think today about what we can do to prepare for their eventual development, if we believe there are good reasons to think they may cause an existential catastrophe once they are developed and deployed?

Etzioni doesn't seem to address this, except to imply that he disagrees with it: he says it's unreasonable to worry about AI risk now, and that we'll (definitely?) have time to adequately address any existential risk that future AI systems may pose if we wait to start addressing those risks until after the canaries start collapsing.

Etzioni's implicit argument against AI posing a nontrivial existential risk seems to be the following:

(a) The probability of human-level AI being developed on a short timeline (less than a couple decades) is trivial.

(b) Before human-level AI is developed, there will be 'canaries collapsing' warning us that human-level AI is potentially coming soon or at least is no longer a "very low probability" on the timescale of a couple decades.

(c) "If and when a canary “collapses,” we will have ample time before the emergence of human-level AI to design robust “off-switches” and to identify red lines we don’t want AI to cross"

(d) Therefore, AI does not pose a nontrivial existential risk.

It seems to me that if there is a nontrivial probability that he is wrong about (c), then it is in fact meaningful to say that AI poses a nontrivial existential risk, and one we should start preparing for before the canaries he mentions start collapsing.
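
To make that concrete (my own framing, and the numbers below are purely illustrative placeholders, not anyone's actual estimates): the overall risk can be bounded from below by conditioning on (c) failing,

  \[ P(\text{existential catastrophe from AI}) \;\ge\; P(\neg c) \cdot P(\text{existential catastrophe} \mid \neg c). \]

So if one assigned, say, a 10% chance to (c) being wrong and a 10% chance to catastrophe conditional on that, the product is already 1%, which is hard to call a trivial level of existential risk. The argument (a)–(d) only delivers a trivial risk estimate if both of those factors are very small.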

Etzioni also appears to agree that once canaries start collapsing it is reasonable to worry about AI threatening the existence of all of humanity.

"As Andrew Ng, one of the world’s most prominent AI experts, has said, “Worrying about AI turning evil is a little bit like worrying about overpopulation on Mars.” Until the canaries start dying, he is entirely correct."
