Does the following seem like a reasonable brief summary of the key disagreements regarding AI risk?

Among those experts (AI researchers, economists, careful knowledgeable thinkers in general) who appear to be familiar with the arguments:

  • Seems to be broad (but not universal?) agreement that:
    • Superintelligent AI (in some form, perhaps distributed rather than single-agent) is possible and will probably be created one day
    • By default there is at least a decent chance that the AI will not be aligned
    • If it is not aligned or controlled in some way then there is at least a decent chance that it will be incredibly dangerous by default
  • Some core disagreements (starred questions are at least partially social science / economics questions):
    • * Just how likely are all of the above?
    • * Will we have enough time to see it coming, and will it be obvious enough, that people will react appropriately in time to prevent bad outcomes?
      • It might still be useful to have some people keeping tabs on it (Robin Hanson thinks about 100), but not that many
    • How hard is it to solve?
      • If it's easy, then less lead time is needed, or inventors are more likely to incorporate solutions by default
      • If it's really hard, then we may need to start a long time in advance
    • * How far away is it?
    • Can we work on it profitably now, given that we don't know how AGI will work?
      • If current ML scales to AGI, then presumably yes; otherwise there is disagreement
    • * Will something less than superhuman AI pose similar extreme risks? If yes: How much less, how far in advance will we see it coming, when will it come, how easy is it to solve?
    • * Will we need coordination mechanisms in place to prevent dangerous races to the bottom? If yes, how far in advance will we need them?
    • If it's a low probability of something really catastrophic, how much should we be spending on it now? (Where is the cutoff where we stop worrying about finite versions of Pascal's Wager? See the expected-value sketch after this list.)
    • * What about misuse risks, structural risks, or future moral risks?
  • Various combinations of these and related arguments result in anything from "we don't need to worry about this at all yet" to "we should be pouring massive amounts of research into this"
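
For the Pascal's-Wager bullet above, here is a minimal expected-value sketch of the trade-off, assuming a simple "probability × cost × fraction of risk removed" model. Every number in it is an illustrative placeholder I'm introducing, not a figure from the discussion.

```python
# Minimal expected-value sketch for the "low probability, huge stakes" question.
# All numbers below are illustrative placeholders, not claims about actual AI risk.

p_catastrophe = 0.01          # assumed probability of the catastrophic outcome
cost_of_catastrophe = 1e15    # assumed cost (in dollars) if it happens
risk_reduction = 0.10         # assumed fraction of that risk removed by the spending

# Expected loss averted by the proposed spending under this toy model
expected_loss_averted = p_catastrophe * cost_of_catastrophe * risk_reduction

spending = 1e9                # proposed research spending today

print(f"Expected loss averted: ${expected_loss_averted:,.0f}")
print(f"Worth it under naive EV? {expected_loss_averted > spending}")
```

The disagreement is less about this arithmetic than about which inputs are remotely plausible, and whether naive expected value is even the right decision rule for very small probabilities.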

2 Answers

To add on to what you already have, there's also a flavor of "urgency / pessimism despite slow takeoff" that comes from pessimistic answers to the following two questions:

  • How early do the development paths between "safe AGI" and "default AGI" diverge?

On one extreme, they might not diverge at all: we build "default AGI", and fix problems as we find them, and we wind up with "safe AGI". On the opposite extreme, they may diverge very early (or already!), with entirely different R&D paths requiring dozens of non-overlapping insights and programming tools and practices.

I personally put a lot of weight on "already", on the theory that there are right now dozens of quite different lines of ongoing ML / AI research that seem to lead towards quite different AGI destinations, and it seems implausible to me that they will all wind up at the same destination (or fail), or that the destinations will all be more-or-less equally good / safe / beneficial.

  • If we know how to build an AGI in a way that is knowably and unfixably dangerous, can we coordinate on not doing so?

One extreme would be "yes we can coordinate, even if there's already code for such an AGI published on GitHub that runs on commodity hardware". The other extreme would be "No, we can't coordinate; the best we can hope for is delaying the inevitable, hopefully long enough to develop a safe AGI along a different path."

Again, I personally put a lot of weight on the pessimistic view (see my discussion here); but others seem to be more optimistic that this kind of coordination problem might be solvable, e.g. Rohin Shah here.

"* Will something less than superhuman AI pose similar extreme risks? If yes: How much less, how far in advance will we see it coming, when will it come, how easy is it to solve?"

I don't think there is any disagreement that there are such things. I think the key disagreements are whether there will be sufficient warning, and how easy it will be to solve / prevent.

Not to speak on their behalf, but my understanding of MIRI's view on this issue is that there are likely to be such issues, but they aren't as fundamentally hard as ASI alignment, and while there should be people working on the pre-ASI risks, we need all the time we can get to work on the really hard parts of the eventual risk from ASI.

Maybe we should add: Does working on pre-ASI risks improve our prospects of solving ASI alignment (I think that's the core of the reconciliation between near-term and long-term concerns about AI... but up to what point?), or does it worsen them?