
Many people in the effective altruism world believe that AI safety research is very valuable. But there must be good work written on the other side of that debate! So my question is: what are the best arguments that AI risks are overblown, or that AI safety research should not be prioritized? I would prefer links to existing work, but if you feel like writing an essay in the comments, I'm not going to stop you.

Another question here seems related, but is not asking the same thing: https://forum.effectivealtruism.org/posts/u3ePLsbtpkmFdD7Nb/how-much-ea-analysis-of-ai-safety-as-a-cause-area-exists-1

Answers

Robin Hanson is the best critic, in my opinion. He has many arguments (or one very developed one), but the big pieces are:

  • Innovation in general is not very "lumpy" (discontinuous). So we should assume that AI innovation will also not be. So no one AI lab will pull far ahead of the others at AGI time. So there won't be a 'singleton', a hugely dangerous world-controlling system.
     
  • Long timelines (100+ years) plus fire alarms
     
  • Opportunity cost of spending / shouting now 
    "we are far from human level AGI now, we'll get more warnings as we get closer, and by saving $ you get 2x as much to spend each 15 years you wait."
    "having so many people publicly worrying about AI risk before it is an acute problem will mean it is taken less seriously when it is, because the public will have learned to think of such concerns as erroneous fear mongering."
     
  • The automation of labour isn't accelerating (therefore current AI is not being deployed to notable effect, therefore current AI progress is not yet world-changing in one sense)
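
A rough check of the arithmetic behind the doubling claim (a sketch only, assuming the claim refers to compound real returns on money saved now and spent on safety later): doubling every 15 years corresponds to an annual real return of

$$(1 + r)^{15} = 2 \;\Rightarrow\; r = 2^{1/15} - 1 \approx 4.7\%$$

so the opportunity-cost argument amounts to assuming roughly a 4.7% annual real return on deferred safety spending.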

He might not be what you had in mind: Hanson argues that we should wait to work on AGI risk, rather than that safety work is forever unnecessary or ineffective. The latter claim seems extreme to me and I'd be surprised to find a really good argument for it.

You might consider the lack of consensus about basic questions, mechanisms, and solutions among safety researchers to be a bad sign.

Nostalgebraist (2019) sees AGI alignment as equivalent to solving large parts of philosophy: a noble but quixotic quest.

Melanie Mitchell also argues for long timelines. Her view is closer to the received view in the field (but this isn't necessarily a compliment).

On the topic of a hard take-off specifically: A Contra AI Foom Reading List

Alignment by default: the idea that the methods best suited to ensuring AI is aligned are the same methods best suited to ensuring AI is capable enough to understand what we want and act on it in the first place.

To the extent that alignment by default is likely, we don't need a special effort to be put into AI safety: we can assume that economic incentives will lead us to put as much effort into safety as is needed, and that if we don't put in sufficient effort, we won't get capable or transformative AI anyway.

Stuart Russell talks about this as a real possibility; see also https://www.lesswrong.com/posts/Nwgdq6kHke5LY692J/alignment-by-default

Drexler's CAIS framework attacks several of the premises underlying standard AI risk arguments (although iirc he also argues that CAIS-specific safety work would be valuable). Since his original report is rather long, here are two summaries.

Comments

A related talk by Ben Garfinkel raises some specific questions about AI safety and calls for further investigation: How Sure Are We About This AI Stuff?
