Robin Hanson is the best critic, imo. He has many arguments (or one very developed one), but the big pieces are:
- Innovation in general is not very "lumpy" (discontinuous), so we should assume AI innovation won't be either; so no single AI lab will pull far ahead of the others around the time AGI arrives, and so there won't be a 'singleton', a hugely dangerous world-controlling system.
- Long timelines (100+ years), plus fire alarms as we get closer:
  - "we are far from human level AGI now, we'll get more warnings as we get closer, and by saving $ you get 2x as much to spend each 15 years you wait." (The doubling figure is unpacked just after this list.)
- The opportunity cost of spending / shouting now:
  - "having so many people publicly worrying about AI risk before it is an acute problem will mean it is taken less seriously when it is, because the public will have learned to think of such concerns as erroneous fear mongering."
- The automation of labour isn't accelerating (so current AI is not yet being deployed to notable economic effect, and so current AI progress is not yet world-changing, at least in that sense).
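To unpack the "2x as much to spend each 15 years" figure (my arithmetic, not Hanson's exact numbers): a 15-year doubling time corresponds to a real return of roughly

$$ r = 2^{1/15} - 1 \approx 4.7\% \text{ per year}, \qquad (1+r)^{30} \approx 4\times, \qquad (1+r)^{60} \approx 16\times. $$

So, on this assumption, someone who expects AGI in 60 years and invests now rather than spending would have about 16 times the resources to deploy when the problem becomes acute.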
He might not be what you had in mind: Hanson argues that we should wait to work on AGI risk, rather than that safety work is forever unnecessary or ineffective. The latter claim seems extreme to me and I'd be surprised to find a really good argument for it.
You might also consider the lack of consensus among safety researchers about basic questions, mechanisms, and solutions to be a bad sign.
Nostalgebraist (2019) sees AGI alignment as equivalent to solving large parts of philosophy: a noble but quixotic quest.
Melanie Mitchell also argues for long timelines. Her view is closer to the received view in the field (but this isn't necessarily a compliment).
A related talk by Ben Garfinkel, "How Sure Are We About This AI Stuff?", raises some specific questions about AI safety and calls for further investigation.