Many people associated with the effective altruism world believe that AI safety research is very valuable. But there must be good work written on the other side of that debate! So, my question is: what are the best arguments that AI risks are overblown, or that AI safety research should not be prioritized? I would prefer links to existing work, but if you feel like writing an essay in the comments, I'm not going to stop you.
Another question here seems related, but is not asking the same thing: https://forum.effectivealtruism.org/posts/u3ePLsbtpkmFdD7Nb/how-much-ea-analysis-of-ai-safety-as-a-cause-area-exists-1
Drexler's CAIS (Comprehensive AI Services) framework challenges several of the premises underlying standard AI risk arguments (although, if I recall correctly, he also argues that CAIS-specific safety work would be valuable). Since his original report is rather long, here are two summaries.