I wrote a post to my Substack attempting to compile all of the best arguments against AI as an existential threat.
Some of the arguments I discuss include: international game-theoretic dynamics, reference class problems, Knightian uncertainty, disagreement between superforecasters and domain experts, the problem with long chains of argument, and more!
Please tell me why I'm wrong, and if you like the article, subscribe and share it with friends!
Why would Knightian uncertainty be an argument against AI as an existential risk? If anything, our deep uncertainty about the possible outcomes of AI should lead us to be even more careful.