I feel like I'm taking crazy pills.
It appears that many EAs believe we shouldn't pause AI capabilities development until it can be proven to have less than ~0.1% chance of X-risk.
Put less confusingly: it appears many EAs believe we should allow capabilities development to continue despite the current X-risks.
This seems obviously terrible to me.
What are the best reasons EA shouldn't be pushing for an indefinite pause on AI capabilities development?
This sequence is still in progress, but it's the best collection of resources I know of regarding slowing AI (including an indefinite pause).
Worth cross-posting to the EA Forum. @Zach Stein-Perlman