I feel like I'm taking crazy pills.
It appears that many EAs believe we shouldn't pause AI capabilities development unless it can be proven to carry more than roughly a 0.1% chance of X-risk. Put another way, many EAs seem to believe we should allow capabilities development to continue despite the current X-risks.
This seems obviously terrible to me.
What are the best reasons EA shouldn't be pushing for an indefinite pause on AI capabilities development?
The public is very concerned about powerful AI and wants something done about it.
If anyone is outside the Overton window, it's EAs.