I feel like I'm taking crazy pills.
It appears that many EAs believe we shouldn't pause AI capabilities development, even though it can't be proven to carry less than roughly a 0.1% chance of X-risk.
Put another way: it appears many EAs believe we should allow capabilities development to continue despite the current X-risks.
This seems obviously terrible to me.
What are the best reasons EA shouldn't be pushing for an indefinite pause on AI capabilities development?
I agree that the public has been pretty receptive to AI safety messaging, much more so than I would have expected a few years ago.
It sounds like you already have some takes on this question — in that case, it could be worth writing something up to make the case for why EAs should be advocating for a pause. I’d be happy to offer feedback if you do.