Humans have some history of banning dangerous technologies: human cloning, GMO food, CRISPR on humans (I'm trying to think of others). Also, I'd guess most people (non-EAs) would consider AGI risks as serious as biorisks. There has been some relevant news recently (e.g., Italy banning ChatGPT, Musk calling for a pause on AI development). Do you think governments will start banning AGI development within a few years? Will engineers slow down AI development themselves? Or do you predict humanity will be too late to ban AGI before it's able to cause a catastrophe?
Is this question too hard to answer? Also, if we predict that AGI will be banned or at least restricted within a few years, wouldn't the problem become less urgent, lowering the priority of AI safety?