Humanity has some history of banning dangerous technologies: human cloning, GMO food, CRISPR editing of humans. (I'm trying to think of others.) Also, I'd guess most people (non-EAs) consider AGI risks as serious as biorisks. There has been some relevant news recently (e.g., Italy banning ChatGPT, Musk calling for a pause on AI development). Do you think governments will start banning AGI development in the next few years? Will engineers slow down AI development themselves? Or do you predict humanity will be too late to ban AGI before it becomes capable of causing a catastrophe?