Whenever I get too anxious about the risks from misaligned AI, I find myself wishing the community would invest in slowing down AI progress to buy time for alignment researchers, and that such a plan would actually work without causing other serious problems. What is the EA consensus? Would this be a good solution? A desperate one? How much has the community thought about this, and what are some conclusions or suggestions for next steps? I've only found this recent blog post on the topic.
A good place for a group like this to start could be examining previous efforts where new technologies have been regulated successfully.
Examples: CRISPR on human embryos, the Treaty on the Prohibition of Nuclear Weapons, and the Biological Weapons Convention. From a slightly different angle, the Antarctic Treaty seems to have been effective at slowing down a competitive dynamic between countries while still allowing important scientific research to continue through international cooperation.
Maybe looking at some of these case studies could be a good starting point for considering their similarities to and differences from AI? E.g.