Whenever I get too anxious about the risks from misaligned AI, I find myself wishing the community would invest in slowing down AI progress to buy some time for alignment researchers, and that such a plan would actually work without causing other serious problems. What is the EA consensus? Would this be a good solution? A desperate one? How much has the community thought about this, and what are some conclusions or suggestions for next steps? I've only found this recent blog post on the topic.

A few suggestions for next steps:

Has anything else been written on this topic?

I would be surprised if we could do much to slow AI, but I agree that at least a few people should look into this approach.

I think it could be a highly valuable project for someone to form a community around this as long as they were careful not to allow the discussion of extreme options within the group.

A good place for such a group to start could be looking at previous efforts where new technologies have been regulated successfully.

Examples: CRISPR on human embryos, the Treaty on the Prohibition of Nuclear Weapons, the Biological Weapons Convention, and, from a slightly different angle, the Antarctic Treaty, which seemed effective at slowing down something competitive between countries while still allowing important scientific research to continue through international cooperation.

Maybe looking at some of these case studies could be a good...

Chris Leong · 15d
Great suggestion. I would love to see someone diving deeper into these topics.
Michael Huang · 15d
Thanks for bringing up the idea of case studies. It would also be useful to study verification, compliance, and enforcement of these regulations: "Trust, but verify" (https://en.wikipedia.org/wiki/Trust,_but_verify).
Eleni_A · 15d
Thank you, that's great. I'd be keen to start a project on this. Anyone who's interested, please DM me and we can start brainstorming and form a group.