Whenever I feel too anxious about the risks from misaligned AI, I find myself wishing the community would invest in slowing down AI progress to buy time for alignment researchers, and that this plan would actually work without causing other serious problems. What is the EA consensus? Would this be a good solution? A desperate one? How much has the community thought about this, and what are some conclusions or suggestions for next steps? I've only found this recent blog post on the topic.

4 Answers

Michael Huang

Jul 26, 2022

60

A few suggestions for next steps:

Sam Clarke

Aug 10, 2022

50

Relevant discussion from a couple of days ago: https://astralcodexten.substack.com/p/why-not-slow-ai-progress

tyleralterman

Jul 29, 2022

50

Has anything else been written on this topic?

Chris Leong

Jul 26, 2022

30

I would be surprised if we could do much to slow AI, but I agree that at least a few people should look into this approach.

I think it could be a highly valuable project for someone to form a community around this, as long as they were careful not to allow discussion of extreme options within the group.

A good place for a group like this to start could be previous efforts where new technologies have been regulated successfully.

Examples: CRISPR on human embryos, the Treaty on the Prohibition of Nuclear Weapons, the Biological Weapons Convention, and, from a slightly different angle, the Antarctic Treaty, which seemed effective at slowing down a competitive dynamic between countries while still allowing important scientific research to continue through international cooperation.

Maybe looking at some of these case studies could be a good starting point for considering the similarities and differences with AI? E.g.:

  • Were the above regulations imposed on technology that was already in use, or on technology that was still emerging?
  • Was public opinion important in pushing for the regulation, or was the public hardly involved at all?
  • Who was incentivised to develop the new technology or research, and who funded it (companies, governments, philanthropy, NGOs, users)?
  • Who stood to benefit from the technology being developed? Did people make counterfactual arguments at the time (e.g. that developing the tech further would very likely produce valuable medicine alongside something dangerous)?
2
Chris Leong
10mo
Great suggestion. I would love to see someone dive deeper into these topics.
2
Michael Huang
10mo
Thanks for bringing up the idea of case studies. It would also be useful to study verification, compliance, and enforcement of these regulations: "Trust, but verify" (https://en.wikipedia.org/wiki/Trust,_but_verify).
2
Eleni_A
10mo
Thank you, that's great. I'd be keen to start a project on this. If you're interested, please DM me and we can start brainstorming, form a group, etc.