In March of this year, 30,000 people, including leading AI figures like Yoshua Bengio and Stuart Russell, signed an open letter calling on AI labs to pause for at least six months the training of AI systems more powerful than GPT-4. While it seems unlikely that this letter will succeed in pausing the development of AI, it did draw substantial attention to slowing AI as a strategy for reducing existential risk.
While initial work has been done on this topic (this sequence links to some relevant work), many areas of uncertainty remain. I’ve asked a group of participants to discuss and debate, here on the EA Forum, various aspects of the value of advocating for a pause on the development of AI, in a format loosely inspired by Cato Unbound.
- On September 16, we will launch with three posts:
  - David Manheim will share a post giving an overview of what a pause would include, how it would work, and some possible concrete steps forward
  - Nora Belrose will share a post outlining some of the risks of a pause
  - Thomas Larsen will post a concrete policy proposal
- After this, we will release one post per day, each from a different author
- Many of the participants will also be commenting on each other’s work
Responses from Forum users are encouraged; you can share your own posts on this topic or comment on the posts from participants. You’ll be able to find the posts by looking at this tag (remember that you can subscribe to tags to be notified of new posts).
I think it is unlikely that this debate will result in a consensus agreement, but I hope that it will clarify the space of policy options, why those options may be beneficial or harmful, and what future work is needed.
People who have agreed to participate
These are in random order, and they’re participating as individuals, not representing any institution:
- David Manheim (ALTER)
- Matthew Barnett (Epoch AI)
- Zach Stein-Perlman (AI Impacts)
- Holly Elmore (AI pause advocate)
- Buck Shlegeris (Redwood Research)
- Anonymous researcher (Major AI lab)
- Anonymous professor (Anonymous University)
- Rob Bensinger (Machine Intelligence Research Institute)
- Nora Belrose (EleutherAI)
- Thomas Larsen (Center for AI Policy)
- Quintin Pope (Oregon State University)
Scott Alexander will be writing a summary/conclusion of the debate at the end.
Thanks to Lizka Vaintrob, JP Addison, and Jessica McCurdy for help organizing this, and Lizka (+ Midjourney) for the picture.
Perhaps. I can't really engage on that, because "moral disgust" doesn't explain why multiple distinct nations with slightly different views on morality all refuse to practice it. My main comment is that I think it's helpful to weigh the potential gains against the potential risks.
Potential gain: yes, you could identify alleles with promoters associated with the nervous system that are statistically correlated with higher IQ. If this were done at large scale, then 20-30 years later people might be marginally smarter.
But how much gain is this? How long has it been since humans even had the tools to attempt genetic engineering? Can you project any gain whatsoever 20-30 years from now?
I would argue the answers are: minimal gain; less than 10 years since reliable tools have existed; and almost no net benefit, because in 20-30 years any task that "average or below" IQ individuals struggle with, AI tools will be able to complete in seconds.
Potential risks: each editing error that goes undetected in early fetal development saddles the individual with lifelong birth defects that may require a permanent caretaker or hospitalization. This can cost many millions of dollars, so much liability that essentially only a government could afford to practice genetic engineering.
And governments are slow; remember, it has only been about 10 years since this was even feasible.
Conclusion: the risk-to-benefit ratio for human genetic engineering offers minimal gain, and even the best case is slow, which means little annual ROI. By comparison, the decision to industrialize China generated more wealth than all of humanity had accumulated before that point, and took only slightly longer than a single iteration of human genetic engineering would.
The potential benefits of AI, by contrast, could double the wealth of humanity within a few years, i.e. roughly 10-100 percent annual ROI.
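As a rough sanity check on that range (my arithmetic; the doubling time itself is an assumption): if total wealth doubles in $n$ years, the implied compound annual growth rate $r$ satisfies $(1+r)^n = 2$, so

$$r = 2^{1/n} - 1,$$

which gives $r = 100\%$ for $n = 1$ and $r \approx 10.4\%$ for $n = 7$, roughly spanning the 10-100 percent quoted above.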