
An extremely common rebuttal I hear to a pause on AI capability development is that another party will simply keep developing and will achieve the technology instead of the party that overregulated or paused. I have heard this from the e/acc community, from basically every friend of mine, and from many people in the AI safety space when prompted about stepping back from the race to AGI. On the other hand, I have also heard valid arguments that a global pause (with the hope of global AGI collaboration afterwards) is possible and has precedent (e.g. the global ban on human cloning).

I am here asking for opinions on the topic of a global pause/moratorium on AGI development. Why is this idea barely discussed? How does the EA community tend to skew on this topic?

Your engagement is appreciated.





Almost no one is suggesting that a unilateral "pause" would be effective, but there is growing support for some non-unilateral version as an important approach to be negotiated.

There was a quite serious discussion of the question, and of the different views, on the forum late last year (which I participated in), summarized by Scott Alexander here: https://forum.effectivealtruism.org/posts/7WfMYzLfcTyDtD6Gn/pause-for-thought-the-ai-pause-debate

I think it's probably important to note that some people (i.e. me) do in fact think a unilateral pause by one of the major players (e.g. the USA, China, the UK, or the EU) could be pretty effective if done in the right way with the right messaging, and likely useful in pushing towards a coordinated or uncoordinated global pause. Particularly if the US paused, I could very much see this starting a chain reaction.
