Agree. But I'm sceptical that we could robustly align or control a large population of such AIs (and how would we cap the population?), especially considering the speed advantage they are likely to have.
Why do you think this? What makes you think that it's possible at all?[1] And what do you mean by "large minority"? Can you give an approximate percentage?
Or to paraphrase Yampolskiy: what makes it possible for a less intelligent species to indefinitely control a more intelligent species (when this has never happened before)?
Thinking about it some more, I think I mean something more like "subjective decades of strategising and preparation at the level of intelligence of the second mover", so that it would be able to counter anything the second mover does to try to gain power.
There would also be software intelligence explosion effects (I think the figures in your footnote 37 are overly conservative; human level is probably closer to "GPT-5").
Vinding says:
But he does not justify this equality. It seems highly likely to me that ASI-induced s-risks are on a much larger scale than human-induced ones (owing to ASI being much more powerful than humanity), creating a (massive) asymmetry in favour of preventing ASI.