Greg_Colbourn ⏸️

5647 karma · Joined
Interests:
Slowing down AI

Bio


Global moratorium on AGI, now (Twitter). Founder of CEEALAR (née the EA Hotel; ceealar.org)

Posts
27


Comments
1109

Vinding says:

There is a key point on which I agree strongly with advocates for an AI pause: there is a massive moral urgency in ensuring that we do not end up with horrific AI-controlled outcomes. Too few people appreciate this insight, and even fewer seem to be deeply moved by it.

At the same time, I think there is a similarly massive urgency in ensuring that we do not end up with horrific human-controlled outcomes. And humanity’s current trajectory is unfortunately not all that reassuring with respect to either of these broad classes of risks ...

The upshot for me is that there is a roughly equal moral urgency in avoiding each of these categories of worst-case risks

But he does not justify this equality. It seems highly likely to me that ASI-induced s-risks would be on a much larger scale than human-induced ones (owing to ASI being much more powerful than humanity), creating a massive asymmetry in favour of preventing ASI.

Agree. But I'm sceptical that we could robustly align or control a large population of such AIs (and how would we cap the population?), especially considering the speed advantage they are likely to have.

I think human extinction (from ASI) is highly likely to happen, and soon, unless we stop ASI from being built.[1]

See my comments in the Symposium for further discussion.

  1. ^

    And that the ASI that wipes us out won't matter morally (to address footnote 2 on the statement).

Yeah, I think a lot of the overall debate -- including what is most ethical to focus on(!) -- depends on AI trajectories and control.

What level of intelligence are you imagining such a system to be at? Some percentile on the scale of top-performing humans? Somewhat above the most intelligent humans?

Not sure why this is downvoted; it isn't a rhetorical question - I genuinely want to know the answer.

Fair point. But AI is indeed unlikely to top out at merely "slightly more" intelligent. And it has the potential for a massive speed/numbers advantage too.

Why do you think this? What makes you think that it's possible at all?[1] And what do you mean by "large minority"? Can you give an approximate percentage?

  1. ^

    Or to paraphrase Yampolskiy: what makes it possible for a less intelligent species to indefinitely control a more intelligent species (when this has never happened before)?

Thinking about it some more, I think I mean something more like "subjective decades of strategising and preparation at the level of intelligence of the second mover", so it would be able to counter anything the second mover does to try to gain power.

But there would also be software intelligence explosion effects (I think the figures you have in your footnote 37 are overly conservative - human level is probably closer to "GPT-5").

I don't think this is likely to happen, though, absent something like moral realism (centred on sentient experiences) being true and the AI discovering this.
