Greg_Colbourn

3457 · Joined Sep 2014

Bio

Founder of CEEALAR (née the EA Hotel; ceealar.org)

Comments
603

Even a 10-year delay is worth a huge amount (in expectation). We may well have a very different view of alignment by then (perhaps being pretty solid on its impossibility? Or having a detailed plan for implementing it? Or even the seemingly very unlikely "there's nothing to worry about"), which would allow us to iterate on a better strategy. We shouldn't assume that our outlook will be the same after 10 years!

but we should try to do both if there are sane ways to pursue both options today.

Yes! (And I think there are sane ways).

If you want MIRI to update from "both seem good, but alignment is the top priority" to your view, you should probably be arguing (or gathering evidence) against one or more of these claims:

  • AGI alignment is a solvable problem.

There are people working on this (e.g. Yampolskiy, and Landry & Ellen), and it's definitely something I want to spend more time on (note that the writings so far could definitely do with a more accessible distillation).

Absent aligned AGI, there isn't a known clearly-viable way for humanity to achieve a sufficiently-long reflection

I really don't think we need to worry about this now. AGI x-risk is an emergency - we need to deal with that emergency first (e.g. kick the can down the road 10 years with a moratorium on AGI research); then, when we can relax a little, we can have the luxury of thinking about long-term flourishing.

Humanity has never succeeded in any political task remotely as difficult as the political challenge of creating an enforced and effective 50+ year global moratorium on AGI.

I think this can definitely be argued against (and I will try to write more as/when I make a more fleshed-out post calling for a global AGI moratorium). For a start, without all the work on nuclear non-proliferation and risk reduction, we may well not be here today. Yes, there has been proliferation, but there hasn't been an all-out nuclear exchange yet! It's now 77 years since a nuclear weapon was used in anger. That's a pretty big result, I think! Also, global taboos around bio topics such as human genetic engineering are well established. Once such a taboo is established, enforcement becomes a lesser concern, as you are then only fighting against isolated rogue elements rather than established megacorporations. Katja discusses such taboos in her post on slowing down AI.

  • EAs have not demonstrated the ability to succeed in political tasks that are way harder than any political task any past humans have succeeded on.

Fair point. I think we should be thinking much wider than EA here. This needs to become mainstream, and fast.

Also, I should say that I don't think MIRI should necessarily be diverting resources to work on a moratorium. Alignment is your comparative advantage so you should probably stick to that. What I'm saying is that you should be publicly and loudly calling for a moratorium. That would be very easy for you to do (a quick blog post/press release). But it could have a huge effect in terms of shifting the Overton Window on this. As I've said, it doesn't make sense for this not to be part of any "Death with Dignity" strategy. The sensible thing when faced with ~0% survival odds is to say "FOR FUCK'S SAKE CAN WE AT LEAST TRY AND PULL THE PLUG ON HUMANS DOING AGI RESEARCH!!", or even "STOP BUILDING AGI YOU FUCKS!" [Sorry for the language, but I think it's appropriate given the gravity of the situation, as assumed by talk of 100% chance of death etc.]

I'm saying all this because I'm not afraid of treading on any toes. I don't depend on EA money (or anyone's money) for my livelihood or career[1]. I'm financially independent. In fact, my life is pretty good, apart from facing impending doom from this! I mean, I don't need to work to survive[2], I've got an amazing partner and a supportive family. All that is missing is existential security! I'd be happy to have "completed it mate" (i.e. I've basically done this with the normal life of house, car, spouse, family, financial security etc.); but I haven't - remaining is this small issue of surviving for a normal lifespan, having my children survive and thrive, and ensuring the continuation of the sentient universe as we know it...

  1. ^

     Although I still care about my reputation in EA to be fair (can't really avoid this as a human)

  2. ^

    All my EA work is voluntary

I think what's happened with Google/DeepMind and OpenAI/Microsoft has been much worse than safety washing. In effect it's been "existential safety washing"! The EA and AI x-risk communities have been far too placated by the existence of x-safety teams at these big AGI capabilities companies. I think at this point we need to be trying other things, like pushing for a moratorium on AGI development.

This is good, but I don't think it goes far enough. And I agree with your comments re "might not want MIRI to say 'that move isn't available to us'". It might not be realistic to get the entire world to take a break on AGI work, but it's certainly conceivable, and I think maybe at this point more realistic than expecting alignment to be solved in time (or at all?). It seems reasonable to direct marginal resources toward pushing for a moratorium on AGI rather than more alignment work (although I still think this should at least be tried too!)

Yours and Nate's statement still implicitly assumes that AGI capabilities orgs are "on our side". The evidence is that they are clearly not. Demis is voicing caution at the same time that Google leadership have started a race with OpenAI (Microsoft). It's out of Demis' (and his seemingly toothless ethics board's) hands. What's needed is less acceptance of what has been tantamount to "existential safety washing", and more realpolitik. Better now might be to directly appeal to the public and policymakers. Or find a way to strategise with those with power. For example, should the UN Security Council be approached somehow? This isn't "defection".

The idea that Bostrom or Yudkowsky ever thought "the alignment problem is a major issue, but let's accelerate to AGI as quickly as possible for the sake of reaching the Glorious Transhumanist Future sooner" seems like revisionism to me

I'm not saying this is (was) the case. It's more subtle than that. It's the kind of background worldview that makes people post this (or talk of "pivotal acts") rather than this.

The message of differential technological development clearly hasn't had the needed effect. No meaningful heed has been paid to it by the top AI companies. What we need now are much stronger statements, i.e. ones that use the word "moratorium". Why isn't MIRI making such statements? It doesn't make sense to go to 0 hope of survival without even seriously attempting a moratorium (or at the very least, publicly advocating for one).

If the AGI is so intelligent and powerful that it represents an existential risk to humanity, surely it is definitionally impossible for us to rein it in? And therefore surely the best approach would be ... to prevent work to develop AI

I'm starting to think that this intuition may be right (further thoughts in linked comment thread).

Yes, I think it's good that there is basically consensus here on AGI doom being a serious problem; the argument seems to be one of degree. Even OP says p(AGI doom by 2070) ~ 10%.

Don't get me wrong, I'd love to live in a glorious transhuman future (like e.g. Iain M. Banks's Culture), but I just don't think it's worth the risk of doom, as things stand. Maybe after a few decades of moratorium, when we know a lot more, we can reassess (and hopefully we will still be able to have life extension, so will personally still be around).

It now seems unfortunate that the AI x-risk prevention community was seeded from the transhumanist/techno-utopian community (e.g. Yudkowsky and Bostrom). This historical contingency is probably a large part of the reason why a global moratorium on AGI has never been seriously proposed/attempted.

The conjunctive/disjunctive dichotomy seems to be a major crux when it comes to AI x-risk. How much do belief in human progress, belief in a just world, the Long Peace, or even deep-rooted-by-evolution (genetic) collective optimism (all things in the "memetic water supply") play into the belief that the default is not-doom? Even as an atheist, I think it's sometimes difficult (not least because it's depressing to think about) to fully appreciate that we are "beyond the reach of God".

Social/moral consensus? There is precedent with e.g. recombinant DNA or human genetic engineering (if only the AI Asilomar conference had been similarly focused on a moratorium!). It might be hard to enforce globally indefinitely, but we might at least be able to kick the can down the road a couple of decades (as seems to have happened with the problematic bio research).

(It should be achieved without such AGIs running around, if we want to minimise x-risk. Indeed, we should have started on this already! I'm starting to wonder whether it might actually be the best option we have, given the difficulty, or perhaps impossibility(?), of alignment.)
 
