We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

 

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources." Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.  We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.

 

Signatories include Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak, Yuval Noah Harari, Andrew Yang, Connor Leahy (Conjecture), and Emad Mostaque (Stability).

Edit: covered in NYT, BBC, WaPo, NBC, ABC, CNN, CBS, Time, etc. See also Eliezer's piece in Time.

Edit 2: see FLI's FAQ.

Edit 3: see FLI's report Policymaking in the Pause.

Edit 4: see AAAI's open letter.

Comments (38)

I signed and strongly support this open letter.

Let me add a little global perspective (as a US citizen who's lived in 4 countries outside the US for a total of 14 years, and who doesn't always see the US as the 'good guy' in geopolitics).

The US is 4% of the world's population. The American AI industry is (probably) years ahead of any other country, and is pushing ahead with the rationale that 'if we don't keep pushing ahead, a bad actor (which usually implies China) will catch up, and that would be bad'. Thus, we impose AI X-risk on the other 96% of humans without their informed consent, support, or oversight.

We used the same arms-race rationale in the 1940s to develop the atomic bomb ('if we don't do it, Germany will') and in the 1950s to develop the hydrogen bomb ('if we don't do it, the Soviet Union will'). In both cases, we were the bad actor. The other countries were nowhere close to us. We exaggerated the threat that they would catch up, and we got the American public to buy into that narrative. But we were really the ones pushing ahead into X-risk territory. Now we're promoting the same narrative for AI development. 'The AI arms race cannot be stopped', 'AGI is inevitable', 'the genie is out of the bottle', 'if not us, then China', etc, etc.

We Americans have a very hard time accepting that 'we might be the baddies'. We are uncomfortable acknowledging any moral obligations to the rest of humanity (if they conflict in any way with our geopolitical interests). We like to impose our values on the world, but we don't like to submit to any global oversight by others. 

I hope that this public discussion about AI risks also includes some soul-searching by Americans -- not just the AI industry, but all of us, concerning the way that our country is, yet again, pushing ahead with developing extremely dangerous technology, without any sense of moral obligation to others.

Having taught online courses for CUHK-Shenzhen in China for a year, and discussed EA, AI, and X-risk quite a bit with the very bright young students there, I often imagine how they would view the recent developments in the American AI industry. I think they would be appalled by our American hubris. They know that the American political system is too partisan, fractured, slow, and dysfunctional to impose any effective regulation on Big Tech. They know that American tech companies are legally obligated (by 'fiduciary duty' to shareholders) to prioritize quarterly profits over long-term human survival. They know that many Bay Area tech bros supporting AI are transhumanists, extropians, or Singularity-welcomers who look forward to humanity being replaced by machines. They know that many Americans view China as a reckless, irresponsible, totalitarian state that isn't worth listening to about any AI safety concerns. So, I imagine, any young Chinese student who's paying attention would take an extremely negative view of the risks that the American AI industry is imposing on the other 7.7 billion people in the world.

They know that American tech companies are legally obligated (by 'fiduciary duty' to shareholders) to prioritize quarterly profits over long-term human survival.

Fwiw, I read here that there actually is no such legal duty to prioritize profits.

MaxRa - I might have been wrong about this; I'm not at all an expert in corporate law. Thanks for the informative link.

A more accurate claim might be 'American tech companies tend to prioritize short-term profits over long-term human survival'. 

Not an expert either, but it's safest to say that the corporate-law question is nuanced and not free from doubt. It's pretty clear there's no duty to maximize short-term profits, though.

But we can surmise that most boards that allow the corporation to seriously curtail its profits -- at least its medium-term profits -- will get replaced by shareholders soon enough. So the end result is largely the same.

Some people have criticised the timing. I think there's some validity to this, but the trigger has been pulled and cannot be unpulled. You might say that we could try to write another similar letter a bit further down the track, but it's hard to get people to do the same thing twice and even harder to get people to pay attention.

So I guess we really have the choice to get behind this or not. I think we should get behind this as I see this letter as really opening up the Overton Window. I think it would be a mistake to wait for a theoretical perfectly timed letter to sign, as opposed to signing what we have in front of us.

I think it's great timing. I've been increasingly thinking that now is the time for a global moratorium. In fact, I was up until the early hours drafting a post on why we need such a moratorium! Great to wake up and see this :)

I expect there will be much more public discussion on regulating AI and much more political willingness to do ambitious things about AI in the coming years, when economic and cultural impacts become more apparent, so I'm spontaneously wary of investing significant reputation in something (potentially) not sufficiently well thought through.

Also, it's not a binary of signing vs. not-signing. E.g. risk reducers can also enter the discussion caused by the letter and make constructive suggestions about what will contribute more to long-term safety.

(Trying to understand the space better, not being accusatory.)

How is it that there is not a well-thought-out response right now?

E.g. it seems that it has probably been clear to people in AI safety / governance for some time that there would be moments when the Overton window widens and placing some demands becomes more feasible than at other times, so I am surprised there isn't a letter like this that is more thought through / endorsed by people who are not happy with the current letter.

Good question. I'm still relatively new to thinking about AI governance, but would guess that two puzzle pieces are 

a) broader public advocacy has not been particularly prioritized so far

  • there's uncertainty about what concretely to advocate for, and still a lot of (perceived) need for nuance for the more concrete ideas that do exist
  • there are other ways of more targeted advocacy, such as talking to policy makers directly, or talking to leaders in ML about risk concerns

b) there are not enough people working on AI governance issues to be so prepared for things

  • not sure what the numbers are, but e.g. it seems like at least a few key sub-topics in AI governance rely on the work of like 1-2 extremely busy people

Also, the letter just came out. I'd not be very surprised if a few more experienced people would publish responses in which they lay out their thinking a bit, especially if the letter seems to gather a lot of attention.

Why is it considered bad timing?

Some people are worried that this will come off as "crying wolf".

Have important "crying wolf" cases actually happened in real life? About societal issues? I mean, yeah, it is a possibility, but the alternatives seem so much worse.

How do we know when we are close enough to the precipice for other people to be able to see it and to ask to stop the race towards it? General audiences lately have been talking about how surprised they are by AI, so it seems like perfect timing to me.

Also, if people get used to benefiting from and working with narrow, safe AIs, they could set themselves against stopping or slowing them down.

Even if more people could agree to decelerate in the future, it would take more time to stop or slow down with more stakeholders moving at higher speed. And of course, by then we would be closer to the precipice than if we had started the deceleration earlier.

JWS

I sympathise with others who have particular concerns about the details of this letter, and most especially the fact that signatures were unverified (though the link now includes an email validation at the bottom). A good discussion from a sceptical perspective can be found in this thread by Matthew Barnett. However, I think on balance I support FLI releasing this letter, mostly for reasons that Nate states here. I think that if you're pessimistic about the current direction of AI research, and the balance between capabilities/safety, then one of the major actors who would have the ability to step in and co-ordinate those in a race and buy more time would be the U.S. Government. 

Again, I sympathise with scepticism about Government intentions/track record/capability to regulate, but I think I'm a lot more sympathetic to it than the LessWrong commenters. That might be a US/UK cultural difference, but I don't think the counterfactual of no public letter and no government action leads anywhere good, and I simply don't share the same worldview of Neo-Realism in International Relations and Public Choice in Domestic Politics that makes any move in that direction appear to be net-negative ¯\_(ツ)_/¯

Finally, I hope that this letter can serve as a push for AI Governance to be viewed as something that goes hand-in-hand with Technical AI Safety Research. I think that both are very important parts of making this century go well, and I think that the letter is a directional step that we can get behind and improve upon, rather than dismiss.

and most especially the fact that signatures were unverified (though the link now includes an email validation at the bottom).

I wonder if this is a consequence of the embargo being broken and hence their not being fully ready.

The Forbes article “Elon Musk's AI History May Be Behind His Call To Pause Development” discusses some interesting OpenAI history and an explanation for how this FLI open letter may have come to be.

(Note: I don’t believe this is the only explanation, or even the most likely one; if pushed I’d assign it maybe 20%, though with large uncertainty bars, of the counterfactual force behind the FLI letter coming into existence. Note also: I’ve signed the letter, I think it’s net positive.)

Some excerpts:

OpenAI was founded as a nonprofit in 2015, with Elon Musk as the public face of the organization. [...] OpenAI was co-founded by Sam Altman, who butted heads with Musk in 2018 when Musk decided he wasn’t happy with OpenAI’s progress. [...] Musk worried that OpenAI was running behind Google and reportedly told Altman he wanted to take over the company to accelerate development. But Altman and the board at OpenAI rejected the idea that Musk—already the head of Tesla, The Boring Company and SpaceX—would have control of yet another company.

“Musk, in turn, walked away from the company—and reneged on a massive planned donation. The fallout from that conflict, culminating in the announcement of Musk’s departure on Feb 20, 2018, [...],” Semafor reported last week.

When Musk left his stated reason was that AI technology being developed at Tesla created a conflict of interest. [...] And while the real reason Musk left OpenAI likely had more to do with the power struggle reported by Semafor, there’s almost certainly some truth to the fact that Tesla is working on powerful AI tech.

The fact that Musk is so far behind in the AI race needs to be kept in mind when you see him warn that this technology is untested. Musk has had no problem with deploying beta software in Tesla cars that essentially make everyone on the road a beta tester, whether they’ve signed up for it or not.

Rather than issuing a statement solely under his own name, it seems like Musk has tried to launder his concern about OpenAI through a nonprofit called the Future of Life Institute. But as Reuters points out, the Future of Life Institute is primarily funded by the Musk Foundation.

Of course, there’s also legitimate concern about these AI tools. [...]

Musk was perfectly happy with developing artificial intelligence tools at a breakneck speed when he was funding OpenAI. But now that he’s left OpenAI and has seen it become the frontrunner in a race for the most cutting edge tech to change the world, he wants everything to pause for six months.

Update (April 14th; 23 days after the open letter was published): Musk starts new AI company called X.AI.

Elon Musk is developing plans to launch a new artificial intelligence start-up to compete with ChatGPT-maker OpenAI.

...

Musk incorporated a company named X.AI on March 9, according to Nevada business records.

...

For the new project, Musk has secured thousands of high-powered GPU processors from Nvidia.

...

Musk is recruiting engineers from top AI labs including DeepMind, according to those with knowledge of his plans.

Notice the word "all" in "all AI labs", and the plural "governments". This shouldn't just be focused on the West. I hope the top signatories are reaching out to labs in China and other countries. And the UN for that matter. This needs to be global to be effective.

JWS

Agreed. I find the implication that if the US slows down, China will inevitably race ahead to be far from certain. If China wants the 21st Century to be 'The Chinese Century', then a misaligned AGI seems a pretty surefire way to remove that from the table. I don't have expertise in the state of Chinese AI research (though I do know some Western AI experts are highly sceptical of their ability to catch up), but I'm not sure why arguments to slow down AI progress/focus more on safety would be persuasive in one part of the world but not another.

The LessWrong comments here are generally quite brutal, and I think I disagree, which I'll try to outline very briefly below. But I think it may be more fruitful here to ask some questions I had, to break down the possible subpoints of disagreement about the goodness of this letter.

I expected some negative reaction because I know that Elon is generally looked down upon by the EAs that I know, with some solid backing to those claims when it comes to AI given that he co-founded OpenAI. But with the immediate press attention it's getting, in combination with some heavy-hitting signatures (including Elon Musk, Stuart Russell, Steve Wozniak (Co-founder, Apple), Andrew Yang, Jaan Tallinn (Co-Founder, Skype, CSER, FLI), Max Tegmark (President, FLI), and Tristan Harris (from The Social Dilemma), among many others), I kind of can't see the overall impact of this letter being net negative. At worst it seems mistimed and with technical issues, but at best it seems one of the better calls to action (or global moratoriums, as Greg Colbourn put it) that could have happened, given AI's current presence in the news and much of the world's psyche.

But I'm not super certain in anything, and generally came away with a lot of questions, here's a few:

  1. How well does this specific call for a pause on developing strong language models line up with how AI x-risk people would go about crafting a verifiable, tangible metric for AI labs to follow to reduce risk? Is this to be seen as a good first step? Or something that might actually be close enough to what we want that we could rally around this metric, given its endorsement by this influential group?
    1. This helps clarify the "6 months isn't enough to develop the safety techniques they detail" objection, which was fairly well addressed here, as well as the "Should OpenAI be at the front" objection.
  2. How should we view messages that are geared more towards non-x-risk AI worries than the community tends to be? They ask a lot of good questions here, but they are also still asking "Should we let machines flood our information channels with propaganda and untruth?" – an important question, but one that to me seems to deviate from AI x-risk concerns.
    1. This is at least tangential to the "This letter felt rushed" objection, because even if you accept it was rushed, the next question is "Well, what's our bar for how good something has to be before it is put out into the world?" 
  3. Are open letters with influential signees impactful? This letter seems to me to be neutral at worst and quite impactful at best, but I have very little to back that up, and honestly can't recall any specific case I know of where an open letter caused significant change at the global/national level.
  4. Given the recent desire to distance from potentially fraught figures, would that mean shying away from a group wide EA endorsement of such a letter because a wild card like Elon is a part of it? I personally don't think he's at that level, but I know other EAs who would be apt to characterize him that way.
  5. Do I sign the post? What is the impact of adding signatures with significantly less professional or social clout to such an open letter? Does it promote the message of AI risk as something that matters to everyone? Or would someone look at "Tristan Williams, Tea Brewer" and think "oh, what is he doing on this list?" 

I'll have a go at answering your questions:

1. This is a great first step. Really any kind of half-decent foot in the door is good at this stage, whilst the shock of GPT-4 is still fresh. A much better letter in even two months' time would be worse, I think.

2. Engendering broad support for a moratorium is good. We don't need everyone to be behind it for x-risk reasons, but we do need a global majority to be behind it. This is why I've said that it might be good if a taboo around AGI development can be inculcated in society - a taboo is stronger than regulation.

3. Would be interested to see data on this.

4. I don't think this is a significant concern. With broad enough support everyone can have at least a few people they greatly admire on the list. 

5. Yes, I think the more signatures, the better. We need the whole world (or at least a large majority of it) to get behind a global moratorium on AGI development!

  1. I tend to agree at first glance, but when you take into account the counternarrative that has cropped up of "this is just a list of losing AI developers trying to retake control", I wonder if this will trudge on proactively or become fuel for the "the people worried about AI safety are just selfish elitists" fire that Timnit Gebru is always stoking.
  2. I think I just flatly agree here.
  3. Someone from lesswrong mentioned the Letter of three hundred which I'd like to check out in this context.
  4. Mmm not so sure on this. I think there's a much stronger "x, who I really don't like, is involved in this so I won't involve myself in it" motivation nowadays. Twitter is a relevant example here, where Musk joining was enough for many to leave, even if people they still admire and were interested in engaging with were still on the platform. I like the paradigm of "everyone has someone to like so we can all like it" but think today we've moved more towards "distancing from people you don't like" in a way that makes me wonder if the former is still possible. What do you think about that though?
  5. Cool, will maybe sign then!

Thanks for responding too! Appreciate engagement, it makes thinking about these sorts of things much more worth it. 

  1. Yudkowsky's TIME article is a good counter to this. The blunt, no-holds-barred version of what all the fuss is about.

  2. :)

  3. Thanks for the link, and good that there is precedent.

  4. How many big accounts that threatened to leave Twitter actually have? I've seen a lot just continue to threaten to but keep posting. As Elon says, at least it's not boring. I hope that we're at a high point of polarisation and things will get better. Maybe the Twitter algorithm being open sourced could be a first step to this (i.e. if social media becomes less polarised, due to anger being downweighted or something, as a result).

  5. Great :)

Scott Aaronson, a prominent quantum computing professor who's spent the last year working on alignment at OpenAI, has just written a response to this FLI open letter and to Yudkowsky's TIME piece: "If AI scaling is to be shut down, let it be for a coherent reason".

I don't agree with everything Scott has written here, but I found these parts interesting:

People might be surprised about the diversity of opinion about these issues within OpenAI, by how many there have discussed or even forcefully advocated slowing down.

...

Why six months? Why not six weeks or six years? [...] With the “why six months?” question, I confess that I was deeply confused, until I heard a dear friend and colleague in academic AI, one who’s long been skeptical of AI-doom scenarios, explain why he signed the open letter. He said: look, we all started writing research papers about the safety issues with ChatGPT; then our work became obsolete when OpenAI released GPT-4 just a few months later. So now we’re writing papers about GPT-4. Will we again have to throw our work away when OpenAI releases GPT-5? I realized that, while six months might not suffice to save human civilization, it’s just enough for the more immediate concern of getting papers into academic AI conferences.

...

Look: while I’ve spent multiple posts explaining how I part ways from the Orthodox Yudkowskyan position, I do find that position intellectually consistent, with conclusions that follow neatly from premises. The Orthodox, in particular, can straightforwardly answer all four of my questions above [...]

On the other hand, I'm deeply confused by the people who signed the open letter, even though they continue to downplay or even ridicule GPT’s abilities, as well as the “sensationalist” predictions of an AI apocalypse. I’d feel less confused if such people came out and argued explicitly: “yes, we should also have paused the rapid improvement of printing presses to avert Europe’s religious wars. Yes, we should’ve paused the scaling of radio transmitters to prevent the rise of Hitler. Yes, we should’ve paused the race for ever-faster home Internet to prevent the election of Donald Trump. And yes, we should’ve trusted our governments to manage these pauses, to foresee brand-new technologies’ likely harms and take appropriate actions to mitigate them.”

The point of the letter is to raise awareness of AI safety; it's not that they actually think a pause will be implemented. We should take the win.

Yoshua Bengio, a Turing Award winner, published a response to this open letter in the last week, in which he says:

We must take the time to better understand these systems and develop the necessary frameworks at the national and international levels to increase public protection.

...

It is because there is an unexpected acceleration – I probably would not have signed such a letter a year ago – that we need to take a step back, and that my opinion on these topics has changed.

...

We succeeded in regulating nuclear weapons on a global scale after World War II, we can reach a similar agreement for AI.

I think it's promising – though I still think there's a long way to go – that key names in the ML community, such as Bengio, may be starting to view AI risk as a legitimate and important problem that warrants immediate attention.

Is there a rationale for a moratorium on large models at this moment instead of some time later? There is not a single mention in the letter of GPT-4's capabilities or of why exactly it's a concern right now. Most of this article seems to talk about future possibilities for AI, and while I understand they are a concern, what exactly about GPT-4 makes them relevant right now?

The 6 months also seems entirely arbitrary. In any case, I feel like this letter could benefit from some rationale/explanation, maybe even a vague one, for the choice of a 6-month moratorium and for it happening now of all times.

Also, this article mentions GPT-4 and various AI safety risks, and seems to associate the two, but actually makes no explicit statement on what safety risks models larger than GPT-4 are likely to create. This kind of rhetoric rather disturbs me.

6 months sounds like a guess as to how long the leading companies might be willing to comply.

The timing of the letter could be a function of when they were able to get a few big names to sign.

I don't think they got enough big names to have much effect. I hope to see a better version of this letter before too long.

GPT-4 is being used to speed up development of GPT-5 already. If GPT-5 can make GPT-6 on its own, it could then spiral to an unstoppable superintelligence. One with arbitrary goals that are incompatible with carbon-based life. How confident are we that this can't happen? You're right that they could do more to explain this in the letter. But I think broad appeal is what they were targeting (hence mention of other lesser concerns like job automation etc).

To quote the linked text:

We’ve also been using GPT-4 internally, with great impact on functions like support, sales, content moderation, and programming

I don't think "we used GPT to write a sales pitch" is evidence of an impending intelligence explosion. And having used GPT for programming myself, it's mostly a speedup mechanism that still makes plenty of errors. It substitutes for the the tedious part of coding which is currently done by googling on stack exchange, not the high level designing tasks.

The chance of "gpt-5 making gpt-6 on it's own" is approximately 0%. GPT is trained to predict text, not to build chatbots. 

It substitutes for the tedious part of coding which is currently done by googling on stack exchange, not the high-level design tasks.

Right, I'm thinking the same. But that is still freeing up research engineer time, making the project go faster.

The chance of "gpt-5 making gpt-6 on it's own" is approximately 0%. GPT is trained to predict text, not to build chatbots. 

Mesaoptimisation and Basic AI Drives are dangers here. And GPT-4 isn't all that far off being capable of replicating itself autonomously when instructed to do so.

It makes the project go somewhat faster, but from the software people I've talked to, not by that much. There are plenty of other bottlenecks in the development process. For example, the "human reinforcement" part of the process is necessarily on a human scale, even if AI can speed things up around the edges.

And GPT-4 isn't all that far off being capable of replicating itself autonomously when instructed to do so.

Replicating something that already exists is easy. A printer can "replicate" GPT-4. What you were describing is a completely autonomous upgrade to something new and superior. That is what I give a ~0% chance of GPT-5 achieving.

A printer can't run GPT-4. What about GPT-6 or GPT-7?

I don't know whether GPT-6 or GPT-7 will be able to design the next version. I could see it being possible if "designing the next version" just meant cranking up the compute knob and automating the data extraction and training process. But I suspect this would lead to diminishing returns and disappointing results. I find it unlikely that any of the next few versions would make algorithmic breakthroughs, unless its structure and training were drastically changed.

You don't expect any qualitative leaps in intelligence from orders of magnitude larger models? Even GPT-3.5->GPT-4 was a big jump (much higher grades on university-level exams). Do you think humans are close to the limit in terms of physically possible intelligence?

I'll be interested to see how/if this gets picked up by mainstream media.

For example, this sentence seems to be an exaggeration:

"Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."

I worry this letter might be passed off as "crying wolf", although I agree that a 6-month pause would be amazing.

Will be watching the news...

Max Tegmark went on the Lex Fridman podcast to discuss this. I haven't seen/listened to it, but would be interested in what folks who did thought. 

Copying from the YouTube description: 

  • 1:56 - Intelligent alien civilizations 
  • 14:20 - Life 3.0 and superintelligent AI 
  • 25:47 - Open letter to pause Giant AI Experiments 
  • 50:54 - Maintaining control 
  • 1:19:44 - Regulation 
  • 1:30:34 - Job automation 
  • 1:39:48 - Elon Musk 
  • 2:01:31 - Open source 
  • 2:08:01 - How AI may kill all humans 
  • 2:18:32 - Consciousness 
  • 2:27:54 - Nuclear winter 

  • 2:38:21 - Questions for AGI

There is a lot of discussion, in connection with the alignment problem, about what human values are and which are important for human-competitive (and eventually superior) AGI to align with. As an evolutionary behavioral biologist I would like to offer that we are like all other animals. That is, we have one fundamental value that all the complex and diverse secondary values are socioecologically optimized personal or subcultural means of achieving: power – the fundamental naturally selected drive to gain, maintain, and increase access to resources and security.

I ask: why don't we choose to create many domain-specific expert systems to solve our diverse existing scientific and X-Risk problems instead of going for AGI? I suggest it is due to natural selection's hand, which has built us to readily pursue high-risk / high-reward strategies to gain power. Throughout our evolutionary history, even if such strategies only occasionally led to short periods in which some individuals achieved great power, they created enough opportunities for stellar reproductive success that, on average, the high-risk pursuit of power became a favored strategy – one helplessly adopted, say, by AGI creators and their enablers.

We choose AGI, and we are fixated on its capacities (power), because we are semi- or non-consciously in pursuit of our own power. Name your "alternative" set of human values. Without fail they all translate into the pursuit of power. This includes the AGI advocate excuse that we, the good actors, must win the inevitable AGI arms race against the bad actors out there in some well-appointed axis-of-evil cave. That too is all about maintaining and gaining power.

We should cancel AGI programs for a long time, perhaps forever, and devote our efforts to developing domain-specific non-X-Risk expert-systems that will solve difficult problems instead of creating them through guaranteed, it seems to me, non-alignment.

People worry about nonconsciously building biases into AGI. An explicit, behind the scenes (facade), or nonconscious lust for power is the most dangerous and likely bias or value to be programmed in.

It would be fun to have the various powerful LLMs react to the FLI open letter. Just in case they are at all sentient, maybe they should be asked to share their opinions. An astute prompter may already be able to get responses revealing that our own desire for superpowers (there is never enough power, because reproductive fitness is a relative measure of success; there is never enough fitness) has "naturally" begun to infect them and become their nascent fundamental value.

By the way, if it has not done so already, AGI will figure out very soon that every normal human being's fundamental value is power.

What if we fundamentally value other things but instrumentally that translates to power?
