The New York Times: Sundar Pichai, CEO of Alphabet and Google, is trying to speed up the release of AI technology by taking on more risk.

Mr. Pichai has tried to accelerate product approval reviews, according to a presentation reviewed by The Times.

The company established a fast-track review process called the “Green Lane” initiative, pushing groups of employees who try to ensure that technology is fair and ethical to more quickly approve its upcoming A.I. technology.

The company will also find ways for teams developing A.I. to conduct their own reviews, and it will “recalibrate” the level of risk it is willing to take when releasing the technology, according to the presentation.

This change is in response to OpenAI's public release of ChatGPT. It is evidence that the race between Google/DeepMind and Microsoft/OpenAI is eroding ethics and safety.

Demis Hassabis, CEO of DeepMind, urged caution in his recent interview in Time:

He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before.

“When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says.

“Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.”

Worse still, Hassabis points out, we are the guinea pigs.

Alphabet/Google is trying to accelerate a technology that its own subsidiary says is powerful and dangerous.

Update: Sam Altman, CEO of OpenAI, tweeted:

"recalibrate" means "increase" obviously.

disappointing to see this six-week development. openai will continually decrease the level of risk we are comfortable taking with new models as they get more powerful, not the other way around.

Comments

Michael - thanks for summarizing this alarming development.

I suspect that in 50 to 100 years, these tech CEOs, and the AI researchers who worked for them, may be remembered as some of the most reckless, irresponsible, hubristic, and unethical humans who have ever influenced human history.

They have absolutely no democratic mandate from the 8 billion humans they will affect to develop the systems they are developing. They have not made a compelling case to the public that the benefits will exceed the risks. They are paying lip service to AI safety while charging ahead at full speed towards a precipice.

IMHO, EAs should consider focusing a bit more on hitting the pause button on all advanced AI research, and stop pretending that 'technical AI alignment research' will significantly reduce any of the catastrophic risks from these corporate arms races. 

Whatever benefits humanity may eventually derive from AI will still be there for the taking in 100 years, 500 years, 1,000 years. We may not live to see them if AI doesn't solve longevity in our lifetimes. But I'd rather see a future where AI research is paused for a century or two, and our great-grandkids have a fighting chance at survival, than one where we make a foolhardy bet that these AI companies are actually making rational risk/benefit decisions in our collective interests.

(Sorry for the feisty tone here, but I'm frustrated that so many EAs seem to put so much faith in these corporations and their 'AI safety' window dressing.)

Thanks for summarizing/quoting the most important bits of these articles! But also... AHHHH

It's somewhat surprising to me the way this is shaking out. I would expect DeepMind's and OpenAI's AGI research to be competing with one another*. But here it looks like Google is the engine of competition, motivated less by any future-focused ideas about AGI and more by the fact that their core search/ad business model appears to be threatened by OpenAI's AGI research.

*And hopefully cooperating with one another too.

I don't think it's obvious that Google alone is the engine of competition here. It's hard to expect any company to simply do nothing when its core revenue generator is threatened (I'm not justifying them here); they're likely to try to compete rather than give up immediately and work on other ways to monetize. It's interesting that Google's core revenue generator (search) just happens to be a possible application area of LLMs, the fastest-progressing and most promising area of AI research right now. I don't think OpenAI pursued LLMs in order to compete with Google; they pursued them because they're promising. But it's interesting to note that search and LLMs are both bets on language being the thing to bet on.

You're right - I wasn't very happy with my word choice calling Google the 'engine of competition' in this situation. The engine was already in place and involves the various actors working on AGI and the incentives to do so. But these recent developments with Google doubling down on AI to protect their search/ad revenue are revving up that engine.

Thanks for all the comments.

Updated the post with a recent tweet from Sam Altman, CEO of OpenAI (quoted in the update above).

Looks like they were serious! Google announced their MusicLM earlier.

Sabs

"AI is going to be one of the most powerful technologies ever. Absolutely the most powerful. Hey Google don't you think you should book more revenue as coming from us at Deepmind, given that we are inventing the most powerful technologies ever, exclusively for the benefit of you , Google?"

I continue to be staggered at the credence EAs apply to the pronouncements of AI industry executives, when virtually all of them have a massive economic incentive to hype up their products!