tl;dr: There are indications that ML engineers will migrate to environments with less AI governance in place, which has implications for the tech industry and for global AI governance efforts.

=========================

I wanted to raise something to the community's attention about media coverage of AI companies. The source is 'The Information', a tech-business-focused online news outlet. Link: https://www.theinformation.com/. I'll also note that their articles are (to my knowledge) all behind a paywall.

The first article in question is titled "Alphabet Needs to Replace Sundar Pichai".

It outlines how Alphabet's stock has stagnated in 2023 compared to other tech stocks, such as Meta's.

Here's their mention of Google's actions throughout GPT-mania:

"The other side of this equation is the performance of Alphabet management. Most recently, the company’s bungling of its AI efforts—allowing Microsoft to get the jump on rolling out an AI-powered search engine—was the latest sign of how Alphabet’s lumbering management style is holding it back. (Symbolically, as The Information reported, Microsoft was helped by former Google AI employees!)."

This brings us to the second article: "OpenAI’s Hidden Weapon: Ex-Google Engineers"

"As OpenAI’s web chatbot became a global sensation in recent months, artificial intelligence practitioners and investors have wondered how a seven-year-old startup beat Google to the punch.

...

After it hoovered up much of the world’s machine-learning talent, Google is now playing catch-up in launching AI-centric products to the public. On the one hand, Google’s approach was deliberate, reflecting the company’s enormous reach and high stakes in case something went wrong with the nascent technology. It also costs more to deliver humanlike answers from a chatbot than it does classic search results. On the other hand, startups including OpenAI have taken some of the AI research advances Google incubated and, unlike Google, have turned them into new types of revenue-generating services, including chatbots and systems that generate images and videos based on text prompts. They’re also grabbing some of Google’s prized talent.

Two people who recently worked at Google Brain said some staff felt the unit’s culture had become lethargic, with product initiatives marked by excess caution and layers of red tape. That has prompted some employees to seek opportunities elsewhere, including OpenAI, they said."

Although there are many concerning themes here, I think the key point is in this last paragraph.

I've heard speculation in the EA / tech community that AI will trend towards alignment & safety because technology companies will be risk-averse enough to build alignment into their practices.

I think the articles show that this dynamic is playing out to some degree: Google, at least, seems to be taking a more risk-averse approach to deploying AI systems.

The concerning observation is that there has been a two-pronged backlash against Google's 'conservative' approach. Not only is the stock market punishing Google for 'lagging' behind the competition (despite Google having equal or better capability to deploy similar systems); according to this article, elite machine-learning talent is also pushing back against that approach.

To me this is doubly concerning. The 'excess caution and layers of red tape' mentioned in the article are potentially the same types of measures that AI safety proponents would deem useful and necessary. Regardless, it appears that the engineers themselves are willing to jump ship in order to circumvent these safety measures.

Although further evidence would be valuable, there might be a trend unfolding whereby cautious firms are not only punished by financial markets, but are also forced to weigh the risk of being unable to retain ML engineers who would rather work for firms with fewer AI governance measures.

From my limited understanding of industry economics, this dynamic makes sense; I recall reading in Michael Porter's 'Competitive Advantage' that lower-ranked firms are more likely to take actions that damage the overall industry in order to advance their own position in the short term. In this instance, it means Microsoft is pushing the rate of AI deployment in ways that Google considers risky.

Overall, this trend seems to provide another counter-argument to the hypothesis that market incentives will provide sufficient levels of alignment. There are also concerning implications for governance of the global AI ecosystem: if some nations are able to implement effective AI governance policies, will this simply cause a migration of AI talent towards lower-governance zones?

I'd enjoy hearing what other thinking and research has been done on this topic, as it appears to add a new dimension to the already tremendously complex issue of AI safety.

Comments (5)

This is really interesting. I recommend posting it to LessWrong; the people there will probably find it more interesting than here.

Thanks very much for the recommendation, I'll do that now.

Justin -- an important and alarming post; thank you. 

The ability of ML/AI researchers to leave companies & nations that impose tighter controls, regulations, and safety norms, in favor of those that aren't as concerned about AI safety, is truly worrying. And I think it highlights the fact that voluntary, local regulations -- whether adopted by individual companies or nation-states -- will not be effective at slowing risky global AI development. Risky AI research can move wherever risky AI research can thrive.

We need robust, global strategies for slowing AI capabilities development until AI safety catches up. Legal regulations, professional ethics, and social norms aren't enough, because they're too easy to escape, to game, and to pay lip service to.  

The only alternative I can see is promoting a global, informal, but fierce moral stigmatization of AI research for the next few decades -- a stigmatization that will follow AI researchers wherever they go, and that will handicap their ability to impose X risks on everybody else.

Hi Geoffrey,

Thanks for the kind words.

I did have a bit of a think about what the implications are for finding feasible AI governance solutions, and here's my personal take:

If it is true that 'inhibitive' governance measures (perhaps like those that are in effect at Google) cause ML engineers to move to more dangerous research zones, I believe it might be prudent to explore models of AI governance that 'accelerate' progress towards alignment, rather than slow down the progression towards misalignment.

My general argument would be as follows:

If we assume that it will be infeasible to buy out or convince most of the ML engineers on the planet to intrinsically value alignment, then global actors with poor intentions (e.g. imperialist autocracies) will benefit from a system in which well-intentioned actors have created a comparatively frustrating and unproductive environment for ML engineers. That is, not only will they have a more efficient R&D pipeline due to lower restrictions; they may also have a better capacity to hire and retain talent over the long term.

One possible implication of this assertion is that the best course of action is to initiate an AI-alignment Manhattan project that works towards a state of 'stabilisation' in the geopolitical/technology realm. The intention is to change the structure of the AI ecosystem so that it favours 'aligned' AI by promoting progress in that area, rather than accidentally proliferating 'misaligned' AI by stifling progress in 'pro-alignment' zones.

I find this conclusion fairly disturbing and I hope there's some research out there that can disprove it.

Hi Justin, thanks for this reply. Lots to think about. For the moment, just one point:

I worry that EA culture tends to trust the US/Western companies, governments, & culture too much, and is too quick to portray China as an 'imperialist autocracy' that can't be trusted at all, and that's incapable of taking a long view about humanity in general, or about X risks in particular. (Not that this is what you're necessarily doing here; your comment just provoked this mini-rant about EA views of China in general). 

I'm far from a China expert, but I have some experience teaching at a Chinese university, reading a fair amount about China, and following their rise rather closely over the last few decades. My sense is that Chinese government and people are somewhat more likely to value AI alignment than American politicians, media, and voters do. 

And that they have good reasons not to trust any American political or cultural strategy for trying to make AI research safe. They see the US as much more aggressively imperialistic over the last couple of hundred years than China has ever been. They understand that the US fancies itself a representative democracy, but that, in practice, it is, like all stable countries, an oligarchy pretending to be something other than an oligarchy. They see their system as, at least, honest about the nature of its political power; whereas Americans look deluded into thinking that their votes can actually change the political power structure. 

I worry that the US/UK strategies for trying to make AI research safer will simply not be credible to Chinese leaders, AI researchers, or ordinary people, and will be seen as just another form of American exceptionalism, in which we act as if we're the only people in the world who can be trusted to reduce global X risks. From what I've seen so far (e.g. a virtual absence of any serious political debate about AI in the US), China would be right not to trust our capacity to take this problem seriously, let alone to solve it.
