It was subtle, but Microsoft CEO Satya Nadella just said he doesn't believe in artificial general intelligence (AGI).

The interviewer, Dwarkesh Patel, asked this question:

We will eventually have models, if they get to human level, which will have this ability to continuously learn on the job. That will drive so much value to the model company that is ahead, at least in my view, because you have copies of one model broadly deployed through the economy learning how to do every single job. And unlike humans, they can amalgamate their learnings to that model. So there’s this sort of continuous learning exponential feedback loop, which almost looks like a sort of intelligence explosion.

If that happens and Microsoft isn’t the leading model company by that time… You’re saying that well, we substitute one model for another, et cetera. Doesn’t that then matter less? Because it’s like this one model knows how to do every single job in the economy, the others in the long tail don’t.

This was Satya Nadella's response:

Your point, if there’s one model that is the only model that’s most broadly deployed in the world and it sees all the data and it does continuous learning, that’s game, set, match and you stop shop. The reality that at least I see is that in the world today, for all the dominance of any one model, that is not the case. Take coding, there are multiple models. In fact, every day it’s less the case. There is not one model that is getting deployed broadly. There are multiple models that are getting deployed. It’s like databases. It’s always the thing, “Can one database be the one that is just used everywhere?” Except it’s not. There are multiple types of databases that are getting deployed for different use cases.

I think that there are going to be some network effects of continual learning—I call it data liquidity—that any one model has. Is it going to happen in all domains? I don’t think so. Is it going to happen in all geos? I don’t think so. Is it going to happen in all segments? I don’t think so. It’ll happen in all categories at the same time? I don’t think so. So therefore I feel like the design space is so large that there’s plenty of opportunity.

Nadella then steers the conversation toward practical business and technology concerns:

But your fundamental point is having a capability which is at the infrastructure layer, model layer, and at the scaffolding layer, and then being able to compose these things not just as a vertical stack, but to be able to compose each thing for what its purpose is. You can’t build an infrastructure that’s optimized for one model. If you do that, what if you fall behind? In fact, all the infrastructure you built will be a waste. You kind of need to build an infrastructure that’s capable of supporting multiple families and lineages of models. Otherwise the capital you put in, which is optimized for one model architecture, means you’re one tweak away, some MoE-like breakthrough that happens, and your entire network topology goes out of the window. That’s a scary thing.

Therefore you kind of want the infrastructure to support whatever may come in your own model family and other model families.

I looked back to see how Nadella answered the last time he was on Dwarkesh Patel's podcast and Patel asked him about AGI, earlier this year in February 2025. Nadella said something similar then:

This is where I have a problem with the definitions of how people talk about it. Cognitive labor is not a static thing. There is cognitive labor today. If I have an inbox that is managing all my agents, is that new cognitive labor?

Today's cognitive labor may be automated. What about the new cognitive labor that gets created? Both of those things have to be thought of, which is the shifting…

That's why I make this distinction, at least in my head: Don't conflate knowledge worker with knowledge work. The knowledge work of today could probably be automated. Who said my life's goal is to triage my email? Let an AI agent triage my email.

But after having triaged my email, give me a higher-level cognitive labor task of, "Hey, these are the three drafts I really want you to review." That's a different abstraction.

When asked if Microsoft could ever have an AI serve on its board, Nadella says no, though he suggests an AI could be a helpful assistant in Microsoft's board meetings:

It's a great example. One of the things we added was a facilitator agent in Teams. The goal there, it's in the early stages, is can that facilitator agent use long-term memory, not just on the context of the meeting, but with the context of projects I'm working on, and the team, and what have you, be a great facilitator?

I would love it even in a board meeting, where it's easy to get distracted. After all, board members come once a quarter, and they're trying to digest what is happening with a complex company like Microsoft. A facilitator agent that actually helped human beings all stay on topic and focus on the issues that matter, that's fantastic.

That's kind of literally having, to your point about even going back to your previous question, having something that has infinite memory that can even help us. You know, after all, what is that Herbert Simon thing? We are all bounded rationality. So if the bounded rationality of humans can actually be dealt with because there is a cognitive amplifier outside, that's great.

Nadella is a charming interviewee, and he has a knack for framing his answers in a friendly, supportive, and positive way. But if you listen to (or read) what he actually says, he doesn't buy the idea that AGI is coming anytime soon.

In the recent interview, Nadella also expressed skepticism, in his typically polite and cheerful way, about OpenAI's projection that it will make $100 billion in revenue in 2027. Dwarkesh Patel's question was:

Do you buy… These labs are now projecting revenues of $100 billion in 2027–28 and they’re projecting revenue to keep growing at this rate of 3x, 2x a year…

Nadella replied:

In the marketplace there’s all kinds of incentives right now, and rightfully so. What do you expect an independent lab that is sort of trying to raise money to do? They have to put some numbers out there such that they can actually go raise money so that they can pay their bills for compute and what have you.

And it’s a good thing. Someone’s going to take some risk and put it in there, and they’ve shown traction. It’s not like it’s all risk without seeing the fact that they’ve been performing, whether it’s OpenAI, or whether it’s Anthropic. So I feel great about what they’ve done, and we have a massive book of business with these chaps. So therefore that’s all good.

Comments

>Today's cognitive labor may be automated. What about the new cognitive labor that gets created? Both of those things have to be thought of, which is the shifting…

This comment does seem to point to a possible disagreement with the AGI concept. I interpreted some of the other comments a little differently though. For example, 

>Your point, if there’s one model that is the only model that’s most broadly deployed in the world and it sees all the data and it does continuous learning, that’s game, set, match and you stop shop. The reality that at least I see is that in the world today, for all the dominance of any one model, that is not the case. Take coding, there are multiple models. In fact, every day it’s less the case. There is not one model that is getting deployed broadly. There are multiple models that are getting deployed.

I would say humans are general intelligences, but obviously different humans are good at different things. If we had the ability to cheaply copy people like software, I don’t think we would pick some really smart guy and deploy just him across the economy. I guess what Dwarkesh is saying is that continual learning will play out differently in AIs because we’ll be able to amalgamate everything AIs learn into a single model, but I don’t think it’s obvious that this will be the most efficient way to do things.
