
Self-driving cars are not a solved problem, nor are they close to being solved. I will explain further, but since this is a case where the messenger matters more than the message, first listen to Andrej Karpathy tell you the same thing.

Karpathy is as credentialed as they come: he was the lead AI researcher responsible for the development of Tesla’s Full Self-Driving software from 2017 to 2022. (Karpathy also did two stints as a researcher at OpenAI, taught a deep learning course at Stanford, and coined the term “vibe coding”.)

Here’s a long quote from Karpathy’s October 17, 2025 interview with Dwarkesh Patel:

Dwarkesh Patel 01:42:55

You’ve talked about how you were at Tesla leading self-driving from 2017 to 2022. And you firsthand saw this progress from cool demos to now thousands of cars out there actually autonomously doing drives. Why did that take a decade? What was happening through that time?

Andrej Karpathy 01:43:11

One thing I will almost instantly push back on is that this is not even near done, in a bunch of ways that I’m going to get to. Self-driving is very interesting because it’s definitely where I get a lot of my intuitions because I spent five years on it. It has this entire history where the first demos of self-driving go all the way to the 1980s. You can see a demo from CMU in 1986. There’s a truck that’s driving itself on roads.

Fast forward. When I was joining Tesla, I had a very early demo of Waymo. It basically gave me a perfect drive in 2014 or something like that, so a perfect Waymo drive a decade ago. It took us around Palo Alto and so on because I had a friend who worked there. I thought it was very close and then it still took a long time.

For some kinds of tasks and jobs and so on, there’s a very large demo-to-product gap where the demo is very easy, but the product is very hard. It’s especially the case in cases like self-driving where the cost of failure is too high. Many industries, tasks, and jobs maybe don’t have that property, but when you do have that property, that definitely increases the timelines.

For example, in software engineering, I do think that property does exist. For a lot of vibe coding, it doesn’t. But if you’re writing actual production-grade code, that property should exist, because any kind of mistake leads to a security vulnerability or something like that. Millions and hundreds of millions of people’s personal Social Security numbers get leaked or something like that. So in software, people should be careful, kind of like in self-driving. In self-driving, if things go wrong, you might get injured. There are worse outcomes. But in software, it’s almost unbounded how terrible something could be.

I do think that they share that property. What takes the long amount of time and the way to think about it is that it’s a march of nines. Every single nine is a constant amount of work. Every single nine is the same amount of work. When you get a demo and something works 90% of the time, that’s just the first nine. Then you need the second nine, a third nine, a fourth nine, a fifth nine. While I was at Tesla for five years or so, we went through maybe three nines or two nines. I don’t know what it is, but multiple nines of iteration. There are still more nines to go.

That’s why these things take so long. It’s definitely formative for me, seeing something that was a demo. I’m very unimpressed by demos. Whenever I see demos of anything, I’m extremely unimpressed by that. If it’s a demo that someone cooked up just to show you, it’s worse. If you can interact with it, it’s a bit better. But even then, you’re not done. You need the actual product. It’s going to face all these challenges when it comes in contact with reality and all these different pockets of behavior that need patching.

We’re going to see all this stuff play out. It’s a march of nines. Each nine is constant. Demos are encouraging. It’s still a huge amount of work to do. It is a critical safety domain, unless you’re doing vibe coding, which is all nice and fun and so on. That’s why this also enforced my timelines from that perspective.
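
To make the “march of nines” concrete before continuing with the interview: each additional nine of reliability corresponds to a tenfold reduction in failure rate, so getting from a 90% demo to something like 99.999% reliability is several orders of magnitude of remaining work, not a final polish. Here is a toy Python illustration (my own sketch, not anything from the interview):

```python
# Toy illustration of the "march of nines": each extra nine of reliability
# is a tenfold reduction in failure rate, not a small increment.
for nines in range(1, 6):
    reliability = 1 - 10 ** -nines            # 90%, 99%, 99.9%, ...
    failures_per_million = 10 ** (6 - nines)  # failed trips per million trips
    print(f"{nines} nine(s): {reliability:.5f} reliable, "
          f"{failures_per_million:,} failures per million trips")
```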

Karpathy elaborated later in the interview:

The other aspect that I wanted to return to is that self-driving cars are nowhere near done still. The deployments are pretty minimal. Even Waymo and so on has very few cars. They’re doing that roughly speaking because they’re not economical. They’ve built something that lives in the future. They’ve had to pull back the future, but they had to make it uneconomical. There are all these costs, not just marginal costs for those cars and their operation and maintenance, but also the capex of the entire thing. Making it economical is still going to be a slog for them.

Also, when you look at these cars and there’s no one driving, I actually think it’s a little bit deceiving because there are very elaborate teleoperation centers of people kind of in a loop with these cars. I don’t have the full extent of it, but there’s more human-in-the-loop than you might expect. There are people somewhere out there beaming in from the sky. I don’t know if they’re fully in the loop with the driving. Some of the time they are, but they’re certainly involved and there are people. In some sense, we haven’t actually removed the person, we’ve moved them to somewhere where you can’t see them.

I still think there will be some work, as you mentioned, going from environment to environment. There are still challenges to make self-driving real. But I do agree that it’s definitely crossed a threshold where it kind of feels real, unless it’s really teleoperated. For example, Waymo can’t go to all the different parts of the city. My suspicion is that it’s parts of the city where you don’t get good signal. Anyway, I don’t know anything about the stack. I’m just making stuff up.

Dwarkesh Patel 01:50:23

You led self-driving for five years at Tesla.

Andrej Karpathy 01:50:27

Sorry, I don’t know anything about the specifics of Waymo. By the way, I love Waymo and I take it all the time. I just think that people are sometimes a little bit too naive about some of the progress and there’s still a huge amount of work. Tesla took in my mind a much more scalable approach and the team is doing extremely well. I’m kind of on the record for predicting how this thing will go. Waymo had an early start because you can package up so many sensors. But I do think Tesla is taking the more scalable strategy and it’s going to look a lot more like that. So this will still have to play out and hasn’t. But I don’t want to talk about self-driving as something that took a decade because it didn’t take it yet, if that makes sense.

Dwarkesh Patel 01:51:08

Because one, the start is at 1980 and not 10 years ago, and then two, the end is not here yet.

Andrej Karpathy 01:51:14

The end is not near yet because when we’re talking about self-driving, usually in my mind it’s self-driving at scale. People don’t have to get a driver’s license, etc.

Now that I’ve established that Karpathy’s expert opinion backs me up, I will get into the details.

Do self-driving car companies believe self-driving cars are a solved problem?

The self-driving car startup Cruise Automation was once considered a frontrunner in the race to fully autonomous vehicles and was valued at $30 billion, with billions in investment from SoftBank, Honda, Microsoft, and its parent company, General Motors. As of 2023, Cruise had 400 autonomous vehicle prototypes operating in San Francisco and 200 in Texas and Arizona. In San Francisco, a robotaxi service was operated for Cruise employees during the day and for the general public late at night.

In December 2024, Cruise threw in the towel. The robotaxi endeavour was dead. General Motors absorbed whatever remained of Cruise to apply it to its advanced driver assistance systems (ADAS) and maybe, possibly, one day fully autonomous vehicles that could be purchased by individuals rather than hailed as a robotaxi. Cruise Automation did not believe self-driving cars were a solved problem as of December 2024. At least, General Motors did not.

Cruise is not an outlier, but part of a trend. In 2021, Uber gave up on its self-driving car ambitions and sold its self-driving car unit, Uber Advanced Technologies Group or Uber ATG. The remnants of Uber ATG were sold to a self-driving car company called Aurora Innovation. Aurora, for its part, has abandoned trying to make self-driving cars work and has pivoted to self-driving semi trucks. Earlier this year, one of Aurora’s co-founders left the company to work on ADAS at General Motors.

Aurora is not likely to have better luck with autonomous semi trucks. In 2023, Waymo abandoned its autonomous semi truck program. The startup Pronto AI also tried to make autonomous semi trucks work, but pivoted to working on autonomous off-road trucks that haul rocks at mines.

Many other self-driving car companies have given up or run out of money, such as Voyage, Argo, Drive.ai, and Ghost Locomotion. Sometimes whatever remains of the company is sold to an acquirer that itself subsequently throws in the towel. Lyft sold its Level 5 division to Toyota. Apple shut down Project Titan. Everywhere you look, self-driving car companies and divisions are not working out.

Which brings us, of course, finally, to Waymo. Does Waymo believe it has solved self-driving? In short, no. Look at what Waymo is actually doing. As of August, Waymo had 2,000 vehicles in its commercial fleet, and perhaps a bit more in its test fleet. For comparison, five years ago, Waymo had 600 vehicles (possibly only in its commercial fleet or possibly combining the commercial and test fleets).

The smallness of Waymo’s fleet is not due to constraints on producing more vehicles. In 2018, Waymo and Jaguar announced a deal where Jaguar would produce up to 20,000 I-Pace hatchbacks for Waymo. The same year, Waymo and Chrysler announced a similar deal for up to 62,000 Pacifica minivans. Yet, seven years later, the fleet is only around 2,000 vehicles. Waymo only plans to build around 2,000 more vehicles in 2026. If self-driving is a solved problem, why not deploy a lot more vehicles?

It’s also worth considering that Alphabet, Waymo’s parent company, decided to open up Waymo to external investment, rather than continue to fully fund Waymo itself. Alphabet could certainly afford it, given that it has $98 billion in cash and short-term investments. If self-driving cars were really solved, they would be an enormous financial opportunity. By bringing in outside investors, Alphabet is reducing its ownership stake in Waymo and thereby reducing its ability to profit from that opportunity.

One potentially benign explanation is that Alphabet simply wanted to put a price on Waymo’s equity so that its employees would know the value of their shares and potentially cash out. (According to Hiive, a marketplace for selling shares in private companies, Waymo shares are currently worth about $280.) However, it would have been sufficient to do just one round of outside investment. Waymo has done four rounds of funding since 2020, with the latest in 2024, bringing in billions in outside capital.

Alphabet doesn’t disclose its ownership stake in Waymo. Things are further complicated by the fact that Alphabet led the most recent investment round. However, if Alphabet provided half of the capital in the most recent round, my math is that Alphabet has diluted its stake in Waymo down to around 75%. (I’m assuming $5.55 billion was invested in 2020 and 2021 at a $30 billion valuation, giving outside investors ownership of 18.5% of Waymo. I’m assuming $2.8 billion was contributed by outside investors in 2024 at a $45 billion valuation, giving them another 6.2% of Waymo.)
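
For what it’s worth, here is the back-of-the-envelope arithmetic behind that 75% figure as a small Python sketch. Every number in it is one of the assumptions stated above, not a disclosed figure, and it deliberately ignores complications such as later rounds diluting earlier outside investors:

```python
# Back-of-the-envelope Waymo dilution estimate.
# All inputs are the assumptions stated in the text, not disclosed figures.

outside_2020_2021 = 5.55 / 30.0  # ~$5.55B from outside investors at a ~$30B valuation -> ~18.5%
outside_2024 = 2.8 / 45.0        # assume outside investors supplied ~$2.8B of the
                                 # 2024 round at a ~$45B valuation -> ~6.2%

alphabet_stake = 1.0 - outside_2020_2021 - outside_2024

print(f"Outside stake from 2020-21 rounds: {outside_2020_2021:.1%}")
print(f"Outside stake from 2024 round:     {outside_2024:.1%}")
print(f"Implied Alphabet stake:            {alphabet_stake:.1%}")  # roughly 75%
```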

Another logical question to ask: does Waymo actually say self-driving is a solved problem? The answer is not exactly clear. When speaking or writing publicly, people at Waymo are cagey and choose their words carefully, saying outright neither that self-driving is a solved problem nor that it is an unsolved problem. In a June interview, Waymo’s co-CEO Dmitri Dolgov was asked, “How long do you think it will be before we get to full Level 5 autonomy?” Dolgov gave a guarded answer:

A simpler way to think about autonomy is this: either you need a human driver behind the wheel, or you don’t. Everything else is nuance.

But if we stick with the Society of Automotive Engineers (SAE) framework, Waymo currently operates on Level 4—vehicles that operate without human intervention, within specific domains. Level 5, by comparison, means full autonomy anywhere, in any condition a human could drive—from remote mountain roads to extreme winter weather conditions, with no operational limitations.

I tend to avoid making specific predictions about Level 5 timelines. While additional advances in AI will enable us to gradually expand our operational domains, the real challenge isn’t tech capabilities—it’s building confidence that the technology can reliably manage any situation, in any condition. Our focus is on safely scaling our service and delivering real value to the riders who rely on it every day.

Dolgov acknowledges that Waymo still hasn’t solved Level 5 autonomy and he seems to indicate it won’t be solved soon. On the topic of weather conditions, an October blog post by Waymo notes that snow poses an ongoing challenge.

There are more statements we can analyze. A 2024 Waymo blog post quotes a VP of Engineering who says, “The problem we’re trying to solve is how to build autonomous agents that navigate in the real world.” That’s a curiously present-tense statement. Similar statements can be found throughout the post, such as:

What makes our work particularly compelling – and challenging – is building state-of-the-art models that handle the full complexity of real-world driving, a social task that at scale necessarily encompasses many long-tail scenarios. From erratic behavior of fellow road users to rapidly changing weather conditions, our goal is to build a system that consistently and reliably handles these edge cases.

This implies the goal of reliably handling edge cases has not yet been achieved. The post also advises, “The most interesting work for Waymo in AI is still ahead, as we continue to scale the Waymo Driver.” It ends with a call for applicants who are “passionate about solving some of AI’s biggest challenges in autonomous vehicles, robotics, and beyond”, which implies unsolved challenges.

These implications are consistent with Waymo’s most in-depth technical presentations. In a 2021 talk, Drago Anguelov, a Vice President at Waymo, formerly its Head of Research and now head of its AI Foundations Team, described one aspect of the self-driving car problem this way (emphasis added):

What’s also really great in our autonomous vehicle domain that some other robotics domain do not share is that when we drive, we capture the behavior of all these humans that do the same task that we’re trying to perfect. You can capture 10 to hundreds of humans driving, and showing you how they do it. Of course, some are real experts, and some not quite so much. But it’s all very useful signal for machine learning.

At the same time, there is a requirement for robustness. You need to be able to also do reasonable things in very rare cases of the kinds I showed you. And so, in that case, it does help for machine learning to be complemented with expert domain knowledge. Because having machine learning deal robustly with cases when there’s almost no examples is still an open research problem. And so, an autonomous driving stack needs to be designed to leverage as fully as possible these trends in machine learning, while mitigating its weaknesses. And that’s how we’ve tried to build our stack.

This is still an open research problem. Deep learning remains highly data inefficient, deep reinforcement learning remains highly sample inefficient, and generalizing beyond the training data to rare edge cases is something AI models still struggle badly with.

In subsequent talks in 2023 and 2024, Anguelov revisits this topic. He outlines some of the ongoing challenges with using machine learning in an autonomous driving context, such as the perception system’s difficulty detecting unfamiliar objects and the problems with using imitation learning and reinforcement learning for planning in unfamiliar scenarios. He describes some of the techniques Waymo is trying. He doesn’t give me the impression he thinks the problem is solved. Anguelov also reiterates machine learning’s difficulties with edge cases in a 2024 podcast interview.

Vincent Vanhoucke, a Waymo engineer, gave an interview in February where he also emphasized the difficulty of solving for rare edge cases:

Now the big challenges really are about, essentially, scaling. All the issues that have to do with what happens when you drive millions of miles. The long tail of problems that you have to deal with at that scale kind of dominates the equation of what you have, what problems you have to solve, right? You can imagine, if you as a driver, you experience a thing maybe once in your lifetime, we will experience that thing pretty much every week, or maybe every month, right? And so all of the things that are exceptional and weird and difficult are essentially becoming common occurrences for us and are kind of putting pressure on our scaling. So, solving for this long tail is really what we’re focused on and what we’re hoping that, you know, AI and sort of large model capabilities can help us accelerate.

When asked when he thinks robotaxis will do more driving than human drivers in the U.S., Vanhoucke’s reply is conservative:

I would love when I’m an old grandpa to be able to talk to my grandkids and tell them in my day we used to drive cars by hand. Can you believe this? Isn’t that crazy? I feel like there is a potential future in which you look back at today and think, man, we were crazy to leave cars in the hands of humans, given the level of accidents that this generates and the complexity of the problem. So, that’s a future I would love to see. Whether it’ll happen in my lifetime, I don’t know.

So, overall, I don’t get the impression that Waymo is saying that self-driving cars are a solved problem, and they certainly aren’t acting like it.

The major caveats to Waymo’s success

Waymo’s progress on autonomous driving has certainly been interesting, and its gradual removal of safety drivers from its vehicles, which started in 2020, is what makes it stand out from other companies. It’s remarkable how far Waymo has been able to take its proof of concept of self-driving tech. However, there are major caveats to Waymo’s success that are hidden from view. The most important one is that Waymo’s vehicles are not fully autonomous but require remote human intervention to function. Waymo calls this human intervention “fleet response”. If a vehicle doesn’t know which lane to take, for example, it can stop, phone home, and get a human to pick the correct lane for it. To Waymo’s credit, the vehicles apparently handle driving well enough that they very rarely crash before requesting a fleet response, and they know when they need to request one, so humans can intervene reactively rather than proactively. That’s an impressive accomplishment, although, it must be said, it’s not full autonomy.

The main caveat is that there are humans in the loop, giving the Waymos input on what to do, at times even going so far as to draw a path for the Waymo to take. There are a few other caveats, or at least potential caveats. It is hard to say as much about these because, while Waymo has publicly disclosed some information about its “fleet response” — although not much detail — it is tight-lipped about basically every technical aspect of its autonomous driving pilot operations. We know that Waymo uses street-level geofencing, restricting exactly where its vehicles can and can’t drive. We know that Waymo uses high-definition maps that are frequently updated with details like where there are construction zones, possibly so that vehicles can route around problems they can’t reliably handle. Geofencing and HD maps allow Waymo to carve out the easiest streets to drive on, avoiding anywhere that’s too difficult for its vehicles to handle.

I strongly suspect, although I don’t recall seeing definite confirmation, that there is a whole lot of special casing going on with Waymo’s driving software and deep learning systems. That is to say, every detail of every street, every block, and every neighbourhood in every city where Waymo operates gets a lot of attention from software and AI engineers. Doing this for the whole world would be impractical; this approach isn’t scalable or generalizable. Now, it seems like Waymo is probably able to do less special casing with each new area it expands to than it did for the last, thanks to the accumulation of software and neural network training over time. However, that’s not to say it will ever reach a tipping point where the amount of special casing required to expand into new areas is so small that it’s no longer a practical barrier.

For the past three or four years (if not a little longer), Tesla has been ideally situated to prove out the thesis that with enough neural network training and software development, drawing on enough data and experience, vehicles will be able to achieve generalizable, scalable autonomy. I believed this approach was extremely promising and I was supremely excited and optimistic about it. It turns out not to have worked. The problem is more fiendish and complex than I realized. It’s too bad. It would be great to have autonomous cars.

Tesla is now rolling out a small robotaxi pilot, but I don’t think there is any technical purpose to doing so. I don’t see what advantage or benefit there would be from a research and development perspective to doing a robotaxi pilot as opposed to the sort of testing Tesla has already been doing for years. It seems to be motivated by other reasons, such as looking good to investors, or, perhaps, more generously, focusing Tesla’s employees on a more concrete goal.

I think Elon Musk’s judgment started to worsen sometime around 2020. He was always mercurial, somewhat impulsive, prone to setting unrealistic timelines, and at times needlessly combative. However, I held out hope that his positive qualities, which helped lead Tesla and SpaceX to tremendous success, would win out and he would rein in his worst tendencies. Sadly, the opposite happened, and now I no longer trust his leadership of Tesla. Even before his turn to the dark side, Elon did a lot to damage his credibility, perhaps most of all on timelines for fully autonomous driving. At this point, I don’t think Elon has any credibility at all, on anything.

All this to say, while all companies working in autonomous driving, including Waymo, should be subject to a high level of scrutiny and skepticism, I think Tesla should be treated with blanket distrust. I don’t trust Tesla not to lie about its safety numbers, for example. I don’t trust Tesla not to cover up crashes, silence victims, critics, or journalists, or use the Trump administration’s corruption to avoid sensible oversight from regulators. The robotaxi pilot feels like an unnecessary stunt. If Tesla ends up pulling the safety drivers out of the pilot robotaxis, I don’t trust that to be safe, and I don’t trust Tesla to have followed a reasonable, careful process to arrive at that decision. I hope that regulators and lawmakers — and journalists, and any employees brave enough to become whistleblowers — watch Tesla like a hawk and act to put a stop to reckless behaviour. We should not trust anything going on at Tesla with regard to vehicular autonomy.

But back to the main point: despite being set up for success in an ideal way for several years now, Tesla’s progress on autonomous driving has been dismayingly slow, minor, and incremental. Tesla’s attempt has discredited the idea that enough scale or general enough deployment will reach a tipping point where a general-purpose, scalable solution will be achieved. Tesla had the best opportunity to try it that you could ask for. Everything was in place. And it didn’t work. It’s therefore highly unlikely that it will work any better for Waymo, especially since it may not be practical for Waymo to achieve even 1% of Tesla’s scale.

Broader implications

It is a misunderstanding to think Waymo has solved autonomous driving. Waymo itself doesn’t even say it has solved autonomous driving, and it needs humans in the loop to make its pilot programs work. If you don’t believe me that self-driving isn’t a solved problem, believe Andrej Karpathy, who is one of the world’s foremost experts on the subject.

The path forward for Waymo doesn’t look promising. Tesla has shown that scale doesn’t confer the benefits that many (including me) had hoped. Waymo may be doomed to burn capital forever, or else go the way of Cruise and so many other self-driving car startups. (Unless, of course, there is a surprise breakthrough. But we can’t count on that.)

I wish I were wrong. I don’t like that cars bring as much danger as they do. I hate and mourn the loss of life from car crashes. Some friends of mine tragically died this way. It’s a terrible thing. I wish we had a solution to autonomous driving, or that we were on the cusp of one. It would be fantastic. It would be one of the best things to happen in the world in my lifetime. But, sadly, we aren’t there yet.

Self-driving cars are an important enough topic on their own. However, they also provide an object lesson for the recent fevered discussion around artificial general intelligence, or AGI. If AGI were imminent, there would be no plausible reason for self-driving cars not to be solved already. The self-driving car industry is also a stark example of business leaders and engineers giving short timelines for hugely ambitious AI goals and then companies whizzing straight past those timelines into insolvency or fire-sale acquisitions. On a technical level, self-driving is one of the most important empirical examples we have of the limitations of deep learning when it comes to real-world complexity, despite deep learning otherwise being teed up to do what it does best.

We have already started to cross into the timeframe where some of the megaphones of AGI fever are proving to be false prophets. Dario Amodei, the CEO of Anthropic, predicted in the first quarter of this year that AI would write 90% of software code by the last quarter of this year. Nothing close to that happened. That 90% threshold wasn’t even met at Anthropic itself, which you’d think would be the earliest adopter.

Too frequently, I see people cite self-driving as evidence that AGI will be achieved soon. If deep learning can solve self-driving cars, why can’t it solve everything else? But the lesson is actually the opposite. Deep learning can’t solve self-driving cars, at least not yet, given the limitations of current techniques, and we should therefore be skeptical of it solving comparably complex real-world problems anytime soon.

Sociologically and from an industry perspective, the self-driving fever (bubble?) that burned from around 2017, when Tesla launched its Hardware 2 platform and hired Andrej Karpathy, to 2024, when Cruise Automation shut down, is almost a one-to-one match for what’s happening now with generative AI and AGI hype. People in respectable positions in industry, or others who are knowledgeable and reputable, can confidently name a year when something will happen — 2020 or 2022 or 2024 — it can come not even close to happening, and then the world just moves on. Dario Amodei’s dud prediction about AI coding is the first prominent example of that happening (at least that I’m aware of) for this current era of generative AI fever. The next one will probably be Elon Musk’s absurd prediction that the next version of Grok, Grok 5, will be AGI or “something indistinguishable from” it. Grok 5 is supposed to launch in early 2026.

The most intense phase of self-driving car fever lasted about seven years, although the fever preceded that somewhat (e.g. Uber ATG started working on self-driving cars in 2015) and continues to some extent to this day. I expect that in five to ten years, we’ll be in a similar state of underwhelming results — relative to current expectations, though not necessarily in absolute terms — with generative AI and the current AGI fever. ChatGPT is a great search engine that allows for genuinely semantic search using advanced natural language processing, which has been a dream at Google and elsewhere for a long time. By most accounts, Tesla’s Autopilot and misnamed “Full Self-Driving” software are great highway driver assistance systems. There is genuine innovation and there are genuinely useful products in both cases. But the grand, transformative vision of fully self-driving cars hasn’t been achieved, and the idea of AI knowledge workers substituting for humans in complex jobs won’t be realized within the next five to ten years either.

Comments

Interesting take, very American-centric though, while China has about as many self-driving cars on the road as the US, more companies in the game, and faster scale-up plans. With less extreme regulation, why would Chinese makers not accelerate the takeoff here, and maybe even take over like they are with electric cars?

Good question. I’m less familiar with the self-driving car industry in China, but my understanding is that the story there has been the same as in the United States. Lots of hype, lots of demos, lots of big promises and goals, very little success. I don’t think plans count for anything at this point, since there’s been around 6-10 years of companies making ambitious plans that never materialized.

Regulation is not the barrier. The reason why self-driving cars aren’t a solved problem and aren’t close to being a solved problem is that current AI techniques aren’t up to the task; there are open problems in fundamental AI research that would need to be solved for self-driving to be solved. If governments can accelerate progress, it’s in funding fundamental AI research, not in making the rules on the road more lenient.

Seeing the amount of private capital wasted on generative AI has been painful. (OpenAI alone has raised about $80 billion and the total, global, cumulative investment in generative AI seems like it’s into the hundreds of billions.) It’s made me wonder what could have been accomplished if that money had been spent on fundamental AI research instead. Maybe instead of being wasted and possibly even nudging the U.S. slightly toward a recession (along with tariffs and all the rest), we would have gotten the kind of fundamental research progress needed for useful AI robots like self-driving cars.


Some have claimed that the data center build out is what saved the US from a recession so far. Interestingly, this is valuing the data centers by their cost. The optimists say that the eventual value to the US economy will be much larger than the cost of the data centers, and the pessimists (e.g. those who think we are in an AI bubble) say that the value to the US economy will be lower than the cost of the data centers.
