This is the full text of a post from "The Obsolete Newsletter," a Substack I write on the intersection of capitalism, geopolitics, and artificial intelligence. I’m a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to Build Machine Superintelligence. Consider subscribing to stay up to date with my work.

I'm in San Francisco until February 27th or so. If you'd like to meet up or tell me about some cool event, email me at tgarrisonlovely [at] gmail [dot] com.

Anthropic CEO Dario Amodei may have divulged a big secret with worrying implications for AI firms like his own and OpenAI.

Last month, Chinese startup DeepSeek released R1, an AI model rivaling OpenAI’s flagship o1 model on key benchmarks but at a fraction of the cost — about 27 times cheaper. This sent shockwaves through the market. AI chip designer Nvidia saw a record $600 billion wiped from its market value on Monday, January 27th, the largest share of nearly $1 trillion in losses concentrated in American AI infrastructure stocks. The narrative quickly emerged: a small Chinese company had matched billion-dollar American models for mere millions, suggesting powerful AI might be far cheaper to develop than previously believed.
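As a rough sketch of where a figure like "about 27 times cheaper" comes from, the arithmetic below uses assumed per-million-token list prices (roughly OpenAI's o1 API price versus DeepSeek's R1 API price at launch); the specific numbers are my assumptions for illustration, not figures taken from this post.

```python
# Minimal sketch of the headline price gap.
# Assumed prices in USD per million output tokens (illustrative, not from the post):
# OpenAI o1 at ~$60, DeepSeek R1 at ~$2.19.
o1_output_price = 60.00
r1_output_price = 2.19

ratio = o1_output_price / r1_output_price
print(f"R1 output tokens are roughly {ratio:.0f}x cheaper than o1's")  # ~27x
```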

[Image: a digital illustration of a massive whale labeled 'DeepSeek' smashing through the business plans of 'OpenAI,' 'Anthropic,' and 'Google' amid a stormy ocean of plunging financial charts.]

In an essay responding to the market panic, Amodei aimed to defend US export controls on advanced AI chips to China. But in doing so, he revealed something striking: DeepSeek's efficiency gains were exactly what we should expect from historical algorithmic progress — suggesting American AI companies have been enjoying healthy profit margins, at least until DeepSeek arrived to massively undercut them.

This assessment resonates with researchers at leading US AI companies. One told me DeepSeek's results "are within the improvement range that we'd expect from standard algorithmic improvement over time." Another was even more dismissive: "I don't think anyone cares very much, it doesn't seem very surprising… Obviously they're talented but nothing about it is unexpected." Notably, none of these reactions came from customer-facing employees.

By offering a comparable model at significantly lower prices, DeepSeek is likely to trigger an AI price war, just as it did in the Chinese market last summer. Lower prices should also boost demand, a dynamic foreshadowed by DeepSeek’s meteoric rise to the top of the iPhone App Store.

Microsoft CEO Satya Nadella recognized this dynamic, tweeting: "Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can't get enough of." The paradox he references: when technologies get more efficient, we often end up using more of them, not less. This commoditization would benefit companies using AI to enhance their existing services, like Microsoft, Meta, and Google.
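Here is a minimal sketch of the dynamic Nadella is invoking, using invented numbers rather than anything from this post: with a constant-elasticity demand curve, a price cut raises total spending whenever the price elasticity of demand exceeds one.

```python
# Toy Jevons-paradox illustration with invented numbers (not data from the post).
# Constant-elasticity demand: quantity = k * price ** (-elasticity).

def total_spend(price: float, k: float = 1_000.0, elasticity: float = 1.5) -> float:
    """Total spending on AI tokens at a given price under a toy demand curve."""
    quantity = k * price ** (-elasticity)
    return price * quantity

before = total_spend(price=10.0)  # expensive tokens, modest usage
after = total_spend(price=1.0)    # 10x cheaper tokens, much heavier usage
print(f"spend before: {before:.0f}, spend after: {after:.0f}")
# With elasticity > 1, total spending rises when the price falls: usage "skyrockets."
```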

However, for pure-AI companies like OpenAI, the implications are troubling. If US companies have already achieved similar efficiencies, then compute costs for these models won’t substantially decrease even as prices fall. This tightens unit economics for AI developers, lengthening their already uncertain path to profitability. OpenAI’s swift release of a “mini” version of its forthcoming o3 model, priced at roughly double DeepSeek’s offering, suggests the company recognizes this threat.

And if Nadella is right about AI's coming status as a commodity, that's very bad news for OpenAI — commodity producers don't typically get valued at dozens of times their revenue.

DeepSeek demonstrates that the “secret sauce” for cutting-edge AI won’t remain secret for long. Months after OpenAI announced its o1 "reasoning" model, competitors have largely replicated its approach and performance. Whether by building on open-sourced innovations or “distilling” closed models, “fast-followers” can match the leaders’ capabilities faster and cheaper.

While AI developers face margin pressure, the outlook for AI infrastructure providers like Nvidia is unclear.

Lower AI prices will likely drive up aggregate demand for computing power, but they will also reduce the profit per chip, which is currently astronomical. The market initially bet on the downside. But early evidence suggests the demand effect may be winning out. Industry research group SemiAnalysis reported that prices to rent Nvidia's flagship H100 chip actually "exploded" after DeepSeek released its V3 model in December, with no slowdown after R1's introduction. "More intelligence for cheaper means more demand," they write.

Major tech companies seem to agree and are still planning massive AI and datacenter spending increases this year.

Public and private markets diverged wildly in their response to DeepSeek. The same week that Nvidia lost nearly 20% of its market cap, SoftBank reportedly sought to invest up to $25 billion in OpenAI at a valuation approaching $300 billion — nearly double where the company was valued just months earlier. The day of the DeepSeek market panic, Nvidia closed at a lower share price than it had in early October, when OpenAI announced its last funding round at a $157 billion valuation.

These contradictory outlooks can't both be right. A world of commodity AI services is fundamentally incompatible with the soaring private market valuations of AI companies that have yet to turn a profit.

(The market seems to have mostly corrected — Nvidia's stock is now only down one percent for the month.)

Ultimately, adoption of AI will continue and likely accelerate, as price drops coincide with significant improvements in the usefulness of the underlying technology. Pure-AI companies are in a long race to turn a profit before their products become commoditized. And DeepSeek just moved the finish line further away.

If you enjoyed this post, please subscribe to The Obsolete Newsletter

Comments



Executive summary: DeepSeek’s ability to produce competitive AI models at a fraction of OpenAI’s cost has intensified price competition, threatening the profitability of US AI firms and accelerating the commoditization of AI.

Key points:

  1. DeepSeek’s disruption: The Chinese startup DeepSeek released an AI model rivaling OpenAI’s at roughly 27 times lower cost, triggering market turmoil and wiping out hundreds of billions in AI-related stock value.
  2. US AI firms under pressure: DeepSeek’s efficiency gains align with expected algorithmic progress, implying that US AI firms had previously benefited from high margins that are now unsustainable.
  3. AI price war and commoditization: Lower prices will boost demand (following Jevons paradox), benefiting companies integrating AI into services (e.g., Microsoft, Google) but harming pure-AI firms like OpenAI that rely on pricing power.
  4. Impact on Nvidia and AI infrastructure: While Nvidia's stock initially plunged, increased demand for AI compute suggests that lower AI costs might still drive higher aggregate spending on infrastructure.
  5. Valuation contradictions: Private markets remain bullish on AI firms (e.g., SoftBank considering a $300B OpenAI valuation), despite public markets reacting negatively, indicating fundamental uncertainty about AI’s profitability.
  6. Long-term challenge: AI adoption will accelerate, but DeepSeek’s low-cost competition pushes profitability further out of reach for US AI companies, making sustained innovation and differentiation critical.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
