
Summary: A BOTEC indicates that OpenAI might have been valued at 220-430x their annual recurring revenue, which is high but not unheard of. Various factors make this multiple hard to interpret, but it generally does not seem consistent with investors believing that OpenAI will capture revenue commensurate with creating transformative AI.

Overview

Epistemic status: revenue multiples are intended as a rough estimate of how much investors believe a company is going to grow, and I would be surprised if my estimated revenue multiple was off by more than a factor of 5. But the "strategic considerations" portion of this is a bunch of wild guesses that I feel much less confident about.

  1. There has been some discussion about how much markets are expecting transformative AI, e.g. here. One obvious question is "why isn't OpenAI valued at a kajillion dollars?"
  2. I estimate that Microsoft's investment implicitly valued OAI at 220-430x their annual recurring revenue. This is high: average multiples are around 7x, though some pharmaceutical companies have multiples > 1000x.[1] This would seem to support the argument that investors think that OAI is exceptional (but not "equivalent to the Industrial Revolution" exceptional[2]).
  3. However, Microsoft received a set of benefits from the deal which make the EV multiple overstated. Depending on how these are accounted for, I can see the actual implied multiple being anything from -2,200x to 3,200x.
    1. (Negative multiples imply that Microsoft got more value from access to OAI models than the amount they invested and are therefore willing to treat their investment as a liability rather than an asset.)
  4. One particularly confusing fact is that OAI's valuation appears to have gone from $14 billion in 2021 to $19 billion in 2023. Even ignoring anything about transformative AI, I would have expected the success of ChatGPT etc. to have resulted in more than a 35% increase.
  5. Qualitatively, my guess is that this was a nice but not exceptional deal for OAI, and I am confused about why they took it. One possible explanation is “the kind of people who can deploy $10B of capital are institutionally incapable of investing at > 200x revenue multiples”, which doesn’t seem crazy to me. Another explanation is that this basically guarantees them a massive customer (Microsoft), and they are willing to give up some stock to get that customer.
  6. Squiggle model here; a rough Python sketch of the headline calculation also appears just after this list.
  7. It would be cool if someone did a similar write-up about Anthropic, although publicly available information on them is slim. My guess is that they will have an even higher revenue multiple (maybe infinite? I'm not sure if they had revenue when they first raised).
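
For concreteness, here is a minimal Python sketch of the headline calculation. It is not the linked Squiggle model: the $44M-$86M ARR bracket is reverse-engineered from the 220-430x range purely for illustration, and the central figure is the ~$56M/year run rate derived in the Details section below.

```python
# Minimal sketch of the headline BOTEC, not a reproduction of the Squiggle model.
# The low/high ARR bracket is reverse-engineered from the 220-430x headline range;
# the central figure is the ~$56M/year run rate derived in the Details section.
pre_money_valuation = 29e9 - 10e9   # $10B invested at a $29B post-money valuation -> $19B pre-money

arr_estimates = {"low ARR": 44e6, "central ARR": 56e6, "high ARR": 86e6}
for label, arr in arr_estimates.items():
    print(f"{label:>11}: implied multiple ~{pre_money_valuation / arr:.0f}x")
# -> roughly 430x, 340x, and 220x, bracketing the headline range
```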

Details

  • Valuation: $19B
    • A bunch of news sites (e.g. here) reported that Microsoft invested $10 billion, valuing OAI at $29 billion. I assume that this valuation is post-money, meaning the pre-money valuation is $19 billion.
    • However, this site says that they were valued at $14 billion in 2021, meaning that their value only increased by about 35% over the past two years. This seems weird, but I guess it is consistent with the view that markets aren’t valuing the possibility of TAI.
  • Revenue: ~$56M/year
    • Reuters claims they are projecting $200M revenue in 2023. 
    • FastCompany says they made $30 million in 2022.
    • If the deal closed in early 2023, then presumably their annualized run rate at the time (12x monthly revenue) was higher than $30 million, though it's unclear by how much.
    • Let’s arbitrarily say MRR will increase 10x this year, implying a monthly growth factor of 10^(1/12) ≈ 1.21
    • Solving the geometric series 200 = x * (r^12 - 1) / (r - 1) with r = 10^(1/12), their first-month revenue is about $4.7M, a run rate of roughly $56M/year (see the first sketch after this list)
  • Other factors:
    • The vast majority of the investment is going to be spent on Microsoft services. Azure supposedly has a 30% profit margin, although I would not be that surprised to learn that this deal involved Microsoft selling services below their nominal price. But this also presumably has some strategic value for Azure, so I’m just going to treat it as a 30% reduction in the effective cost of the capital Microsoft is putting in.
    • Profit: Microsoft gets 75% of OAI profit until it recoups its investment, then gets 49%. Since a $10 billion investment at a $29 billion post-money valuation should only have given them a ~34% stake, this could be seen as actually valuing OAI at something like (0.34/0.49) * $19B ≈ $13.2B.
    • Profit cap: Importantly, there is some profit cap, although the amount has not been disclosed as far as I can tell. Their founding announcement says “Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress.” I don't really know how to value this, and I also don’t know what the actual limit was for this round, apart from the “expectation” that it’s <100x. I don't think there has ever been a historical instance of a $10 billion investment generating >100x returns. Also, I'm not sure if this cap limits the value of their shares, their profit, or both. I would guess that the Microsoft analysts working on this deal didn't change their valuation estimate of OpenAI very much based on this cap, but who knows.
  • Strategic value to Microsoft: 
    • Microsoft gets access to use OpenAI models in a bunch of its products as part of this deal. This seems really hard to estimate, but I will try to make up some numbers.
    • In this section I’m going to assume a 30% annual discount rate because the future is uncertain. This number is made up.
    • Bing
      • “Search and news advertising revenue increased $100 million or 3%. Search and news advertising revenue excluding traffic acquisition costs increased 10% driven by higher search volume and the Xandr acquisition.” - earnings release
      • Microsoft bragged about passing 100 million DAUs for the first time, but this honestly looks to me like pretty normal growth? Q3 was their slowest growing quarter in the past year. 
      • So I don’t know, say chat is responsible for 20-70% of this revenue increase?
    • Office
      • Office drives a lot more revenue than search for Microsoft, and my subjective impression is that AI could be extraordinarily powerful for their sales, but there aren't hard numbers to go off of
      • I'm just going to say that using OAI models could increase their growth rate by 0-2x for one year, and value this at 9.8 times revenue (Microsoft’s current multiple)
      • … this alone is worth more than $10 billion
    • Azure
      • This post infers from Microsoft's public statements that Azure ML currently has a $900 million run rate and that the number of customers (though not necessarily the amount of revenue) has 10x’d quarter over quarter.
      • So maybe say this is another $1-2 billion of revenue? (A rough sketch of how these strategic-value guesses might combine appears after this list.)
  • Strategic value to OpenAI
    • All of the above instances of strategic value to Microsoft are arguably also instances of strategic value to OpenAI: Microsoft will presumably need to pay OAI some fee each time it uses one of their models, and it's unclear which party gains more from that trade.
    • For simplicity I'm just going to say that OAI stands to gain a similar amount to Microsoft and not model this separately
  • Net value
    • I honestly have no clue which entity stands to benefit more from this partnership
    • I’m just going to model this as a normal distribution centered around 0, though this is very made up
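
As referenced in the Revenue bullet above, here is a minimal Python sketch of the two quantitative steps in the Details section: backing a run rate out of the $200M projection, and adjusting the implied valuation for Microsoft's 49% profit share. The 10x growth assumption and the 34%/49% stakes are taken from the post; everything else is arithmetic.

```python
# Sketch of the revenue back-out and stake adjustment described above.
# Assumes revenue grows 10x over the 12 months of 2023 (the post's arbitrary guess).
projected_2023_revenue = 200e6
r = 10 ** (1 / 12)                        # monthly growth factor, ~1.21

# 2023 revenue is a geometric series: x * (r**12 - 1) / (r - 1) = 200M.
first_month = projected_2023_revenue * (r - 1) / (r**12 - 1)
run_rate = 12 * first_month
print(f"first-month revenue ~${first_month / 1e6:.1f}M, run rate ~${run_rate / 1e6:.0f}M/year")

# Implied revenue multiple at the $19B pre-money valuation.
pre_money = 19e9
print(f"implied multiple ~{pre_money / run_rate:.0f}x")

# Adjustment for Microsoft's 49% profit share: $10B at a $29B post-money
# valuation "should" buy only ~34%, so scale the valuation by the stake ratio.
nominal_stake = 0.34                      # ~ $10B / $29B
adjusted_valuation = (nominal_stake / 0.49) * pre_money
print(f"stake-adjusted valuation ~${adjusted_valuation / 1e9:.1f}B, "
      f"multiple ~{adjusted_valuation / run_rate:.0f}x")
```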
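
A second, much rougher sketch of how the strategic-value guesses above might combine (referenced in the Azure bullet). The Bing, Office-uplift, and Azure ranges are the post's; the placeholder Office revenue and baseline growth rate are made-up assumptions of mine, Azure revenue is counted without a multiple, and applying a single year of 30% discounting is my own simplification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
discount_rate = 0.30                     # the post's assumed annual discount rate

# Bing: 20-70% of the ~$100M increase in search and news advertising revenue.
bing = rng.uniform(0.2, 0.7, n) * 100e6

# Office: 0-2x uplift to one year of growth, valued at 9.8x the extra revenue.
office_revenue = 45e9                    # placeholder annual Office revenue, NOT from the post
office_baseline_growth = 0.10            # placeholder baseline growth rate, NOT from the post
office = rng.uniform(0.0, 2.0, n) * office_baseline_growth * office_revenue * 9.8

# Azure ML: another $1-2B of revenue, counted at face value (an interpretation choice).
azure = rng.uniform(1e9, 2e9, n)

# Discount one year at 30% (a simplification; the post only names the rate).
strategic_value = (bing + office + azure) / (1 + discount_rate)

low, median, high = np.percentile(strategic_value, [5, 50, 95])
print(f"strategic value to Microsoft: ~${low / 1e9:.0f}B to ~${high / 1e9:.0f}B "
      f"(median ~${median / 1e9:.0f}B)")
```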

I received a number of helpful comments on a draft of this, particularly from two anonymous reviewers.

  1. ^

    Thanks to Pat Myron for this link.

  2. ^

    Someone asked: do we have any historical examples of companies growing enough to actually get a 100x return at this scale? I think there are some, though not many: Saudi Aramco IPO’d at $1.88T, almost exactly 100 times OAI’s $19B valuation.  

Comments (13)



There are public biotech companies with ~$10B valuations and higher price/sales ratios: https://finviz.com/screener.ashx?v=121&f=cap_midover&o=-ps

And private valuations are much more generous than public valuations

Amazing, thanks! I will update the post appropriately.

Private R&D cannot be protected perfectly, because patents expire, industry know-how diffuses to other firms, and not all rents from investments can be captured. There was a leaked memo out of Google recently that said that open-source foundation models are very good and don't need much compute to run. Recently, OpenAI's CEO Altman has often highlighted that their models are not based on any one fundamental technical breakthrough but on thousands of little hacks from tinkering, though perhaps this is wrong and a strategic statement to boost the valuation of the company.

Relatedly: "without the gains of stocks that are possible AI winners, the S&P 500 would now be down 2 per cent this year, rather than up 8 per cent." https://archive.ph/KFMJU

This might suggest that the gains from AI might be distributed more evenly amongst different Big Tech companies and that economies of scope are more important than relatively small technical leads.

A BOTEC indicates that Open AI might have been valued at >200x their annual recurring revenue, which is by far the highest multiple I can find evidence of for a company of that size. This seems consistent with the view that markets think OAI will be exceptional, but not “equivalent to the Industrial Revolution” exceptional, and a variety of factors make this multiple hard to interpret.

I would be very cautious about trying to extract information from private valuations, especially at this still somewhat early stage. Private markets are far less efficient than public ones, large funding rounds are less efficient still, and large funding rounds led by the organization that controls your infrastructure might as well be poker games. 

Agreed that it's hard to precisely interpret private valuations, but if e.g. we knew for certain that OAI would be valued at $1T 10 years from now, it's hard to imagine their round closing for $19B today. So this places some bounds on investor expectations of future growth.

One particularly confusing fact is that OAI's valuation appears to have gone from $14 billion in 2021 to $19 billion in 2023. Even ignoring anything about transformative AI, I would have expected the success of ChatGPT etc. to have resulted in more than a 35% increase.

I've frequently seen people fail to consider the fact that potential competitors (including borderline copycats) can significantly undermine the profitability of one company's technology, even if the technology could generate substantial revenue. Are you taking this into account?

I have not done a formal analysis, but yes, even given the possibility of copycats I still feel like the unexpected success of ChatGPT should have resulted in more than a 35% increase to their value.

Yes, absolutely! It's important to recognize and account for the potential impact of competitors, including those who may closely replicate a company's technology or offerings. Competition in the market can indeed pose a significant threat to the profitability of a company, regardless of how promising its technology might be. Even if a technology has the potential to generate substantial revenue, if competitors emerge with similar or even slightly different solutions, it can lead to a loss of market share and decreased profitability.

Re revenue, $200M is only 833k paid subscribers using ChatGPT with GPT-4 (it's $20/month). This seems like a massive underestimate.

It looks to me like paid subscriptions were not available in January? But regardless: if they have 100M MAUs then 833k paid would be ~1% of their user base which doesn't seem crazy to me?

I would love better numbers here though, if you have them.

Paid subscriptions started with the official release of GPT-4 (March). 100M is likely a significant underestimate now; I don't think the user base saturated there. This says 1B users (but doesn't seem that credible). Also, 1% seems kind of low when the GPT-4 answers are significantly better (I guess you can also get GPT-4 for free on Bing, though). I'd be surprised if there were <10M paid subscribers (cf. Netflix and Spotify with ~200M each).

Cool, I think you know this but: revenue multiples are based on historical data; since the round closed before March, it by definition could not have included ChatGPT Pro subscriptions.

I'm sure OpenAI tried to convince its investors that it was on the verge of making a bunch of money, and maybe investors believed them, who knows. I think it could be useful to redo this analysis under that assumption, if someone wants to. It does seem weird that they are supposedly only forecasting $200 million in revenue this year if they are plausibly getting more than that from just ChatGPT.
