OpenAI is now a public benefit corporation, with a charter that demands they use AGI for the benefit of all, and do so safely. To justify this structure to the Attorneys General of Delaware and California, they split off the nonprofit OpenAI Foundation, and instead of full ownership, gave it 27% equity, worth well over $150 billion - what some have called the largest theft in human history. They also convened a commission to advise them on how to give away that money, and last year announced the first tranche of that giving, evidently funded with their equity.
Announcement
This week, they announced a team and an even larger commitment: giving "at least $1 billion" over the coming year. I argue everyone should agree that this is far too little.
That's because their plan is to use their massive endowment, currently over $150 billion, for charity - though technically, it's not an endowment. That is convenient, because if it were, they would need to give away 5% of their assets every year, currently over $7 billion per year. But even that seems quite conservative, given the possible trajectory of their holdings.
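As a back-of-the-envelope sketch of the 5% figure (the $150 billion stake is the article's estimate; the 5% rate is the standard US private-foundation minimum distribution, applied here as a simple multiplication):

```python
# Rough check of the 5% payout rule discussed above.
# Figures are the article's estimates, not official numbers.

endowment = 150e9     # estimated value of the Foundation's equity stake, USD
payout_rate = 0.05    # standard minimum annual distribution for a private foundation

annual_payout = endowment * payout_rate
print(f"Required annual payout: ${annual_payout / 1e9:.1f} billion")
# → Required annual payout: $7.5 billion
```

At any valuation above $140 billion, the 5% rule alone would require more than the $1 billion actually announced.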
Possible Futures
It seems like there are three relevant possibilities:
- First, OpenAI may succeed in turning a large profit from continued marginal improvements in AI systems, so that the company keeps getting much more valuable, far faster than 5% annual growth. In that case, the endowment would grow in value, on net accumulating rather than distributing wealth. OpenAI’s equity would appreciate greatly, and it would be irresponsible for the nonprofit not to try to spend large parts of that increased value. Given the current valuation, and revenue tripling yearly, even moderate growth means that the nonprofit should be spending tens of billions of dollars per year.
- Second, they may be even more successful, building significantly more powerful AI and transforming the world. Obviously, the nonprofit would become far wealthier, but given OpenAI’s mandate, it would also become irrelevant. That is, the public benefit corporation’s mission to benefit the world is primary; the world becomes far richer and more prosperous, and the nonprofit’s giving becomes less impactful over time. There is no reason to save its money for use after ASI is already eliminating global poverty and curing cancer. Given OpenAI’s declared view that AGI is coming in the next 5 years, and their views about what that future looks like, it’s reasonable to provisionally plan to spend down the entirety of the funds within a decade - again, easily tens of billions of dollars per year.
- Lastly, perhaps they do neither: predictions of an AI bubble come true, or their company specifically falters, and its valuation drops greatly. In this case, stock can still be sold at current inflated prices, and they should be liquidating an even larger portion of their equity now to lock in the money they can distribute.
Regardless of which one it is, the argument for faster spending seems clear.
Actual Plans
So, are they doing that? They have started. Last year they announced a $50 million program, or 0.2% of the commitment, and they have since made $40.5m in grants from that program. And this week, they announced $1b in planned giving this coming year, with program officers who have some experience doing that type of giving.
Obviously, the planned "at least $1 billion" in 2026, starting a year after they launched, is a slow start for such a large program. And at the start, they committed to giving away $25 billion… eventually, with no mention of planned sales. But why would they even need to commit to giving away $25 billion? They are a charity that thinks ASI is coming soon, so they should be planning on spending everything quickly, not eventually.
But in the real world, the (unfortunately typical) process of committing money without significant action is not acceptable, especially given their legal commitments and the demands of the attorneys general overseeing the Foundation.
Yes, $25 billion would be a good start; if it’s done in the next two years, I will admit they are doing their jobs[1]. Unfortunately, what I expect instead is that either the $25 billion is more than they give in the coming few years, and also far less than the amount their stock holdings go up, so that the foundation grows, or that even the $25 billion commitment is made impossible by a crash. Either way, they would be failing their mission if they do not use a substantial portion of the wealth very soon.
I hope I’m wrong, but we’ll see.
Questions and Answers
Q: Shouldn’t the nonprofit save money to spend on AI alignment when it’s more needed?
A: That’s explicitly the purpose of the public benefit corporation. Given the structure of OpenAI, the Foundation should absolutely demand that such work be done, but should not need to fund it separately.
Q: Won’t the OpenAI Foundation dilute their influence or lose control of the company if they sell too many shares?
A: Not according to the agreement with the Attorneys General about the structure. As long as they hold Class-N shares, they have sole authority to appoint the board members of the company.
Q: Isn’t the OpenAI Foundation board identical to the corporation’s board, so they have no incentive to do this correctly?
A: Yes, this seems to be a severe drawback of their current governance model and the board’s composition.
Q: Even if they sell shares, why spend the money immediately?
A: Optimal timelines for giving are complex, but given the expected trajectory of OpenAI - especially if they are correct that we’re close to beneficial ASI - it seems very hard to justify saving most of the money for later. But as argued above, regardless of what they believe the future holds, they should be rapidly giving away money - and the opportunities exist already.
Q: Couldn’t selling shares cause the price to collapse, making this a self-fulfilling reason for OpenAI to decline?
A: If liquidating shares itself could collapse the price, then the fundamental value of the product and expected revenue don't support the company's valuation, and it's a bubble. Selling could be seen as evidence that OpenAI lacks confidence, but that inference is far more likely if they sell without clear reasons, i.e. if they aren’t actually spending the cash.
Q: Even if the company is solid, wouldn’t liquidating the shares cause the value of employee and investor shares to go down?
A: Probably, but that is not the Foundation’s responsibility, and prioritizing it would run contrary to their recent agreements with state attorneys general. Their responsibility is to do charitable things with their assets, and if they nevertheless decided that safeguarding OpenAI employees’ short-term profit overrides the mission, the nonprofit would explicitly not be doing its job.
Thanks to Max Dalton, Ozzie Gooen, Jakob Graabak, and Nuño Sempere for comments on an earlier draft.
[1] They gave $50m in 2025 and plan $1b in 2026, so if they keep accelerating at their recent trajectory, that's 20x per year. Perhaps they'll plan on giving $20b in 2027 and, given that they expect AGI before then, maintain the acceleration and give $400b in 2028.

To forestall an obvious objection: I do not endorse OpenAI's decision to use this structure, and there are many other problems with it. However, the above arguments apply even granting the views they profess, which seems important.
In this scenario, wouldn't it be much better if the non-profit didn't spend its money now? By holding onto the money now, it'd have much more to give later. Put another way: imagine the grantees receiving the money were asked "would you prefer $100 today or $10,000 in 6 years?" - many would take the latter.
One frame that might make this argument more compelling is that if OAI ends up building AGI and ends up having astronomical value, then the foundation is sitting on humanity's endowment. Spending it down now before it's realized its value could be very costly.
No, it would not. As for the frame that supposedly makes the argument more compelling: that is exactly the second option above, in which they build significantly more powerful AI and transform the world. The nonprofit would become far wealthier, but given OpenAI’s mandate, its giving also becomes irrelevant.
But within the first option, if they are actually more than doubling their value yearly (as implied by 100x in 6 years, which matches their current revenue growth continuing at its current rate), then giving away $20 billion per year, starting at their current valuation of $150 billion, means giving away only a small fraction of their eventual endowment - about 13%. And in that case, given that it's hard to spend 13% of $150b effectively, it will be far harder to spend any large percentage of a $15 trillion endowment in later years!
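The "about 13%" figure can be checked with a simple sketch, under the simplifying assumption that holdings exactly double each year (the text says "more than doubling") and that $20 billion is given away at the end of each year; the fraction forgone is the gap between the untouched endowment and the endowment-with-giving, relative to the untouched total:

```python
# Sketch of the ~13% claim: how much of the counterfactual final endowment
# is forgone by giving $20B/year while the rest doubles annually?
# All figures come from the text above; the exact 2x growth is a simplification.

start = 150.0      # current valuation attributed to the nonprofit, $B
growth = 2.0       # assume holdings exactly double each year
give = 20.0        # annual giving, $B
years = 6

held = start       # endowment trajectory with annual giving
untouched = start  # endowment trajectory with no giving at all
for _ in range(years):
    held = held * growth - give   # grow, then give at end of year
    untouched *= growth

forgone_fraction = (untouched - held) / untouched
print(f"With giving:   ${held:,.0f}B")       # $8,340B
print(f"Untouched:     ${untouched:,.0f}B")  # $9,600B
print(f"Forgone:       {forgone_fraction:.1%}")  # 13.1%
```

With these assumptions the result is 13.1%, consistent with the "about 13%" in the text; faster-than-doubling growth would make the forgone fraction even smaller.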