We just published an interview: Emergency pod: Judge plants a legal time bomb under OpenAI (with Rose Chan Loui). Listen on Spotify, watch on YouTube, or click through for other audio options, the transcript, and related links.
Episode summary
…if the judge thinks that the attorney general is not acting for some political reason, and they really should be, she could appoint a ‘special interest party’…. That’s the court saying, “I’m not seeing the public’s interest sufficiently protected here.” — Rose Chan Loui
When OpenAI announced plans to convert from nonprofit to for-profit control last October, it likely didn’t anticipate the legal labyrinth it now faces. A recent court order in Elon Musk’s lawsuit against the company suggests OpenAI’s restructuring faces serious legal threats, which will complicate its efforts to raise tens of billions in investment.
As nonprofit legal expert Rose Chan Loui explains, the court order set up multiple pathways for OpenAI’s conversion to be challenged. Though Judge Yvonne Gonzalez Rogers denied Musk’s request to block the conversion before a trial, she expedited proceedings to the fall so the case could be heard before the conversion is likely to go ahead. (See Rob’s brief summary of developments in the case.)
And if Musk’s donations to OpenAI are enough to give him standing to bring the case, Judge Rogers sounded very sympathetic to his objections to the OpenAI foundation selling the company in a way that benefits its founders, who forswore “any intent to use OpenAI as a vehicle to enrich themselves.”
But that’s just one of multiple threats. The attorneys general (AGs) in California and Delaware both have standing to object to the conversion on the grounds that it is contrary to the foundation’s charitable purpose and therefore wrongs the public — which was promised all the charitable assets would be used to develop AI that benefits all of humanity, not to win a commercial race. Some, including Rose, suspect the court order was written as a signal to those AGs to take action.
And, as she explains, if the AGs remain silent, the court itself, seeing that the public interest isn’t being represented, could appoint a “special interest party” to take on the case in their place.
This places the OpenAI foundation board in a bind: proceeding with the restructuring despite this legal cloud could expose them to the risk of being sued for a gross breach of their fiduciary duty to the public. The board is made up of respectable people who didn’t sign up for that.
And of course it would cause chaos for the company if all of OpenAI’s fundraising and governance plans were brought to a screeching halt by a federal court judgment landing at the eleventh hour.
Host Rob Wiblin and Rose Chan Loui discuss all of the above as well as what justification the OpenAI foundation could offer for giving up control of the company despite its charitable purpose, and how the board might adjust their plans to make the for-profit switch more legally palatable.
This episode was originally recorded on March 6, 2025.
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions: Katy Moore
I would like to see more developed thinking in EA circles about what a plausible remedy would look like if Musk prevails here. The possibility of "some kind of middle ground here" was discussed on the podcast, and I'd keep those kinds of outcomes in mind if Musk were to prevail at trial.
In @Garrison's helpful writeup, he observes that:
And I would guess that's going to be a key element of OpenAI's argument at trial. They may assert that subsequent developments establish that nonprofit development of AI is financially infeasible, that they are going to lose the AI arms race without massive cash infusions, and that obtaining infusions while the nonprofit is in charge isn't viable. If the signs are clear enough that the mission as originally envisioned is doomed to fail, then switching to a backup mission doesn't seem necessarily unreasonable under general charitable-law principles to me. The district court didn't need to go there at this point given that the existence of an actual contract or charitable trust between the parties is a threshold issue, and I am not seeing much on this point in the court's order.
To me, this is not only a defense for OpenAI but is also intertwined with the question of remedy. A permanent injunction is not awarded to a prevailing party as a matter of right. Rather:
According to well-established principles of equity, a plaintiff seeking a permanent injunction must satisfy a four-factor test before a court may grant such relief. A plaintiff must demonstrate: (1) that it has suffered an irreparable injury; (2) that remedies available at law, such as monetary damages, are inadequate to compensate for that injury; (3) that, considering the balance of hardships between the plaintiff and defendant, a remedy in equity is warranted; and (4) that the public interest would not be disserved by a permanent injunction.
eBay Inc. v. MercExchange, L.L.C., 547 U.S. 388 (2006) (U.S. Supreme Court decision).
The district court's discussion of the balance of equities focuses on the fact that "Altman and Brockman made foundational commitments foreswearing any intent to use OpenAI as a vehicle to enrich themselves." It's not hard to see how an injunction against payola for insiders would meet traditional equitable criteria.
But an injunction that could pose a significant existential risk to OpenAI's viability could run into some serious problems on prong four. It's not likely that the district court would conclude the public interest affirmatively favors Meta, Google, xAI, or the like reaching AGI first as opposed to OpenAI. There is a national-security angle to the extent that the requested injunction might increase the risk of another country reaching AGI first. And to the extent that the cash from selling off OpenAI control would be going to charitable ends rather than lining Altman's pockets, it's going to be hard to argue that OpenAI's board has a fiduciary duty to just shut it all down and vanish ~$100B in charitable assets into thin air.
And put in more EA-coded language: the base rate of courts imploding massive businesses (or charities) is not exactly high. One example in which something like this did happen was the breakup of the Bell System in 1982, but it wasn't quick, the evidence of antitrust violations was massive, and there just wasn't any other plausible remedy. Another would be the breakup of Standard Oil in 1911, again a near-monopoly with some massive antitrust problems.
If OpenAI is practically enjoined from raising the capital needed to achieve its goals, the usual responsible course for a charity that can no longer function effectively is to sell off its assets and distribute the proceeds to other nonprofits. Think of a nonprofit set up to run a small rural hospital that is no longer viable on its own. It might prefer to merge with another nonprofit, but selling the whole hospital to a for-profit chain is usually the next-best option, with selling the land and equipment as a backup. In a certain light, how different would such a sale be from what OpenAI is proposing to do? I'd want to think more about that . . .
With Musk as plaintiff, there are also some potential concerns on prong three relating to laches (the idea that Musk slept on his rights and thereby prejudiced OpenAI-related parties). Although I'm not sure whether the interests of OpenAI investors and employees (other than Altman and Brockman) with equity-like interests would be analyzed under prong three or prong four, Musk does seem to have sat on his rights while others invested cash and/or sweat equity in OpenAI. In contrast, "[t]he general principle is, that laches is not imputable to the government . . . ." United States v. Kirkpatrick, 22 U.S. (9 Wheat.) 720, 735 (1824). I predict that any relief granted to Musk will need to take account of these third-party interests, especially because the investments were made while Musk slept on his rights. Avoiding a laches argument is another advantage of a governmental litigant over Musk (although the third-party interests would still have to be considered).
All that is to say: while "this is really important and what OpenAI wants is bad" may be an adequate basis for public advocacy for now, I think there will at some point need to be a judicially and practically viable plan for what appropriate relief looks like. Neither side in the litigation would be a credible messenger on this point, as OpenAI is compromised and its competitor Musk would like to pick off assets for his own profit and power-seeking purposes. I think that's one of the places where savvy non-party advocacy could make a difference.
Would people rather see OpenAI sold off to whatever non-insider bidder the board determines would be best, possibly with some judicial veto of a particularly bad choice? Would people prefer that a transition of some sort go forward, subject to imposition of some sort of hobbles that would slow OpenAI down and require some safety and ethics safeguards? These are the sorts of questions on which I think a court would be more likely to defer to the United States as an amicus and/or to the state AGs, and would be more likely to listen to subject-matter experts and advocacy groups who sought amicus status.
To me, "advanc[ing] digital intelligence in the way that is most likely to benefit humanity as a whole" does not require OpenAI to build AGI at all. Indeed, the same mission statement could be said to apply to, e.g., Redwood Research.
Further evidence for this view comes from OpenAI's old merge-and-assist clause, which indicates that they'd be willing to fold and assist a different company if the other company is a) within 2 years of building AGI and b) sufficiently good.