
We just published an interview: Emergency pod: Judge plants a legal time bomb under OpenAI (with Rose Chan Loui). Listen on Spotify, watch on YouTube, or click through for other audio options, the transcript, and related links.

Episode summary

…if the judge thinks that the attorney general is not acting, for some political reason, and they really should be, she could appoint a ‘special interest party’…. That’s the court saying, “I’m not seeing the public’s interest sufficiently protected here.”

— Rose Chan Loui

When OpenAI announced plans to convert from nonprofit to for-profit control last October, it likely didn’t anticipate the legal labyrinth it now faces. A recent court order in Elon Musk’s lawsuit against the company suggests OpenAI’s restructuring faces serious legal threats, which will complicate its efforts to raise tens of billions in investment.

As nonprofit legal expert Rose Chan Loui explains, the court order set up multiple pathways for OpenAI’s conversion to be challenged. Though Judge Yvonne Gonzalez Rogers denied Musk’s request to block the conversion before a trial, she expedited proceedings to the fall so the case could be heard before the conversion is likely to go ahead. (See Rob’s brief summary of developments in the case.)

And if Musk’s donations to OpenAI are enough to give him the right to bring a case, Rogers sounded very sympathetic to his objections to the OpenAI foundation selling the company, benefiting the founders who forswore “any intent to use OpenAI as a vehicle to enrich themselves.”

But that’s just one of multiple threats. The attorneys general (AGs) in California and Delaware both have standing to object to the conversion on the grounds that it is contrary to the foundation’s charitable purpose and therefore wrongs the public — which was promised all the charitable assets would be used to develop AI that benefits all of humanity, not to win a commercial race. Some, including Rose, suspect the court order was written as a signal to those AGs to take action.

And, as she explains, if the AGs remain silent, the court itself, seeing that the public interest isn’t being represented, could appoint a “special interest party” to take on the case in their place.

This places the OpenAI foundation board in a bind: proceeding with the restructuring despite this legal cloud could expose them to the risk of being sued for a gross breach of their fiduciary duty to the public. The board is made up of respectable people who didn’t sign up for that.

And of course it would cause chaos for the company if all of OpenAI’s fundraising and governance plans were brought to a screeching halt by a federal court judgment landing at the eleventh hour.

Host Rob Wiblin and Rose Chan Loui discuss all of the above as well as what justification the OpenAI foundation could offer for giving up control of the company despite its charitable purpose, and how the board might adjust their plans to make the for-profit switch more legally palatable.

This episode was originally recorded on March 6, 2025.

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions: Katy Moore

Comments

If you're in EA in California or Delaware and believe OpenAI has a significant chance of achieving AGI first and there being a takeoff, it's probably time-effective to write a letter to your AG encouraging them to pursue action against OpenAI. OpenAI's nonprofit structure isn't perfect, but it's infinitely better than a purely private company would be.

Thanks, I wrote a letter to my California AG because of this comment. See here for a workflow someone else made to write a letter to the California or Delaware AG. My letter is here if anyone wants to take a look for inspiration.

For an AG, should you handwrite the letter, like with congressmember offices, or type and print it like with normal legal work? 

Congressional offices often ignore typed letters because they learned years ago that some people set up mills that mass-produce cookie-cutter letters mimicking civic engagement, instead of the legitimate approach of reaching out to interested people and convincing them to write their own letters; offices therefore treat handwritten letters as a costly signal that a large number of highly engaged people are involved. But if attorneys general aren't in an equilibrium like that, then they'd probably prefer typed and printed letters.

Would be really curious to see what evidence you're looking at re: congressional offices ignoring typed letters. The last thing I saw on this showed individualized emails slightly outperforming individualized hand-written letters, but both far outperforming form-based emails, probably for reasons you mention (from this post).

Also, I spent some time looking for grassroots campaigns to state AG offices earlier this year and found ~none, so I think there's a good chance the novelty of any grassroots outreach might be more impactful than it is for congressional offices. That's pure speculation on my part though.

Christ, why isn’t OpenPhil taking any action, even making a comment or filing an amicus curiae brief?

I certainly hope there’s some legitimate process going on behind the scenes; this seems like an awfully good time to spend whatever social/political/economic/human capital OP leadership wants to say is the binding constraint.

And OP is an independent entity. If the main constraint is “our main funder doesn’t want to pick a fight,” well so be it—I guess Good Ventures won’t sue as a proper donor the way Musk has; OP can still submit some sort of non-litigant comment. Naively, at least, that could weigh non-trivially on a judge/AG.

[warning: speculative]

As potential plaintiff: I get the sense that OP & GV are more professionally run than Elon Musk's charitable efforts. When handing out this kind of money for this kind of project, I'd normally expect them to have negotiated terms with the grantee and memorialized them in a grant agreement. There's a good chance that agreement would have a merger clause, which confirms that (e.g.) there are no oral agreements or side agreements. Attorneys regularly use these clauses to prevent either side from getting out of or going beyond the negotiated final written agreement. Even if there isn't a merger clause, the presence of a comprehensive grant agreement would likely make it harder for the donor to show that a trust had been created, that the donor had a reversionary interest, or so on if the agreement didn't say those things.

As potential source of evidence: I'd at least consider the possibility that people associated with OP and/or GV could be witnesses at trial or could provide documentary evidence -- e.g., if there is a dispute over what representations OpenAI was making to major donors to secure funding. That might counsel keeping quiet at this juncture, particularly considering the next point.

As a potential amicus: I expect the court would either reject or ignore an amicus filing at this stage in the process. The court has jurisdiction over a claim by Elon Musk and xAI that OpenAI violated antitrust law, violated a contract or trust with Musk under California charitable law, etc. If OP/GV tried to submit an amicus brief on most of the actually relevant legal issues on a preliminary injunction, the court would likely see this as an improper attempt to effectively buy additional pages of argument for Musk & xAI.[1] To the extent that the amicus brief was about a legally peripheral issue -- like AI as a GCR -- it would likely be read by a law clerk (a bright recent graduate) who would tell the judge something like "This foundation submitted an amicus brief arguing that AI may go rogue and kill us all. Doesn't seem relevant to the issues in this case." 

Note that I think there is a potential role for amici later in this case, but the preliminary-injunction stage was not it.
 

  1. ^

    See Ryan v. Commodity Futures Trading Com’n, 125 F.3d 1062, 1063 (7th Cir. 1997) (Posner, C.J., in chambers) ("The vast majority of amicus curiae briefs are filed by allies of litigants and duplicate the arguments made in the litigants’ briefs, in effect merely extending the length of the litigant’s brief. Such amicus briefs should not be allowed. They are an abuse. The term 'amicus curiae' means friend of the court, not friend of a party."). In my experience, these sorts of amicus briefs do have a place when the core legal issue is of broad importance but the litigant lacks either the means or incentive to put forth their best argument.

I expect the court would either reject or ignore an amicus filing at this stage in the process.

Worth noting that the court has already accepted two amicus briefs on the preliminary injunction, one by Encode Justice and the other by the Delaware AG.

To the extent that the amicus brief was about a legally peripheral issue -- like AI as a GCR -- it would likely be read by a law clerk (a bright recent graduate) who would tell the judge something like "This foundation submitted an amicus brief arguing that AI may go rogue and kill us all. Doesn't seem relevant to the issues in this case."

Although this is of course speculation, I wonder if this is the type of reaction that the Encode Justice brief (and possible future ones like it) might have received. Reading the brief, I could definitely see how it might come across as trying to deputize the court into a policy question, while not really hitting upon issues that are hugely relevant to the case actually in front of the court.

*Edited to fix a typo

The vast majority of amicus curiae briefs are filed by allies of litigants and duplicate the arguments made in the litigants’ briefs, in effect merely extending the length of the litigant’s brief. Such amicus briefs should not be allowed. They are an abuse. The term 'amicus curiae' means friend of the court, not friend of a party.

I agree that most such briefs are from close ideological allies, but I'm curious about your suggestion that the court would reject them on this ground. Surely all the organizations filing somewhat duplicative amicus curiae briefs all the time do so because they think it is helpful?

That quotation is from an order by then-Chief Judge Posner of the Seventh Circuit denying leave to file an amicus brief on such a basis. Judge Posner was, and the Seventh Circuit is, more of a stickler for this sort of thing (and both were/are more likely to call lawyers out for not following the rules than other courts). Other courts are less likely to actually kick an amicus brief -- that requires more work than just ignoring it! -- but I think Judge Posner's views would enjoy general support among the federal judiciary.

There's a literature on whether amicus briefs are in general helpful vs. being a waste of money, although it mostly focuses on the Supreme Court (e.g., this article surveys some prior work and reflects interviews with former clerks, but is a bit dated). I don't see an amicus brief on the preliminary injunction here hitting many of the notes the former clerks identified as markers of value in that article. Whether there was a charitable trust between Musk and OpenAI isn't legally esoteric, there's no special perspective the amicus can bring to bear on that question, and so on. 

You're right insofar as amicus briefs are common at the Supreme Court level, although they are not that common in the courts of appeals (at least when I clerked) and I think they are even less common at the district court level in comparison to the number of significant cases. So I would not view their relative prevalence at the Supreme Court level as strong information in either direction on how effective an amicus brief might be here.

Judges are busy people; if a would-be amicus seeks to file an unhelpful amicus brief at one stage of the litigation, it's pretty unlikely the judge is going to even touch another brief from that amicus at a later stage. If I were a would-be amicus, I would be inclined to wait until I thought I had something different enough than the parties to say -- or thought that I would be seen as a more credible messenger than the parties on a topic directly relevant to a pending decision -- before using my shot.

I agree this is absurd; this is probably the most obvious action Open Phil has not taken. What do they have to lose at this stage by filing a lawsuit, or at the very least, like you say, making an official comment?

Perhaps EAs and EA orgs are just by nature largely allergic to open public conflict even if it has decent potential to do good?

I would like to see more developed thinking in EA circles about what a potential and plausible remedy is if Musk prevails here. The possibility of "some kind of middle ground here" was discussed on the podcast, and I'd keep those kinds of outcomes in mind if Musk were to prevail at trial. 

In @Garrison's helpful writeup, he observes that: 

OpenAI's ability to attract enough investment to compete may be dependent on it being structured more like a typical company. The fact that it agreed to such onerous terms in the first place implies that it had little choice.

And I would guess that's going to be a key element of OpenAI's argument at trial. They may assert that subsequent developments establish that nonprofit development of AI is financially infeasible, that they are going to lose the AI arms race without massive cash infusions, and that obtaining infusions while the nonprofit is in charge isn't viable. If the signs are clear enough that the mission as originally envisioned is doomed to fail, then switching to a backup mission doesn't seem necessarily unreasonable under general charitable-law principles to me. The district court didn't need to go there at this point given that the existence of an actual contract or charitable trust between the parties is a threshold issue, and I am not seeing much on this point in the court's order.

To me, this is not only a defense for OpenAI but is also intertwined with the question of remedy. A permanent injunction is not awarded to a prevailing party as a matter of right. Rather:

According to well-established principles of equity, a plaintiff seeking a permanent injunction must satisfy a four-factor test before a court may grant such relief. A plaintiff must demonstrate: (1) that it has suffered an irreparable injury; (2) that remedies available at law, such as monetary damages, are inadequate to compensate for that injury; (3) that, considering the balance of hardships between the plaintiff and defendant, a remedy in equity is warranted; and (4) that the public interest would not be disserved by a permanent injunction.

eBay Inc. v. MercExchange, L.L.C., 547 U.S. 388 (2006) (U.S. Supreme Court decision).

The district court's discussion of the balance of equities focuses on the fact that "Altman and Brockman made foundational commitments foreswearing any intent to use OpenAI as a vehicle to enrich themselves." It's not hard to see how an injunction against payola for insiders would meet traditional equitable criteria. 

But an injunction that could pose a significant existential risk to OpenAI's viability could run into some serious problems on prong four. It's not likely that the district court would conclude the public interest affirmatively favors Meta, Google, xAI, or the like reaching AGI first as opposed to OpenAI. There is a national-security angle to the extent that the requested injunction might increase the risk of another country reaching AGI first. And to the extent that the cash from selling off OpenAI control would be going to charitable ends rather than lining Altman's pockets, it's going to be hard to argue that OpenAI's board has a fiduciary duty to just shut it all down and vanish ~$100B in charitable assets into thin air. 

And put in more EA-coded language: the base rate of courts imploding massive businesses (or charities) is not exactly high. One example in which something like this did happen was the breakup of the Bell System in 1982, but it wasn't quick, the evidence of antitrust violations was massive, and there just wasn't any other plausible remedy. Another would be the breakup of Standard Oil in 1911, again a near-monopoly with some massive antitrust problems.

If OpenAI is practically enjoined from raising enough capital needed to achieve its goals, the usual responsible thing for a charity that can no longer effectively function is to sell off its assets and distribute the proceeds to other non-profits. Think about a non-profit set up to run a small rural hospital that is no longer viable on its own. It might prefer to merge with another non-profit, but selling the whole hospital to a for-profit chain is usually the next-best option, with selling the land and equipment as a backup option. In a certain light, how different might a sale be from what OpenAI is proposing to do? I'd want to think more about that . . . 

With Musk as plaintiff, there are also some potential concerns on prong three relating to laches (the idea that Musk slept on his rights and prejudiced OpenAI-related parties as a result). Although I'm not sure if the interests of OpenAI investors and employees (who are not Altman and Brockman) with equity-like interests would be analyzed under prong three or four, it does seem that he sat around without asserting his rights while others invested cash and/or sweat equity into OpenAI. In contrast, "[t]he general principle is, that laches is not imputable to the government . . . ." United States v. Kirkpatrick, 22 U.S. (9 Wheat.) 720, 735 (1824). I predict that any relief granted to Musk will need to take account of these third-party interests, especially because they were invested in while Musk slept on his rights. The avoidance of a laches argument is another advantage of a governmental litigant as opposed to Musk (although the third-party interests would still have to be considered).

All that is to say: while "this is really important and what OpenAI wants is bad" may be an adequate public advocacy basis for now, I think there will need to be a judicially and practically viable plan for what appropriate relief looks like at some point. Neither side in the litigation would be a credible messenger on this point, as OpenAI is compromised and its competitor Musk would like to pick off assets for his own profit and power-seeking purposes. I think that's one of the places where savvy non-party advocacy could make a difference.

Would people rather see OpenAI sold off to whatever non-insider bidder the board determines would be best, possibly with some judicial veto of a particularly bad choice? Would people prefer that a transition of some sort go forward, subject to imposition of some sort of hobbles that would slow OpenAI down and require some safety and ethics safeguards? These are the sorts of questions on which I think a court would be more likely to defer to the United States as an amicus and/or to the state AGs, and would be more likely to listen to subject-matter experts and advocacy groups who sought amicus status.

They may assert that subsequent developments establish that nonprofit development of AI is financially infeasible, that they are going to lose the AI arms race without massive cash infusions, and that obtaining infusions while the nonprofit is in charge isn't viable. If the signs are clear enough that the mission as originally envisioned is doomed to fail, then switching to a backup mission doesn't seem necessarily unreasonable under general charitable-law principles to me

I'm confused about this line of argument. Why is losing the AI arms race relevant to whether the mission as originally envisioned is doomed to fail?

I tried to find the original mission statement. Is the following correct?

OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact. 

If so, I can see how OpenAI could try to argue that "advanc[ing] digital intelligence in the way that is most likely to benefit humanity as a whole" necessitates them "winning the AI arms race", but I don't exactly see why an impartial observer should grant them that.

Why is losing the AI arms race relevant to whether the mission as originally envisioned is doomed to fail?

It depends on what exactly "losing the AI arms race" means, which is in turn influenced by how big the advantages of being first (or one of the first) to AGI are. If the mission was to "advance digital intelligence," and it was widely understood that the mission involved building AGI and/or near-AGI, that would seem to imply some sort of technological leadership position was prerequisite to mission success. I agree that being first to AGI isn't particularly relevant to succeeding at the mission. But if they can't stay competitive with Google et al., it's questionable whether they can meaningfully achieve the goal of "advanc[ing] digital intelligence."

So for instance, if OpenAI's progress rate were to be reduced by X% due to the disadvantages in raising capital it faces on account of its non-profit structure, would that be enough to render it largely irrelevant as other actors quickly passed it and their lead grew with every passing month? I think a lot would depend on what X% is. A range of values seem plausible to me; as I mentioned in a different comment I just submitted, I suspect that fairly probative evidence on OpenAI's current ability to fundraise with its non-profit structure exists but is not yet public.

(I found the language you quoted going back to 2015, so it's probably a fair characterization of what OpenAI was telling donors and governmental agencies at the beginning.)

To me, "advanc[ing] digital intelligence in the way that is most likely to benefit humanity as a whole" does not necessitate them building AGI at all. Indeed the same mission statement can be said to apply to e.g. Redwood Research.

Further evidence for this view comes from OpenAI's old merge-and-assist clause, which indicates that they'd be willing to fold and assist a different company if the other company is a) within 2 years of building AGI and b) sufficiently good.

Thanks for sharing this, very informative and helpful for highlighting a potential leverage point; strong upvoted.

One minor point of disagreement: I think you are being a bit too pessimistic here:

And put in more EA-coded language: the base rate of courts imploding massive businesses (or charities) is not exactly high. One example in which something like this did happen was the breakup of the Bell System in 1982, but it wasn't quick, the evidence of antitrust violations was massive, and there just wasn't any other plausible remedy. Another would be the breakup of Standard Oil in 1911, again a near-monopoly with some massive antitrust problems.

There are few examples of US courts blowing up large US corporations, but that is not exactly the situation here. OpenAI might claim that preventing a for-profit conversion would destroy or fatally damage the company, but they do not have proof. There is a long history of businesses exaggerating the harm from new regulations, claiming they will be ruinous when actually human ingenuity and entrepreneurship render them merely disadvantageous. The fact is that thus far OpenAI has raised huge amounts of money and been at the forefront of scaling with its current hybrid structure, and I think a court could rightfully be skeptical of claims without proof that this cannot continue.

I think a closer example might be when the DC District Court sided with the FTC and blocked the Staples-Office Depot merger on somewhat dubious grounds. The court didn't directly implode a massive retailer... but Staples did enter administration shortly afterwards, and my impression at the time was the causal link was pretty clear.

OpenAI might claim that preventing a for-profit conversion would destroy or fatally damage the company, but they do not have proof. [ . . . .] The fact is that thus far OpenAI has raised huge amounts of money and been at the forefront of scaling with its current hybrid structure, and I think a court could rightfully be skeptical of claims without proof that this cannot continue.

Yes, that's the counterargument. I submit that there is likely to be pretty relevant documentary and testimonial evidence on this point, but we don't know which way it would go. So I don't have any clear opinion on whether OpenAI's argument would work and/or how much these kinds of concerns would shape the scope of injunctive relief.

OpenAI agreed to terms that I would almost characterize as a poison pill: if the transformation doesn't move forward on time, the investors can get that $6.6B back. It may be that would-be investors were not willing to put up enough money to keep OpenAI going without a commitment to refund if the non-profit board were not disempowered. As you mentioned, corporations exaggerate the detrimental impact of legal requirements they don't like all the time! But the statements and actions of multiple, independent third-party investors should be less infected on this issue. If an inability to secure adequate funding as a non-profit is what this evidence points toward, I think that would be enough to establish a prima facie case and require proponents to put up evidence of their own to rebut that case.

So who will make that case? It's not clear Musk will assert that OpenAI can stay competitive while remaining a non-profit; his expression of a desire “[o]n behalf of a consortium of buyers,” “to acquire all assets . . . of OpenAI” for $97,375,000,000 (Order at 14 n.10) suggests he may not be inclined to advocate for OpenAI's ability to use its own assets to successfully advance its mission.

There's also the possibility that the court would show some deference on this question to the business judgment of OpenAI's independent board members if people like Altman and Brockman were screened off enough. It seems fairly clear to me that everyone understood early on there would need to be some for-profit elements in the mix, and so I think the non-conflicted board members may get some benefit of the doubt in figuring that out.

To the extent that evidence from the recent fundraising cycle supports the risk-of-fatal-damage theory, I suspect the relevance of fundraising success that occurred prior to the board controversy may be limited. I think it would be reasonable to ascribe lowered funder willingness to tolerate non-profit control to that controversy.

Rob Wiblin: I see. So the thing there is that the attorneys general in California, in Delaware, they definitely have standing to object to what is going on, but they might not feel resourced to do it themselves. Musk has said, “Empower me. Give me standing to object to this.”

And they haven’t replied yet — perhaps understandably, because I guess they’re Democrats and also just Musk in general is a very controversial figure; they may not want to deputise Musk to go out to bat for them. I think if they wanted to give someone else that authority, probably they would choose a different party.

In my view, Musk would make a terrible relator here, and for reasons that have nothing to do with partisan affiliation. He has his own personal interests in hand -- such as his interest in xAI and in being part of a consortium allegedly seeking to purchase OpenAI assets -- and there's a serious conflict between those considerable personal interests and dispassionate advocacy of the public interest.
