Jason

17499 karma · Joined · Working (15+ years)

Bio

I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I had occasionally read the Forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .

How I can help others

As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.


Comments

Jason

I have a complicated reaction.

1. First, I think @NickLaing is right to point out that there's a missing mood here and to express disappointment that it isn't being sufficiently acknowledged.

2. My assumption is that the direction change is motivated by factors like:

  • A view of AI as a particularly time-sensitive area right now vs. areas like GHD often having a slower path to marginal impact (in part because the existing, funding-constrained work in those areas is already excellent and strong).
  • An assumption that there are / will be many more net positions to fill in AI safety for the next few years, especially to the extent one thinks that funding will continue to shift in this direction. (Relatedly, one might think there will be relatively few positions to fill in certain other cause areas.)

    I would suggest that these kinds of views and assumptions don't imply that people who are already invested in other cause areas should shift focus. People who are already on a solid path to impact are not, as I understand it, 80K's primary target audience.

3. I'm generally OK with 80K going in this direction if that is what its staff, leadership, and donors want. I've taken a harder-line stance on this sort of thing where I see something as core infrastructure that is a natural near-monopoly (e.g., the Forum, university groups) -- in which case I think there's an enhanced obligation to share the commons. Here, there's nothing inherent about career advising that is near-monopolistic (cf. Probably Good and Animal Advocacy Careers, which operate in analogous spaces). Thus, to the extent that there are advisors interested in giving advice in other cause areas, advisees interested in receiving it, and funders interested in supporting it, there's no clear reason why alternative advisors would not fill the gap 80K leaves here. I would expect the new 80K to make at least passing reference to the existence of other EA career advice services for those who decide they want to work in another cause area. I'd like to have seen more lead time, but I get that the situation in AI is rapidly evolving and that this is a reaction to external developments.

4. I think part of the solution is to stop thinking of 80K as (quoting Nick's comment) "one of the top 3 or so EA orgs" in the same sense one might have before this shift. Of course, it's an EA org in the same sense that (e.g.) Animal Advocacy Careers is an EA org, but after today's announcement it shouldn't be seen as a broad-tent EA org in the same vein as (e.g.) GWWC. Therefore, we should be careful not to read a shift in the broader community's cause prio into 80K's statements or direction. This may change how we interact with it and defer (or not) to it in the future. For example, if someone wants to point a person toward broad-based career advice, Probably Good is likely the most appropriate choice.

5. I too am concerned about the EA funnel / onramp / tone-setting issues that others have written about, but I don't have much to add on those.

It might be helpful to clarify what you mean by "moral hazard" here.

Jason

Personally, I'm optimistic that this could be done in specific ways that could be better than one might initially presume. One wouldn't fund "CEA" - they could instead fund specific programs in CEA, for instance. I imagine that people at CEA might have some good ideas of specific things they could fund that OP isn't a good fit for. 

That may be viable, although I think it would be better for both sides if these programs were housed not in CEA but in an independent organization. For the small-donor side, separation limits the risk that their money will just funge against OP/GV's, or that OP/GV will influence how the community-funded program is run (e.g., through its influence on CEA management). On the OP/GV side, organizational separation is probably necessary to provide some of the reputational distance it may be looking for. That being said, given that small/medium donors have never to my knowledge been offered this kind of opportunity, and given the significant coordination obstacles involved, I would not treat their failure to take it as indicative of much in particular.

~

More broadly, I think this is a challenging conversation without nailing down the objective better -- and that may be hard for us on the Forum to do. Without any inside knowledge, my guess is that OP/GV's concerns are not primarily focused on the existence of discrete programs "that OP isn't a good fit for" or a desire not to fund them. 

For example, a recent public comment from Dustin contains the following sentence: "But I can't e.g. get SBF to not do podcasts nor stop the EA (or two?) that seem to have joined DOGE and started laying waste to USAID." The concerns implied by that statement aren't really fixable by the community funding discrete programs, or even by shelving discrete programs altogether. Not being the flagship EA organization's predominant donor may not be sufficient for getting reputational distance from that sort of thing, but it's probably a necessary condition.

I speculate that other concerns may be about the way certain core programs are run -- e.g., I would not be too surprised to hear that OP/GV would rather not have particular controversial content allowed on the Forum, or have advocates for certain political positions admitted to EAGs, or whatever. I'm not going to name the content I have in mind in an attempt not to be drawn into an object-level discussion on those topics, but I wouldn't want my own money being used to platform such content or help its adherents network either. Anyway, these types of issues can probably be fixed by running the program with community/other-donor funding in a separate organization, but these programs are expensive to run. And the community / non-OP/GV donors are not a monolithic constituency; I suspect that at least a significant minority of the community would share OP/GV's concerns on the merits. 

Lastly, I'd flag that CEA being 90% OP/GV funded really can be quite different than 70% in some important ways, still. For example, if OP/GV were to leave - then CEA might be able to go to 30% of its size - a big loss, but much better than 10% of its size.  

I agree -- the linked comment was focused more on the impact of funding diversity on conflicts of interest and cause prio. But the amount of smaller-EA-donor dollars to go around is limited,[1] and so we have to consider the opportunity cost of diverting them to fund CEA or similar meta work on an ongoing basis. OP/GV is usually a pretty responsible funder, so the odds of it suddenly defunding CEA without providing some sort of notice and transitional funding seem low.

  1. ^

    For instance, I believe GWWC pledgers gave about $32MM/year on average from 2020-2022 [p. 12 of this impact assessment], and not all pledgers are EAs.

Jason

I'd love to see some other EA donors and community members step up here. I think it's kind of damning how little EA money comes from community members or sources other than OP right now. Long-term this seems pretty unhealthy. 

There was some prior relevant discussion in November 2023 in this CEA fundraising thread, such as my comment here about funder diversity at CEA. Basically, I didn't think that there was much meaningful difference between a CEA that was (e.g.) 90% OP/GV funded vs. 70% OP/GV funded. So I think the only practical way for that percentage to move enough to make a real difference would be both an increase in community contributions/control and CEA going on a fairly severe diet. 

As for EAIF, expected total grantmaking was ~$2.5MM for 2025. Even if a sizable fraction of that went to CEA, it would only be perhaps 1-2% of CEA's 2023 budget of $31.4MM.

I recall participating in some discussions here about identifying core infrastructure that should be prioritized for broad-based funding for democratic and/or epistemic reasons. Identifying items in the low millions for more independent funding seems more realistic than meaningful changes in CEA's funding base. The Forum strikes me as an obvious candidate, but a community-funded version would presumably need to run on a significantly leaner budget than I understand to be currently in place. 

You're not missing anything!

Cause Prioritization. Does It Ignore Political and Social Reality?

People should be factoring the risk of waste, fraud, or mismanagement, as well as the risk of adverse leadership changes, into their cost-effectiveness estimates. That being said, these kinds of risks exist for most potential altruistic projects one could envision. If the magnitude of the risk (and of the consequences of the fraud, etc.) is similar across the projects one is considering, then it's unlikely that this consideration will affect one's conclusion.

EA encourages donations where impact is highest, which often means low-income countries. But what happens when you live in one of those countries? Should I still prioritize problems elsewhere?

I think this is undertheorized in part because EA developed in, and remains focused on, high-income countries. It also developed in a very individualistic culture.

EA implicitly tells at least some members of the global top 1% that it's OK to stay rich as long as they give a meaningful amount of their income away. If it's OK for me to keep ~90% of my income for myself and my family, then it's hard for me to see how it wouldn't be OK for a lower-income community to keep virtually all of its resources for itself. So given that, I'd be pretty uncomfortable with there being an "EA party line" that moderately low-income communities should send any meaningful amount of their money away to even lower-income communities.

Maybe one could see people in lower-income areas giving money to even lower-income areas as behaving in a supererogatory fashion?

I would generally read EA materials through a lens of the main target audience being relatively well-off people in developed countries. That audience generally isn't going to have local knowledge of (often) smaller-scale, highly effective things to do in a lower-income country. Moreover, it's often not cost-effective to evaluate smaller projects thoroughly enough to recommend them over the tried-and-true projects that can absorb millions in funding. You, however, might have that kind of knowledge!

 

Amartya Sen (Development as Freedom) says that well-being isn’t just about cost-effectiveness, it’s about giving people the capability to sustain improvements in their lives. That makes me wonder: Are EA cause priorities too detached from local realities? Shouldn’t people closest to a problem have more say in solving it?

I think that's a fair question. However, in current EA global health & development work, the primary intended beneficiaries of classic GiveWell-style programs are children under age 5 who are at risk of dying from malaria or other illnesses. Someone else has to speak for them as a class, and I don't think toddlers can have well-being in the broader sense you describe. Moreover, the classic EA GH&D intervention is pretty narrow -- such as a few dollars for a bednet -- so EA efforts remove only a very small fraction of the resources spent on a child beneficiary's welfare from local control.

All that makes me somewhat less concerned about potential paternalism than I would be if EAs were commonly telling adult beneficiaries that they knew better about the beneficiary's own interest than said beneficiaries, or if EAs controlled a significant fraction of all charitable spending and/or all spending in developing countries.

Why is losing the AI arms race relevant to whether the mission as originally envisioned is doomed to fail?

It depends on what exactly "losing the AI arms race" means, which is in turn influenced by how big the advantages of being first (or one of the first) to AGI are. If the mission was to "advance digital intelligence," and it was widely understood that the mission involved building AGI and/or near-AGI, that would seem to imply that some sort of technological leadership position was a prerequisite to mission success. I agree that being first to AGI isn't particularly relevant to succeeding at the mission. But if they can't stay competitive with Google et al., it's questionable whether they can meaningfully achieve the goal of "advanc[ing] digital intelligence."

So for instance, if OpenAI's progress rate were to be reduced by X% due to the disadvantages in raising capital it faces on account of its non-profit structure, would that be enough to render it largely irrelevant as other actors quickly passed it and their lead grew with every passing month? I think a lot would depend on what X% is. A range of values seem plausible to me; as I mentioned in a different comment I just submitted, I suspect that fairly probative evidence on OpenAI's current ability to fundraise with its non-profit structure exists but is not yet public.

(I found the language you quoted going back to 2015, so it's probably a fair characterization of what OpenAI was telling donors and governmental agencies at the beginning.)

OpenAI might claim that preventing a for-profit conversion would destroy or fatally damage the company, but they do not have proof. [ . . . .] The fact is that this far OpenAI has raised huge amounts of money and been at the forefront of scaling with its current hybrid structure, and I think a court could rightfully be skeptical of claims without proof that this cannot continue. 

Yes, that's the counterargument. I submit that there is likely to be pretty relevant documentary and testimonial evidence on this point, but we don't know which way it would go. So I don't have any clear opinion on whether OpenAI's argument would work and/or how much these kinds of concerns would shape the scope of injunctive relief.

OpenAI agreed to terms that I would almost characterize as a poison pill: if the transformation doesn't move forward on time, the investors can get that $6.6B back. It may be that would-be investors were not willing to put up enough money to keep OpenAI going without a commitment to a refund if the non-profit board were not disempowered. As you mentioned, corporations exaggerate the detrimental impact of legal requirements they don't like all the time! But the statements and actions of multiple, independent third-party investors should be less tainted on this issue. If an inability to secure adequate funding as a non-profit is what this evidence points toward, I think that would be enough to establish a prima facie case and to require proponents to put up evidence of their own to rebut it.

So who will make that case? It's not clear Musk will assert that OpenAI can stay competitive while remaining a non-profit; his expression of a desire “[o]n behalf of a consortium of buyers,” “to acquire all assets . . . of OpenAI” for $97,375,000,000 (Order at 14 n.10) suggests he may not be inclined to advocate for OpenAI's ability to use its own assets to successfully advance its mission.

There's also the possibility that the court would show some deference on this question to the business judgment of OpenAI's independent board members if people like Altman and Brockman were screened off enough. It seems fairly clear to me that everyone understood early on there would need to be some for-profit elements in the mix, and so I think the non-conflicted board members may get some benefit of the doubt in figuring that out.

To the extent that evidence from the recent fundraising cycle supports the risk-of-fatal-damage theory, I suspect the relevance of fundraising success that occurred prior to the board controversy may be limited. I think it would be reasonable to ascribe lowered funder willingness to tolerate non-profit control to that controversy.

I would like to see more developed thinking in EA circles about what a plausible and appropriate remedy would be if Musk prevails here. The possibility of "some kind of middle ground here" was discussed on the podcast, and I'd keep those kinds of outcomes in mind if Musk were to prevail at trial.

In @Garrison's helpful writeup, he observes that: 

OpenAI's ability to attract enough investment to compete may be dependent on it being structured more like a typical company. The fact that it agreed to such onerous terms in the first place implies that it had little choice.

And I would guess that's going to be a key element of OpenAI's argument at trial. They may assert that subsequent developments establish that nonprofit development of AI is financially infeasible, that they are going to lose the AI arms race without massive cash infusions, and that obtaining those infusions while the nonprofit is in charge isn't viable. If the signs are clear enough that the mission as originally envisioned is doomed to fail, then switching to a backup mission doesn't necessarily seem unreasonable to me under general charitable-law principles. The district court didn't need to go there at this point, given that the existence of an actual contract or charitable trust between the parties is a threshold issue, and I am not seeing much on this point in the court's order.

To me, this is not only a defense for OpenAI but is also intertwined with the question of remedy. A permanent injunction is not awarded to a prevailing party as a matter of right. Rather:

According to well-established principles of equity, a plaintiff seeking a permanent injunction must satisfy a four-factor test before a court may grant such relief. A plaintiff must demonstrate: (1) that it has suffered an irreparable injury; (2) that remedies available at law, such as monetary damages, are inadequate to compensate for that injury; (3) that, considering the balance of hardships between the plaintiff and defendant, a remedy in equity is warranted; and (4) that the public interest would not be disserved by a permanent injunction.

eBay Inc. v. MercExchange, L.L.C., 547 U.S. 388 (2006) (U.S. Supreme Court decision).

The district court's discussion of the balance of equities focuses on the fact that "Altman and Brockman made foundational commitments foreswearing any intent to use OpenAI as a vehicle to enrich themselves." It's not hard to see how an injunction against payola for insiders would meet traditional equitable criteria. 

But an injunction that could pose a significant existential risk to OpenAI's viability could run into some serious problems on prong four. It's not likely that the district court would conclude the public interest affirmatively favors Meta, Google, xAI, or the like reaching AGI first as opposed to OpenAI. There is a national-security angle to the extent that the requested injunction might increase the risk of another country reaching AGI first. And to the extent that the cash from selling off OpenAI control would be going to charitable ends rather than lining Altman's pockets, it's going to be hard to argue that OpenAI's board has a fiduciary duty to just shut it all down and vanish ~$100B in charitable assets into thin air. 

And to put it in more EA-coded language: the base rate of courts imploding massive businesses (or charities) is not exactly high. One example in which something like this did happen was the breakup of the Bell System in 1982, but it wasn't quick, the evidence of antitrust violations was massive, and there just wasn't any other plausible remedy. Another would be the breakup of Standard Oil in 1911, again a near-monopoly with massive antitrust problems.

If OpenAI is practically enjoined from raising the capital needed to achieve its goals, the usual responsible course for a charity that can no longer effectively function is to sell off its assets and distribute the proceeds to other non-profits. Think of a non-profit set up to run a small rural hospital that is no longer viable on its own. It might prefer to merge with another non-profit, but selling the whole hospital to a for-profit chain is usually the next-best option, with selling the land and equipment as a backup. In a certain light, how different might a sale be from what OpenAI is proposing to do? I'd want to think more about that . . .

With Musk as plaintiff, there are also some potential concerns on prong three relating to laches (the idea that Musk slept on his rights and prejudiced OpenAI-related parties as a result). Although I'm not sure whether the interests of OpenAI investors and employees (other than Altman and Brockman) with equity-like interests would be analyzed under prong three or prong four, it does seem that Musk sat around without asserting his rights while others invested cash and/or sweat equity in OpenAI. In contrast, "[t]he general principle is, that laches is not imputable to the government . . . ." United States v. Kirkpatrick, 22 U.S. (9 Wheat.) 720, 735 (1824). I predict that any relief granted to Musk will need to take account of these third-party interests, especially because those investments were made while Musk slept on his rights. The ability to avoid a laches argument is another advantage of a governmental litigant as opposed to Musk (although the third-party interests would still have to be considered).

All that is to say: while "this is really important and what OpenAI wants is bad" may be an adequate basis for public advocacy for now, I think there will at some point need to be a judicially and practically viable plan for what appropriate relief looks like. Neither side in the litigation would be a credible messenger on this point, as OpenAI is compromised and its competitor Musk would like to pick off assets for his own profit and power-seeking purposes. I think that's one of the places where savvy non-party advocacy could make a difference.

Would people rather see OpenAI sold off to whatever non-insider bidder the board determines would be best, possibly with some judicial veto of a particularly bad choice? Would people prefer that a transition of some sort go forward, subject to imposition of some sort of hobbles that would slow OpenAI down and require some safety and ethics safeguards? These are the sorts of questions on which I think a court would be more likely to defer to the United States as an amicus and/or to the state AGs, and would be more likely to listen to subject-matter experts and advocacy groups who sought amicus status.

That quotation is from an order by then-Chief Judge Posner of the Seventh Circuit denying leave to file an amicus brief on such a basis. Judge Posner was, and the Seventh Circuit is, more of a stickler for this sort of thing (and both were/are more likely than other courts to call lawyers out for not following the rules). Other courts are less likely to actually kick an amicus brief -- that requires more work than just ignoring it! -- but I think Judge Posner's views would enjoy general support among the federal judiciary.

There's a literature on whether amicus briefs are in general helpful vs. being a waste of money, although it mostly focuses on the Supreme Court (e.g., this article surveys some prior work and reflects interviews with former clerks, but is a bit dated). I don't see an amicus brief on the preliminary injunction here hitting many of the notes the former clerks identified as markers of value in that article. Whether there was a charitable trust between Musk and OpenAI isn't legally esoteric, there's no special perspective the amicus can bring to bear on that question, and so on. 

You're right insofar as amicus briefs are common at the Supreme Court level, although they are not that common in the courts of appeals (at least when I clerked) and I think they are even less common at the district court level in comparison to the number of significant cases. So I would not view their relative prevalence at the Supreme Court level as strong information in either direction on how effective an amicus brief might be here.

Judges are busy people; if a would-be amicus seeks to file an unhelpful amicus brief at one stage of the litigation, it's pretty unlikely the judge is going to even touch another brief from that amicus at a later stage. If I were a would-be amicus, I would be inclined to wait until I thought I had something different enough than the parties to say -- or thought that I would be seen as a more credible messenger than the parties on a topic directly relevant to a pending decision -- before using my shot.

[warning: speculative]

As potential plaintiff: I get the sense that OP & GV are more professionally run than Elon Musk's charitable efforts. When handing out this kind of money for this kind of project, I'd normally expect them to have negotiated terms with the grantee and memorialized them in a grant agreement. There's a good chance that agreement would have a merger clause, which confirms that (e.g.) there are no oral agreements or side agreements. Attorneys regularly use these clauses to prevent either side from getting out of, or going beyond, the negotiated final written agreement. Even if there isn't a merger clause, the presence of a comprehensive grant agreement would likely make it harder for the donor to show that a trust had been created, that the donor had a reversionary interest, and so on, if the agreement didn't say those things.

As potential source of evidence: I'd at least consider the possibility that people associated with OP and/or GV could be witnesses at trial or could provide documentary evidence -- e.g., if there is a dispute over what representations OpenAI was making to major donors to secure funding. That might counsel keeping quiet at this juncture, particularly considering the next point.

As a potential amicus: I expect the court would either reject or ignore an amicus filing at this stage in the process. The court has jurisdiction over a claim by Elon Musk and xAI that OpenAI violated antitrust law, violated a contract or trust with Musk under California charitable law, etc. If OP/GV tried to submit an amicus brief on most of the actually relevant legal issues on a preliminary injunction, the court would likely see this as an improper attempt to effectively buy additional pages of argument for Musk & xAI.[1] To the extent that the amicus brief was about a legally peripheral issue -- like AI as a GCR -- it would likely be read by a law clerk (a bright recent graduate) who would tell the judge something like "This foundation submitted an amicus brief arguing that AI may go rogue and kill us all. Doesn't seem relevant to the issues in this case." 

Note that I think there is a potential role for amici later in this case, but the preliminary-injunction stage was not it.
 

  1. ^

    See Ryan v. Commodity Futures Trading Com’n, 125 F.3d 1062, 1063 (7th Cir. 1997) (Posner, C.J., in chambers) ("The vast majority of amicus curiae briefs are filed by allies of litigants and duplicate the arguments made in the litigants’ briefs, in effect merely extending the length of the litigant’s brief. Such amicus briefs should not be allowed. They are an abuse. The term 'amicus curiae' means friend of the court, not friend of a party."). In my experience, these sorts of amicus briefs do have a place when the core legal issue is of broad importance but the litigant lacks either the means or incentive to put forth their best argument.
