
I'm linking and excerpting a submission to the EA criticism contest published by a pseudonymous author on August 31, 2022 (i.e. before the collapse of FTX).

The submission did not win a prize, but was highlighted by a panelist:

I was unsure about including this post, but I think this post highlights an important risk of the EA community receiving a significant share of its funding from a few sources, both for internal community epistemics/culture considerations as well as for external-facing and movement-building considerations. I don't agree with all of the object-level claims, but I think these issues are important to highlight and plausibly relevant outside of the specific case of SBF / crypto. That it wasn't already on the forum (afaict) also contributed to its inclusion here.

Due to concerns about copyright, I'm only excerpting the post, apart from the summary and disclaimer, but I recommend reading the whole piece.


Sam Bankman-Fried, founder of the cryptocurrency exchange FTX, is a major donor to the Effective Altruism ecosystem and has pledged to eventually donate his entire fortune to causes aligned with Effective Altruism.

By relying heavily on ultra-wealthy individuals like Sam Bankman-Fried for funding, the Effective Altruism community is incentivized to accept political stances and moral judgments based on their alignment with the interests of its wealthy donors, instead of relying on a careful and rational examination of the quality and merits of these ideas. Yet, the Effective Altruism community does not appear to recognize that this creates potential conflicts with its stated mission of doing the most good by adhering to high standards of rationality and critical thought.

In practice, Sam Bankman-Fried has enjoyed highly favourable coverage from 80,000 Hours, an important actor in the Effective Altruism ecosystem. Given his donations to Effective Altruism, 80,000 Hours is, almost by definition, in a conflict of interest when it comes to communicating about Sam Bankman-Fried and his professional activities. This raises obvious questions regarding the trustworthiness of 80,000 Hours’ coverage of Sam Bankman-Fried and of topics linked to his interests (quantitative trading, cryptocurrency, the FTX firm…).

In this post, I argue that the Effective Altruism movement has failed to identify and publicize its own potential conflicts of interest. This failure reflects poorly on the quality of the standards the Effective Altruism movement holds itself to. Therefore, I invite outsiders and Effective Altruists alike to keep a healthy level of skepticism in mind when examining areas of the discourse and action of the Effective Altruism community that are susceptible to being affected by incentives conflicting with its stated mission. These incentives are not just financial in nature; they can also be linked to influence, prestige, or even emerge from personal friendships or other social dynamics. The Effective Altruism movement is not above being influenced by such incentives, and it seems urgent that it act to minimize conflicts of interest.


Introduction — Cryptocurrency is not neutral (neither morally nor politically)

... Cryptocurrency is not simply an attempt to provide a set of technical solutions to improve existing currency systems. It is an attempt to replace existing monetary institutions with a new political system; it is therefore political at its core...

My point here is not to debate the virtues of the societal model promoted by cryptocurrency actors, but rather to convince readers unfamiliar with the cryptocurrency industry that it is deeply infused with political ideology and is certainly not a purely technological response to a technical problem. The cryptocurrency response to monetary policy questions manifests a specific worldview accompanied by a specific set of moral values.

EA’s reliance on funding from the cryptocurrency industry

...These incentives are not just monetary. As the crypto industry grows and SBF gains in wealth, influence and prestige, EA benefits by receiving more funding but also by extending its own area of influence and its prestige. Conversely, attacks on the image of SBF, FTX and even crypto as a whole carry the risk of tarnishing EA’s reputation. Were SBF to be involved in an ethical or legal scandal (whether in his personal or professional life), the EA ecosystem would inevitably be damaged as well. As a result, the EA community has an incentive to protect SBF’s reputation, to counter criticism of him from the outside and to stifle criticism from inside the community (this incentive can act between EA members, with voiced criticisms being ignored, downplayed or treated with suspicion, but it can also act via self-censorship, conscious or not).

How EA views cryptocurrency

...Given that the adoption of cryptocurrency has massive political implications for the future of our societies and carries with it very strong ideological foundations, it should, at first glance, seem slightly surprising that the EA community does not visibly engage critically with this topic on a deeper political level. But this is not as surprising when considering EA’s reliance on crypto wealth for funding. As explained above, the EA community is powerfully primed to view cryptocurrency positively, if only by the direct financial benefits it collects from the industry. Moreover, the incentives at play are likely effective inhibitors of contrarian views (notably by means of self-censorship)...

EA’s ineffective mechanisms to protect itself against conflicts of interest

... EA claims to aim to do “the most good” using the tools of rationality and critical thinking. So what does the EA ecosystem do to mitigate the risk that EA members act according to bias-inducing incentives?

As far as I can tell, the systemic safeguards against conflicts of interest in the EA ecosystem are very limited...

These two main forms of promoting debate (internal forums and invitations to criticism) are not nearly sufficient as mechanisms to prevent the establishment of conflicts of interest. Yet, they appear to be the only ones the Effective Altruism community relies on.

Conflicts of interest need to be addressed whether they have real effects or not

...Fundamentally, the problem I want to highlight is not even whether the EA ecosystem is actually influenced by the incentives it is subject to. These incentives exist, and they are left unchecked. This is the primary issue.

Whether these incentives have actual effects on EA is almost secondary to the fact that EA seems unable to recognize, publicize and mitigate its engagement in conflicts of interest... For EA, incentives like the ones related to SBF are in direct conflict with EA’s stated mission of “using evidence and reason to figure out how to benefit others as much as possible” (quote from the Centre for Effective Altruism).

What should EA do?

It appears clear that EA does not consider itself to be at any real risk of falling prey to conflicts of interest. This seems to be the only way to explain the blind spot EA suffers from when it comes to recognizing the incentives associated with relying on donations from tech billionaires, as obvious as these may be in the particular case of donations by SBF.

Identifying and publicizing obvious sources of potential conflicts of interest

A necessary (but not sufficient) first step would be to acknowledge existing incentives and recognize their potential effects...

I will briefly address a counter-argument that could be made along the lines of: “SBF’s profile on 80,000 Hours clearly mentions SBF’s contributions to EA-aligned causes. Therefore EA is transparent about its funding, therefore EA does not suffer from issues of undisclosed conflicts of interest.” Indeed, SBF’s contributions are mentioned at length in EA publications. But I have seen no instance where this contribution was flagged as a source of potential conflict of interest. On the contrary, SBF is framed as a prime example of Earning to Give; he is presented as an example to follow, a person to admire, to take inspiration from and to be grateful to, which does nothing to warn against potential conflicts of interest.

Going further

It seems crucial that EA, if it values independence of thought and critical thinking, should engage in an in-depth examination of the role that incentives are allowed to play in the organization...

Publicizing existing conflicts of interest achieves little if not accompanied by a significant effort to understand how conflicts of interest are allowed to appear, how to minimize their potential effects, how to strengthen counterweights within the organization to foster accountability, and how to prevent EA from becoming more conflicted and instead reduce the number and strength of existing conflicts of interest.

There is a clear trade-off between 1) expanding the available resources of a non-profit organization and 2) protecting said organization from potential conflicts of interest. My opinion is that EA as a community should probably think hard about where it stands on this trade-off...


...It would be pointless to aspire to building an organization in which conflicting incentives are completely eliminated. But it would be equally illusory to think that individuals can consciously decide to free themselves from the biases associated with incentives of all kinds. Systemic safeguards are essential, all the more so when an organization aims to hold itself to high standards of rationality. Hopefully, the EA movement will remember this sooner rather than later.



This post deals with conflicts of interest, so it is only natural that I should be particularly transparent regarding the incentives that played into its writing.

First, I did not receive any funding for writing this post, and I have no affiliation to the EA movement. 

I wrote this post aiming to submit it to EA’s criticism contest, and was thus incentivized to write an effective critique of EA, but one that would not be too antagonizing to the contest’s jury (which I believe is mainly composed of EA members). I did my best to resist this incentive and not to water down my thesis too much.
By making a pseudonymous submission, I am shielding myself from the fear of reputational damage, which could otherwise have been a powerful incentive to self-censor.






I think this criticism can be extended beyond cryptocurrency, to social media. Specifically, EA is heavily reliant on funding from Dustin Moskovitz, co-founder of Facebook. (I'm fairly ignorant as to the details of Moskovitz's finances; I believe he still owns shares in Meta and so has at least some stake in the company, but I could be off-base here.)

Social media is criticised for a lot of things, but here I'm just going to link the following article, because it's recent and because it seems topical to a lot of EA global health/development stuff: Meta faces $1.6bn lawsuit over Facebook posts inciting violence in Tigray war

There's a story here that goes 'Man who's made billions in technology that significantly damages social and political institutions, including spreading misinformation about elections, covid and vaccines, and allowing people to spread abuse and incite violence, now wants to use that money for the good of society'. And to the degree that you think that's true, you might think that the harms done by Meta outweigh the good done by Open Philanthropy.


There's a critique of EA that goes 'EA is more focused on individual donations than systemic change'. I used to think this was off-base, because there are plenty of EAs who want to do system-changing things, like advocating for animal welfare laws or working in government policy.

Now I read this criticism more like:

"By relying on one or two extremely rich donors for a large portion of EA funding, EA is less likely to advocate for the kind of systemic change that would be harmful to the financial interests of these donors",

or (and I'm thinking of crypto and social media here):

"By relying on one or two extremely rich donors who've made their fortunes in 'disruptive' technology,  EA is less likely to be critical of the harms that these technologies do to the world". 

And I actually think that's quite a valid criticism.

I don't believe Dustin has been involved with Facebook for many years; he's several years into running his new startup (Asana), and I doubt there's any real obligation to like Facebook arising from that (at least, I do not perceive there to be, and given my impression of Dustin from Twitter it would be pretty surprising to me if others did).

I'm not that concerned about this.

First off, it is very hard to find a funding source that doesn't create a conflict of interest. With ten megadonors in ten different fields, the conflicts would not be as acute but would be more broadly distributed. Government support brings conflicts. Relying on an army of small, mildly engaged donors creates "conflicts" of a different sort -- there is a strong motivation to focus on what looks good and will play to a mildly-engaged donor base rather than what does good.

The obvious risk for conflict of interest is that the money impedes or distorts the movement's message. It's generally not a meaningful problem for a kidney-disease charity to have financial entanglement with a social media company; it would very much be a problem for the American Academy of Pediatrics. It seems relatively unlikely that, applying the principles of EA, being critical of the harm social media creates and/or advocating for systemic change that would specifically or disproportionately tank Meta/Asana stock would be priority cause areas.

It seems more likely to me that faithful application of EA principles would lead down a path that is contrary to the interests of very wealthy donors more generally. But that is a hard problem to get around for a movement that wants to have great impact and needs loads of funding to do it.

I think conflict of interest is what has led existential risk from AI to rise to being the most important issue in EA, even though it's based on dubious reasoning and extrapolations that many people at the forefront of AI development don't think make sense from a capabilities perspective. It's been sufficient for senior people in EA to take it seriously, and since these same folks control resource allocation, it ends up driving a lot of what the community thinks. This bias clearly reveals itself when talking to some EAs who are terrified of AI but don't know anything about how it works, nor have any idea what actual AI researchers think or what obstacles they are trying to overcome. It seems like 95% of the people in EA who are terrified about existential risk from AI just defer to others who speak about things they don't really comprehend, but because those people control the money and status in the community, they assume they can be trusted. How can the folks who are funding AI safety research be considered objective when they are the same folks who are considered the producers of authoritative content on AI safety, and who have familial or intimate relationships with top AI researchers? I don't question the sincerity of these folks in their beliefs, but given the nature and structure of the situation, I cannot trust that EA can come to the correct conclusions on this specific topic. I also think this is a mess that cannot be untangled and will have to run its course until EA no longer has money to burn.

The same conflict of interest argument applies to ML engineers who have every reason to argue that their work isn’t leading to the potential death of everyone on Earth.

And also to people who are significantly invested in other cause areas and feel it diminishes the importance of their work.

Unfortunately, I think the conflict of interest line of thought ends up being far more expansive in a way that impinges basically everyone.

It's a lot more direct with AI though. AI safety org people and EA org people are often the same, or are personal friends, or at least know each other in some capacity. This undeniably grants them advantages compared to some far-off animal rights org. Social ties give their ideas more access, more consideration, and less temptation to be written off as crazy. If someone found decisive proof that AI safety was nonsense, I'm sure they would publish it, but they might be sad about putting some of their personal friends out of jobs, making them look foolish, etc. I think this bias seeps, at least a little bit, into AI safety considerations.

There is a difference. ML engineers actually have to follow up their claims by making products that work and earn revenue, or by successfully convincing a VC to keep funding their ventures. The source of funding and the ones appealing for funding have different interests. In this regard, ML engineers have more of an incentive to upsell the capabilities of their products than to downplay them. It's still possible for someone to burn their money funding something that won't pan out, and this is the risk investors have to take (I don't know of any top VCs as bullish on AI capabilities, on timelines as aggressive, as EA folks). In the case of AI safety, some of the folks in charge of the funding are also the loudest advocates for the cause, as well as some of the leading researchers. The source of funding and the ones utilizing the funding are commingled in a way that leads to a conflict of interest that seems quite a bit more problematic than I've noticed in other cause areas. But if such serious conflicts do exist elsewhere, then those too are a problem and not an excuse to ignore conflicts of interest.

Not really? Yes, I do think that EA probably has a conflict of interest re AI, though I don't understand why actually having capabilities is a defense to the criticism that they are incentivized to ignore the risk. This is a symmetrical claim: it admittedly does teach us to lower or raise our credences in things we have a stake in, but there's no asymmetry.

I think they would have to believe there is a risk, but they are actually just trying to figure out how to make headway on basic issues. The point of my comment was not to argue about AI risk, since I think that is a waste of time: those who believe in it seem to hold it more like an ideological/religious belief, and I don't think there is any amount of argumentation or evidence that can convince them (there is also a lot of material online where top researchers are interviewed and talk about some of these issues, for anyone actually interested in what the state of AI is outside the EA bubble). My intention was just to note that there is a conflict of interest in this particular domain that is having a lot of influence in the community, and I doubt there will be much done about it.

For people who haven't been around for a while, the history of AI x-risk as a cause area is actually one of a long struggle for legitimacy and significant funding.  20 years ago, only Eliezer Yudkowsky and a handful of other people even recognised there was a problem. 15 years ago, there was a whole grass-roots movement of people (centred around the Overcoming Bias and LessWrong websites) earning to give to support MIRI (then the Singularity Institute), as they were chronically underfunded. 10 years ago, Holden Karnofsky was arguing against it being a big problem. The fact that AI x-risk now has a lot of legitimacy and funding is a result of the arguments for taking it seriously winning many long and hard battles. Recently, huge prizes were announced for arguments that it wasn't a (big) risk. Before their cancellation, not much was produced in the way of good arguments imo. OpenPhil are now planning on running a similar competition. If there really are great arguments against AI x-risk being a thing, then they should come to light in response.

For those who want to deepen their knowledge of AI x-risk, I recommend reading the AGI Safety Fundamentals syllabus. Or better yet, signing up for the next iteration of the course (deadline to apply is 5th Jan).