I think it’s likely that institutional effective altruism was a but-for cause of FTX’s existence[1] and therefore that it may have caused about $8B in economic damage due to FTX’s fraud (as well as potentially causing permanent damage to the reputation of effective altruism and longtermism as ideas). This example makes me feel it’s plausible that effective altruist community-building activities could be net-negative in impact,[2] and I wanted to explore some conjectures about what that plausibility would entail.
I recognize this is an emotionally charged issue, and to be clear, my claim is not “EA community-building has been net-negative” but rather that this is plausibly the case (i.e. something like >10% likely). I don’t have strong certainty that I’m right about that, and I think a public case that disproved my plausibility claim would be quite valuable. I should also say that I have personally and professionally benefitted greatly from EA community-building efforts (most saliently from efforts connected to the Center for Effective Altruism), and I sincerely appreciate and am indebted to that work.
Some claims that are related (and perhaps vaguely isomorphic) to the above, which I think are probably true but feel less strongly about, are:
- To date, there has been a strong presumption among EAs that activities likely to significantly increase the number of people who explicitly identify as effective altruists (or otherwise increase their identification with the EA movement) are worth funding by default. That presumption should be weakened.
- Social movements are likely to overvalue efforts to increase the power of their movement and undervalue their goals actually being accomplished, and EA is not immune to this failure mode.
- Leaders within social movements are likely to (consciously or unconsciously) overvalue measures that increase the leadership’s own control and influence and undervalue measures that reduce it, which is a trap EA community-building efforts may have unintentionally fallen into.
- Pre-FTX, there was a reasonable assumption that expanding the EA movement was one of the most effective things a person could do, and the FTX catastrophe should significantly update our attitude towards that assumption.
- FTX should significantly update us on principles and strategies for EA community/movement-building and institutional structure, and there should be more public discourse on what such updates might be.
- EA is obligated to undertake institutional reforms to minimize the risk of creating an FTX-like problem in the future.
Here are some conjectures I’d make for potential implications of believing my plausibility claim:
- Make Impact Targets Public: Insofar as new evidence has emerged about the impact of EA community building (and/or insofar as incentives towards movement-building may map imperfectly onto real-world impact), it is more important to make public, numerical estimates of the goals of particular community-building grants/projects going forward and to attempt public estimation of actual impact (and connection to real-world ends) of at least some specific grants/projects conducted to date. Outside of GiveWell, I think this is something EA institutions (my own included) should be better about in general, but I think the case is particularly strong in the community-building context given the above.
- Separate Accounting for Community Building vs. Front-Line Spending: I have argued in the past that meta-level and object-level spending by EAs should be accounted for separately in some sense. I admit this idea is, at the moment, under-specified, but one basic example would be for EAs/EA grantmakers to report their “front-line” and “meta” (or “community building”) donation amounts as separate numbers (e.g. “I gave X to charity this year in total, of which Y was to EA front-line work, Z to EA community building, and W to non-EA causes”). I think there may be intelligent principles to develop about how the amounts of EA front-line funding and meta-level funding should relate to one another, but I have less of a sense of what those principles might be than a belief that starting to account for them as separate types of activities in separate categories will be productive.
- Integrate Future Community Building More Closely with Front-Line Work: Insofar as it makes sense to have less of a default presumption towards the value of community building, a way of de-risking community-building activities is to link them more closely to activities where the case for direct impact is stronger. For example, I personally hope for some of my kidney donation, challenge trial recruitment, and Rikers Debate Project work to have significant EA community-building upshots, even though that meta level is not those projects’ main goal or the metric I use to evaluate them. For what it’s worth, I think pursuing “double effect” strategies (e.g. projects that simultaneously have near-termist and longtermist targets, or animal welfare and forecasting-capacity targets) is underrated in current EA thinking. I also think connecting EA recruitment to direct work may mitigate certain risks of community building (e.g. the risks of creating an EA apparatchik class, recruiting “EAs” not sufficiently invested in having an actual impact, or competing with direct work for talent).
- Implement Carla Zoe Cremer’s Recommendations: Maybe I’m biased because we’re quoted together in some of the same articles, but I’ve honestly been pretty surprised there has not been more public EA discussion post-FTX of adopting a number of Cremer's proposed institutional reforms, many of which seem to me obviously worth doing (e.g. whistleblowing protections). Some (such as democratizing funding decisions) are more complicated to implement, and I acknowledge the concern that these procedural measures create friction that could reduce the efficacy of EA organizations, but I think (a) minimizing unnecessary burden is a design challenge likely to yield fairly successful solutions, and (b) FTX clearly strengthens the arguments in favor of bearing the cost of that friction. Also, insofar as she'd be willing (and some form of significant compensation is clearly merited), integrally engaging Cremer in whatever post-FTX EA institutional reform process emerges would be both directly helpful and a public show of good-faith efforts at rectification.
- Consideration of a “Pulse” Approach to Funding EA Community Building: It may be the case that large EA funders should do time-limited pulses of funding towards EA community-building goals or projects, with the intention of building institutions that can sustain themselves on separate funds in the future. The logic of this is: (a) insofar as EAs may be bad judges of the value of our own community building, requiring projects to appeal to external funders helps check that bias, and (b) creating EA community institutions that must be attractive to outsiders to survive may avoid certain epistemic and political risks inherent in being too insular.
- EA as a Method and not a Result: The concept of effective altruism (rationally attempting to do good) has broad consensus but particular conceptions may be parochial or clash with one another.[3] A “thinner” effective altruism that emphasizes EA as an idea akin to the scientific method rather than a totalizing identity or community may be less vulnerable to FTX-like mistakes.
- Develop Better Logic for Weighing Harms Caused by EA against EA Benefits: An EA logic that assumes resources available to EAs will be spent at (say) GiveWell benefit levels (which I take to be roughly $100/DALY or equivalent) but that resources available to others are spent at (say) US government valuations of a statistical life (I think roughly $100,000/DALY) seems to justify significant risks of incurring very sizable harms to the public if they are expected to yield additional resources for EA. Clearly, EA's obligations to avoid direct harms (or certain types of direct harms) are at least somewhat asymmetric to obligations/permissions to generate benefits. But at the same time, essentially any causal act will have some possibility of generating harm (which in the case of systemic change efforts can be quite significant), so a precautionary principle designed in an overly simplistic way would kneecap the ability of EAs to make the world better. I don't know the right answer to this challenge, but clearly "defer to common sense morality" has proven insufficient, and I think more intellectual work should be done.
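To make the worry in that last bullet concrete, here is a minimal sketch of the naive expected-value logic it describes. The $100/DALY and $100,000/DALY figures come from the bullet itself; the probability and dollar amounts in the example gamble are hypothetical numbers chosen purely for illustration, not estimates of any actual decision.

```python
# Illustrative sketch only: a naive expected-value comparison using the rough
# figures from the bullet above (~$100/DALY for EA spending vs. ~$100,000/DALY
# for resources in the general public's hands). The gamble parameters below
# are hypothetical values chosen for illustration.

EA_COST_PER_DALY = 100          # assumed $ needed to produce one DALY via EA spending
PUBLIC_COST_PER_DALY = 100_000  # assumed $ per DALY for resources held by the public

def naive_ev_in_dalys(p_success: float, ea_gain_usd: float, public_harm_usd: float) -> float:
    """Expected DALYs from a risky fundraising gamble under the naive logic:
    with probability p_success EA gains ea_gain_usd (spent at EA effectiveness);
    otherwise the public loses public_harm_usd (valued at the public rate)."""
    expected_benefit = p_success * ea_gain_usd / EA_COST_PER_DALY
    expected_harm = (1 - p_success) * public_harm_usd / PUBLIC_COST_PER_DALY
    return expected_benefit - expected_harm

# A 10% chance of raising $1B for EA vs. a 90% chance of destroying $8B of the
# public's money still comes out "positive" under this logic (~+928,000 DALYs).
print(naive_ev_in_dalys(p_success=0.10, ea_gain_usd=1e9, public_harm_usd=8e9))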
I'm not at all certain about the conjectures/claims above, but I think it's important that EA deals with the intellectual implications of the FTX crisis, so I hope they can provoke a useful discussion.
- ^
Am basing this on reporting in Semafor and the New Yorker. To be clear, I'm not saying that once you assume Alameda/FTX's existence, the ideology of effective altruism necessarily made it more likely that those entities would commit fraud. But I do think it is unlikely they would have existed in the first place without the support of institutional EA.
- ^
To be clear, my claim is not "the impact of the FTX fraud incident plausibly outweighs benefits of EA community building efforts to date" (though that may be true and would be useful to publicly disprove if possible) but that the FTX fraud should demonstrate there are a range of harms we may have missed (which collectively could plausibly outweigh benefits) and that "investing in EA community building is self-evidently good" is a claim that needs to be reexamined.
- ^
I find the distinction between concept and conception to be helpful here. Effective altruism as a concept is broadly unobjectionable, but particular conceptions of what effective altruism means or ought to entail involve thicker descriptions that can be subject to error or clash with one another. For example, is extending present-day human lifespans default good because human existence is generally valuable, or bad because doing so tends to create greater animal suffering that outweighs the human satisfaction in the aggregate? I think people who consider the principles of effective altruism important to their thinking can reasonably come down on both sides of that question (though I, and I imagine the vast majority of EAs, believe the former). Moreover, efforts to build a singular EA community around specific conceptions of effective altruism will almost certainly exclude other conceptions, and the friction of doing so may create political dynamics (and power-seeking behavior) that can lead to recklessness or other problems.
Yeah. I have strong feelings that social norms or norms of discourse should never disincentivize trying to do more than the very minimum one can get away with as an apathetic person or as a jerk. For example, I'm annoyed when people punish others for honesty in cases where it would have been easy to tell a lie and look better. Likewise, I find it unfair if having the stated goal to make the future better for all sentient beings is somehow taken to imply "Oh, you care for the future of all humans, and even animals? That's suspicious – we're definitely going to apply extra scrutiny towards you." Meanwhile, AI capabilities companies continue to scale up compute and most of the world is busy discussing soccer or whatnot. Yet somehow, "Are EAs following democratic processes, and why does their funding come from very few sources?" is made into a bigger issue than widespread apathy or the extent to which civilization might be acutely at risk.
EAs who are serious about their stated goals have the most incentive of anyone to help the EA movement get its act together. The idea that "it's important to have good institutions" is something EA owes to outsiders is what seems weird to me. Doesn't this framing kind of suggest that EAs couldn't motivate themselves to try their best if it weren't for "institutional safeguards"? What a depressing view of humans, that they can only act according to their stated ideals if they're watched at every step and have to justify themselves to critics!
EAs have discussions about governance issues EA-internally, too. It's possible (in theory) that EA has as many blindspots as Zoe thinks, but it's also possible that Zoe is wrong (or maybe it's something in between). Either way, I don't think anyone in EA, nor "EA" as a movement, has any obligation to engage in great detail with Zoe's criticisms if they don't think that's useful.* (Not to say that they don't consider the criticism useful – my impression is that there are EAs on both sides, and that's fine!)
If a lot of people agree with Zoe's criticism, that creates more social pressure to answer her points. That's probably a decent mechanism for determining what an "appropriate" level of minimally-mandatory engagement should be – though it depends a bit on whether the social pressure comes from well-intentioned people who are reasonably informed about the issues or whether some kind of "let's all pile on these stupid EAs" dynamic emerges. (So far, the dynamics seem healthy to me, but if EA keeps getting trashed in the media, then this could change.)
*(I guess if someone's impression of EA was "group of people who want to turn all available resources into happiness simulations regardless of what existing people want for their future," then it would be reasonable for them to go like, "wtf, if that's your movement's plan, I'm concerned!" However, that would be a strawman impression of EA. Most EAs endorse moral views according to which individual preferences matter and "eudaimonia" is basically "everyone gets what they most want." Besides, even the few hedonist utilitarians [or negative utilitarians] within EA think preferences matter and argue for being nice to others with different views.)
I don't disagree with this part. I definitely think it's wise for EAs to engage with critics, especially thoughtful critics, of whom I consider Zoe one of the best examples, despite disagreeing with probably at least 50% of her specific suggestions.
While I did use the word "immoral," I was only commenting on the framing Zoe/Carla used in that one particular paragraph I quoted. I definitely wasn't describing her overall behavior!
In case you want my opinion: I am a bit concerned that her rhetoric is often a bit "sensationalist" in a nuance-lacking way, and that this makes EA look bad to journalists in a way I consider uncalled for. But I wouldn't label that "acting in bad faith"; far from it!
Yeah, I agree with all of that. Still, in the end, it's up to EAs themselves to decide which criticisms to engage with at length and where it maybe isn't so productive.
In the books (or the movies), this part is made easy by having a kind and wise old wizard – who wouldn't consider going with Gandalf's advice a defensible decision procedure?
In reality, "who gets to wield power" is more complicated. But one important point in my original comment was that EA doesn't even have that much power, and no ring (nor anything analogous to it – that's a place where the analogy breaks). So, it's a bit weird to subject EA to as much scrutiny as would be warranted if they were about to enshrine their views into the constitution of a world government. All longtermist EA is really trying to do right now is trying to ensure that people won't be dead soon so that there'll be the option to talk governance and so on later on. (BTW, I do expect EAs to write up proposals for visions of AI-aided ideal governance at some point. I think that's good to have and good to discuss. I don't see it as the main priority right now because EAs haven't yet made any massive bids for power in the world. Besides, it's not like whatever the default would otherwise be has much justification. And you could even argue that EAs have done the most so far out of any group promoting discourse about important issues related to fair governance of the future.)