Cross-posted from my blog.
Contrary to my carefully crafted brand as a weak nerd, I go to a local CrossFit gym a few times a week. Every year, the gym raises funds for a scholarship for teens from lower-income families to attend their summer camp program. I don’t know how many CrossFit-interested low-income teens there are in my small town, but I’d guess there are perhaps 2 of them who would benefit from the scholarship. After all, CrossFit is pretty niche, and the town is small.
Helping youngsters get swole in the Pacific Northwest is not exactly as cost-effective as preventing malaria in Malawi. But I notice I feel drawn to supporting the scholarship anyway. Every time it pops into my head I think, “My money could fully solve this problem”. The camp only costs a few hundred dollars per kid, and if there are just 2 kids who need support, I could give $500 and there would no longer be teenagers in my town who want to go to a CrossFit summer camp but can’t. Thanks to me, the hero, this problem would be entirely solved. 100%.
That is not how most nonprofit work feels to me.
You are only ever making small dents in important problems
I want to work on big problems. Global poverty. Malaria. Everyone not suddenly dying. But if I’m honest, what I really want is to solve those problems. Me, personally, solve them. This is a continued source of frustration and sadness because I absolutely cannot solve those problems.
Consider what else my $500 CrossFit scholarship might do:
* I want to save lives, and USAID suddenly stops giving $7 billion a year to PEPFAR. So I give $500 to the Rapid Response Fund. My donation solves 0.000001% of the problem and I feel like I have failed.
* I want to solve climate change, and getting to net zero will require stopping or removing emissions of 1,500 billion tons of carbon dioxide. I give $500 to a policy nonprofit that reduces emissions, in expectation, by 50 tons. My donation solves 0.000000003% of the problem and I feel like I have failed.
This post really does not match my perspective.
Some parts I especially disagree with:
I'm really annoyed by this line of reasoning, where once you give one dollar to the AMF you can't give anything else to any cause area that critics might not like, otherwise you're killing children.
If Moskovitz were spending it on boats instead, would that be seen as OK? Should we criticize everyone who spends 1% of their net worth on something we don't like? Was criminal justice reform ever recommended to small donors? Was it ever a GiveWell top charity?
This part enrages me:
Please, I encourage people to read the "fluff profile"! Dylan Matthews also literally wrote:
At the top of another article. Is Matthews' conveyance of enmity and discomfort feigned? I doubt it.
There are thousands of donors in EA, and dozens of organizations doing all sorts of independent stuff! My favorite example is Charity Entrepreneurship and their incubated charities. They are doing really amazing stuff and saving thousands of lives; I don't see them (and all 20 of their charities) going away any time soon.
[Sorry for ranting a bit from now on]
This kind of "altruism doesn't exist" reasoning seems really common and really annoys me. I'm not donating 70% of my income to "optimize taxes" or "manage my reputation", Giving What We Can just reached 8000 pledgers (I know not all pledgers actually end up giving >10% for their whole lives, but many do!).
For many people, EA is still a place to decide where to give their money and/or time, not to get money. Many, many EA direct workers are both donating significant amounts and earning significantly less than they would in the private sector.
"Altruism doesn't exist" seems especially common in some rationalists and post-rationalists that really cannot imagine someone being in any way altruistic: it's always all signaling, nothing could ever falsify this.
Personal take: SBF/FTX stole a lot of money from a lot of people, and ruined the reputations of coworkers, families, industries, and politicians. I really think we in EA are making it all about ourselves in a very egocentric manner, and long-time critics are mostly using it to justify their existing "altruism is bad" or "EA is bad" takes.
I don't think the claim is that altruism doesn't exist. Rather, it's that at the margin large contributors are prone to use charity for their own goals. As EA attempts to monetize 'whales', it's pushed to twist itself into something that serves those goals, which in turn changes how good your own, smaller donations are.
It's an 'at the margin' argument, and I don't know how accurate it is. Maybe EA orgs are currently resistant to such processes. OTOH, the ones that are less resistant will be more appealing to big money, get bigger budgets, become more visible, and likely be copied. Seems unstable long-term.
Re Geeks, MOPs, and Sociopaths, SBF was one of the original Geeks (into EA when it was still small), so I don't think it makes sense to use this framing here. SBF seems to have developed into a Sociopath after he made it big (and EA became similarly big, somewhat independently). Does Chapman have anything to say about that (Geeks -> Sociopaths)?
I think this is an awfully difficult problem to solve. A status-motivated donor has plenty of other causes that will give them status in exchange for their millions to billions. If you accept that many donors are significantly motivated by status, adopting policies to prevent them from using donations to improve status will make one's cause areas less competitive with other causes. Except in unusual circumstances, a specific charity or cause needs its donors a lot more than the donors need the cause/charity.
So my takeaway is grace for everyone who is trying to muddle through all the challenges of thinking through and managing donor risk as best they can.
Thanks for the post!
It provides useful insight. It's true that many, many organisations over history have been "captured" to some degree.
I guess that is the fate of a lot of things: organisms that are the best at capturing power are the most competitive. So at the end of the day, the places where decisions are made mostly attract individuals and corporations that are very good at obtaining power and keeping it.
It's why we end up with actual scientific studies concluding that the US is not a democracy but an oligarchy. I recommend reading Propaganda by Edward Bernays and the Powell memo.
Which is why you need counter-powers to prevent that. It's hard and doesn't always work, but with a lot of effort it's possible to mitigate it; many social movements have obtained some results this way. But we really need to learn from the FTX scandal to achieve that.
However, I'm not certain that EA as a whole has died yet. I agree that it seems seriously unlikely that EA ideals will end up shaping the whole world for centuries to come, for these reasons. But what I care about is "how much impact do we have?", and I'm not sure the impact we have is really declining. So far, GiveWell is still moving more money than ever before. But we'll see how this goes.
Strong upvote from me. I really appreciated the frank sharing of your experience and also that it was playfully written (and more on this forum could be!).
Particularly keen on the policy mandating anonymous donations, and on building a receiving organisation to do this and (presumably) pool all the different donations for cause / intervention X together and administer them. TBH, it seems like a no-brainer to me to do in the first place if governance were being prioritised. The main reason I think you would keep the close donor-adviser relationship in place would be:
And I'm genuinely curious whether people who advise donors 1:1 think that the interventions / policies you list would deter donors? Or whether, on the plausible trade-off between anonymity-as-prevention-against-capture and getting donations, they think it's net positive?
Would also be keen to hear views of donors directly on this (even if this thread might not be the most hospitable place given Eigenrobot's unpacking of how these structures necessarily corrupt).
This is a great post and mirrors a lot of the kinds of things I have been worried about over the years. I also recommend Zvi's "Immoral Mazes" sequence as covering some similar ground.
The "donate in public and brag about it" thing has been an EA norm since the early days. The rationale was to take the desire for status and weaponize and corrupt it in the service of good. It was in explicit and deliberate opposition to the mainstream Judeo-Christian norm of keeping it to oneself. Funny how things work out
Great post, and agreed about the dynamics involved. I worry the current EA synthesis has difficulty addressing this class of criticism (power corrupts; transactional donations, geeks/mops/sociopaths), but perhaps we haven’t seen EA’s final form.