A few years ago, I read The Life You Can Save by Peter Singer. I felt deeply inspired. The idea that charities could be compared using evidence and reason, the thought that I could save many lives without sacrificing my own happiness: I found these ideas meaningful, and I hoped they would give my life a sense of purpose (even if other factors were likely also at play).
I became an Intro Fellow and read more. I went to conferences and retreats. I now lead my university group.
But I’m frustrated.
I’m now asked to answer for the actions of a man who defrauded millions of people, and for the purchase of castles and $2000+ coffee tables.
I’m now associated with predatory rationalists.
I’m now told to spend my life reducing existential risk by .00001 percent to protect 10^18 future humans, and forced to watch money get redirected from the Global South to AI researchers.[1]
This is not what I signed up for.
I used to be proud to call myself an EA. Now, when I say it, I also feel shame and embarrassment.
I will take the Giving What We Can pledge, and I will stay friends with the many kind EAs I’ve met.
But I no longer feel represented by this community. And I think a lot of others feel the same way.
Edit log (2/6/23, 12:28pm): Edited the second item of the list, see RobBensinger's comment.
[1]
This is not to say that longtermism is completely wrong—it’s not. I do, however, think "fanatical" or "strong" longtermism has gone too far.
Is influencing the far future really tractable? How is x-risk reduction not a Pascal's mugging?
I agree that future generations are probably too neglected right now. But I just don't find myself entirely convinced by the current EA answers to these questions. (See also.)
I don't think this is a healthy way of framing disagreements about cause prioritization. Imagine if a fan of GiveDirectly started complaining about GiveWell's top charities for "redirecting money from the wallets of the world's poorest villagers..." That sounds almost like theft! Except, of course, that the "default" implicitly attributed here is purely rhetorical. No cause has any prior claim to the funds. The only question is where best to send them, and this should be determined in a cause-neutral way, not by picking out any one cause as the privileged "default" that is somehow robbed of its due by any or all competing candidates that receive funding.
Of course, you're free to feel frustrated when others disagree with your priorities. I just think that the rhetorical framing of "redirected" funds is (i) not an accurate way to think about the situation, and (ii) potentially harmful, insofar as it seems apt to feed unwarranted grievances. So I'd encourage folks to try to avoid it.
I am one of those donors, as are you, probably. I'm not a high earner, but it still counts. I make my decisions based on my own beliefs and the beliefs of those I trust. I also base them on the opinions of EA, as when I look at GiveWell's top charities to guide my donation decisions.
There are at least some people who were previously donating to global poverty orgs based on EA recommendations who are now donating to AI risk instead, based on EA recommendations, due to the shift in priorities among core EA. If the shift had not occ...