A few years ago, I read The Life You Can Save by Peter Singer. I felt deeply inspired. The idea that charities could be compared using evidence and reason, the thought that I could save many lives without sacrificing my own happiness: I found these ideas meaningful, and I hoped they would give my life a sense of purpose (even if other factors were likely also at play).
I became an Intro Fellow and read more. I went to conferences and retreats. I now lead my university group.
But I’m frustrated.
I’m now asked to answer for the actions of a man who defrauded millions of people, and for the purchase of castles and $2000+ coffee tables.
I’m now associated with predatory rationalists.
I’m now told to spend my life reducing existential risk by 0.00001 percent to protect 10^18 future humans, and forced to watch money get redirected from the Global South to AI researchers.[1]
This is not what I signed up for.
I used to be proud to call myself an EA. Now, when I say it, I also feel shame and embarrassment.
I will take the Giving What We Can pledge, and I will stay friends with the many kind EAs I’ve met.
But I no longer feel represented by this community. And I think a lot of others feel the same way.
Edit log (2/6/23, 12:28pm): Edited the second item of the list; see RobBensinger's comment.
[1] This is not to say that longtermism is completely wrong—it’s not. I do, however, think "fanatical" or "strong" longtermism has gone too far.
Is influencing the far future really tractable? How is x-risk reduction not a Pascal's mugging?
I agree that future generations are probably too neglected right now. But I just don't find myself entirely convinced by the current EA answers to these questions.
I agree that the framing could be improved, but I'm not sure the actual claim is inaccurate? There is a pool of donors who make their decisions based on the opinions of EAs. Several years ago they were "directed" to give their money to global poverty. Now, due to a shift in opinion, they are "directed" to give to AI safety. At least some of that money has been "redirected": if the shift hadn't occurred, global poverty would probably have had more money, and AI safety probably would have had less.
As an AI risk believer, you think this change in funding is on balance good, whereas the OP is an AI risk skeptic who thinks the shift is bad. Both are valid opinions that cast no aspersions on one's character (and here is where I think the framing could be improved). If you fall into the latter camp, I think that's a perfectly valid reason to want to leave.