For the first year and a half after taking the GWWC pledge, I donated exclusively to the Long-Term Future Fund. As a longtermist, that seemed the obvious choice. Now, though, I have a nagging feeling that I should instead donate to global health and factory farming initiatives. With an hour to kill in BER Terminal 2 and a disappointing lack of plant-based food on which to spend my remaining euros, I figured I should lay out my reasoning, both to get it straight in my own head and to invite some feedback.
I am more convinced by the arguments for longtermism than by those for other cause areas. Hence, my career planning (beyond the scope of this post) is driven by a high longtermist upside. In retrospect, though, I wonder whether one's assessment of the top cause area needs to apply universally, or whether the SNT (scale, neglectedness, tractability) framework could produce a different answer for one's money than for one's work. Scale is obviously unchanged, but the tractability of my funding and of my time are likely different, and a given cause area tends to be bottlenecked by either money or talent, not both.
I just read George Rosenfeld's recent article on free spending. What really grabbed me was the point highlighting the strange situation in which EA has an excess of longtermist funding available while efficient global health programmes could absorb more money and very tangibly do good with it. That feeds into my thinking in a few ways.
Having billionaires donating fortunes to longtermism is great. If that temporarily saturates the best opportunities there, maybe small donors like me should move elsewhere?
Furthermore, when I read through a donation report for the Long-Term Future Fund, I noticed that an appreciable fraction of pay-outs were for things like 'this person will use this grant to gain credentials and work on AI alignment'. I appreciate the value of this. Nevertheless, it's mightily difficult to explain to friends and family why, having dedicated 10% of my income to "doing the most good possible, with reason and evidence", funding people to get ML qualifications is literally the best use of that money. Even if a raw EV estimate concluded this was the best option, I'd cite the burgeoning discussion of the optics of EA spending as cause for concern. It's reassuring that a few massive donors make sure these opportunities are funded, but small donors will account for the lion's share of conversation about EA donations, and such talk about the community surely shapes much of how the outside world perceives us. It troubles me that these conversations might be quite alienating or confusing when the message being transmitted is "I want to use reason and evidence to maximise my altruistic impact, so I fund people in developed countries to get postgraduate degrees instead of cheap ways to save lives."
I can't shake the feeling that the PR benefit of donating to GiveWell or ACE charities is worth switching for, given the angst I am starting to feel about community optics and the well-funded state, in aggregate, of the longtermist sphere. Does this make sense to you? I'd be interested to see other arguments for or against this point of view.
Thanks for posting this -- as the other comments suggest, I don't think you're alone in feeling a tension between your conviction in longtermism and your lack of enthusiasm for marginal longtermist donation opportunities.
I want to distinguish between two different ways of approaching this. The first is simply maximising expected value; the second is trying to act as if you're representing some kind of parliament of different moral theories/worldviews. I think these are pretty different. [1]
For example, suppose you were 80% sure of longtermism, but had a 20% credence in animal welfare being the most important issue of our time, and you were deciding whether to donate to the LTFF or the animal welfare fund. The expected value maximiser would likely think one had a higher expected value, and so would donate all their funds to that one. However, the moral parliamentarian might compromise by donating 80% of their funds to the LTFF and 20% to the animal welfare fund.
From this comment you left:
I take it that you're in the game of maximising expected value, but you're just not sure that the longtermist charities are actually higher impact than the best available neartermist ones (even if they're being judged by you, someone with a high credence in longtermism). That makes sense to me!
But I'm not sure I agree. I think there'd be something suspicious about the idea that neartermism and longtermism align on which charities are best: given they are optimising for very different things, it'd be surprising if they produced the same recommendation. More importantly, I think I'm simply more excited than you are about the kinds of grants the LTFF is making, and also about the idea that my donations could essentially 'funge' Open Philanthropy (meaning I get the same impact as their last dollar).
I also think that if you place significant value on the optics of your donations, you can always donate to multiple causes, allowing you to honestly say something like "I donate to X, Y and Z -- all charities that I really care about and think are doing tremendous work", which, at least in my best guess, gets you most of the signalling value.
Time to wrap up this lengthy comment! I'd suggest reading Ben Todd's post on this topic, and potentially also the red-team against it. I also wrote "The value of small donations from a longtermist perspective", which you may find interesting.
Thanks again for the post, I appreciate the discussion it's generating. You've put your finger on something important.
At least, I think the high-level intuitions behind these two mental models are different. But my understanding from a podcast with Hilary Greaves is that things get much murkier once you try to formalise the ideas. I found these slides from her talk on the subject, in case you're interested!