For the first year and a half after taking the GWWC pledge, I donated exclusively to the Long-Term Future Fund. As a longtermist, that seemed the obvious choice. I now find myself with a nagging feeling that I should instead donate to global health and factory farming initiatives. With an hour to kill in BER Terminal 2 and a disappointing lack of plant-based food to spend my remaining euros on, I figured I should lay out my reasoning, to get it straight in my own head, and invite some feedback.

I am more convinced of the arguments for longtermism than of those for other cause areas. Hence, my career planning (beyond the scope of this post) is driven by a high longtermist upside. In retrospect, I wonder whether one's assessment of the top cause area needs to apply universally, or whether the SNT (scale, neglectedness, tractability) framework could produce a different outcome when considering one's money as opposed to one's work. Obviously, scale is unchanged, but the tractability of my funding and of my time are likely different, and a given cause area tends to be constrained by money or by talent, not both.

I just read George Rosenfeld's recent article on free spending. The point that really grabbed me was the strange situation where EA has an excess of longtermist funding available while efficient global health programmes could absorb more money and do very tangible good with it. There are a few reasons why that feeds into my thinking.

Having billionaires donating fortunes to longtermism is great. If that temporarily saturates the best opportunities there, maybe small donors like me should move elsewhere?

Furthermore, when I read through a donation report for the Long-Term Future Fund, I noticed that an appreciable fraction of pay-outs were for things like 'this person will use this grant to gain credentials and work on AI alignment'. I appreciate the value of this. Nevertheless, it's mightily difficult to explain to wider friends and family why, having dedicated 10% of my income to "doing the most good possible, with reason and evidence", funding people to get ML qualifications is literally the best use of that money. Even if a raw EV estimate decided this was the best thing, I'd cite the burgeoning discussion on the optics of EA spending as cause for concern. A few massive donors making sure these opportunities are funded is reassuring, but small donors will account for the lion's share of all the conversations that are had about EA donations. People talking about the community surely accounts for much of how the outside world perceives us. It troubles me that these conversations might be quite alienating or confusing when transmitting the message "I want to use reason and evidence to maximise my altruistic impact, so I fund people in developed countries to get postgraduate degrees instead of cheap ways to save lives."

I can't get away from the feeling that the PR bonus from donating to GiveWell or ACE charities is worth switching for, given the angst I am starting to feel about community optics and the well-funded state, on aggregate, of the longtermist sphere. Does this make sense to you? I'd be interested to see other arguments for or against this point of view.

Comments

This makes a lot of sense to me. Personally I'm trying to use my career to work on longtermism, but focusing my donations on global poverty. A few reasons, similar to what you outlined above:

  • I don't want to place all my bets on longtermism. I'm sufficiently skeptical of arguments about AI risk, and sufficiently averse to pinning all my personal impact on a low-probability high-EV cause area, that I'd like to do some neartermist good with my life. Also, this
  • Comparatively speaking, longtermism needs more people and global poverty needs more cash. GiveWell has maintained their bar for funding at "8x better than GiveDirectly", and is delaying grants that would not meet that bar because they expect to find more impactful opportunities over the next few years. Meanwhile, longtermists seem to have lowered the bar for funding significantly, with funding readily available for any individuals interested in working on or towards impactful longtermist projects. (Perhaps the expected value of longtermist giving still looks good because the scale is so much bigger, but getting a global poverty grant seems to require a much more established organization with a proven track record of success.)
  • The best pitch for EA, in my experience, is the opportunity to reliably save lives by donating to global poverty charities. When I tell people about EA, I want to be able to tell them that I do the thing I'm recommending. (Though maybe I should be learning to pitch x-risk instead.)

On the whole, it seems reasonable to me for somebody to donate to neartermist causes despite the fact that they believe in the longtermist argument. This is particularly true for people who do or will work directly on longtermism and would like to diversify their opportunities for impact. 

I came to say the same thing. I was (not that long ago) working on longtermist stuff and donating to neartermist stuff (animal welfare). I think this is not uncommon among people I know.

I'm pretty skeptical about arguments from optics, unless you're doing marketing for a big organization or whatever. I just think it's really valuable to have a norm of telling people your true beliefs rather than some different version of your beliefs designed to appeal to the person you're speaking to. That way people get a more accurate idea of what a typical EA person thinks if they talk to them, and you're likely better able to defend your own beliefs than the optics-based ones if challenged. (The argument that there is so much funding in longtermism that the best opportunities are already funded is, I think, pretty separate from the optics one, and I don't have any strong opinions there.)

If I were you, I would donate wherever you think the EV is highest, and if that turns out to be longtermism, think about a clear, non-jargony way to explain that to non-EA people, i.e. say something like 'I'm concerned about existential risks from things like nuclear war, future pandemics and risks from emerging technologies like AI, so I donate some money to a fund trying to alleviate those risks' (rather than talking about the 10^100 humans who will be living across many galaxies etc.). A nice side effect of having to explain your beliefs might be convincing some more people to go check out this 'longtermism' stuff!

Thanks for posting this -- as the other comments also suggest, I don't think you're alone in feeling a tension between your conviction in longtermism and your lack of enthusiasm for marginal longtermist donation opportunities.

I want to distinguish between two different ways of approaching this. The first is simply maximising expected value; the second is trying to act as if you're representing some kind of parliament of different moral theories/worldviews. I think these are pretty different. [1]

For example, suppose you were 80% sure of longtermism, but had a 20% credence in animal welfare being the most important issue of our time, and you were deciding whether to donate to the LTFF or the animal welfare fund. The expected value maximiser would likely think one had a higher expected value, and so would donate all their funds to that one. However, the moral parliamentarian might compromise by donating 80% of their funds to the LTFF and 20% to the animal welfare fund. 
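To make the contrast concrete, here's a minimal sketch in Python (my own illustration, not from the comment; the credences and per-dollar figures are made-up placeholders) of how the two approaches would allocate the same budget: the EV maximiser goes all-in on whichever option scores higher, while the moral parliamentarian splits 80/20 in proportion to credence.

```python
# Hypothetical credences in each worldview and (made-up) expected value
# per dollar donated, judged by that worldview's own lights.
credences = {"LTFF": 0.8, "Animal Welfare Fund": 0.2}
ev_per_dollar = {"LTFF": 10.0, "Animal Welfare Fund": 3.0}  # placeholder numbers

budget = 1000  # dollars to allocate

# Expected value maximiser: credence-weighted EV picks a single winner,
# which receives the entire budget.
weighted_ev = {k: credences[k] * ev_per_dollar[k] for k in credences}
best = max(weighted_ev, key=weighted_ev.get)
ev_allocation = {k: (budget if k == best else 0) for k in credences}

# Moral parliamentarian: split the budget in proportion to credence,
# regardless of which option wins on expected value.
parliament_allocation = {k: credences[k] * budget for k in credences}

print("EV maximiser:    ", ev_allocation)          # {'LTFF': 1000, 'Animal Welfare Fund': 0}
print("Moral parliament:", parliament_allocation)  # {'LTFF': 800.0, 'Animal Welfare Fund': 200.0}
```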

From this comment you left:

I'm not convinced small scale longtermist donations are presently more impactful than neartermist ones, nor am I convinced of the reverse. Given this uncertainty, I am tempted to opt for neartermist donations to achieve better optics.

I take it that you're in the game of maximising expected value, but you're just not sure that the longtermist charities are actually higher impact than the best available neartermist ones (even if they're being judged by you, someone with a high credence in longtermism). That makes sense to me! 

But I'm not sure I agree. I think there'd be something suspicious about the idea that neartermism and longtermism align on which charities are best (given they are optimising for very different things, it'd be surprising if they ended up with the same recommendation). But more importantly, I think I might just be relatively more excited than you are about the kinds of grants the LTFF is making, and also more excited about the idea that my donations could essentially 'funge' Open Philanthropy (meaning I get the same impact as their last dollar).

I also think that if you place significant value on the optics of your donations, you can always just donate to multiple different causes, allowing you to honestly say something like "I donate to X, Y and Z -- all charities that I really care about and think are doing tremendous work" which, at least in my best guess, gets you most of the signalling value.

Time to wrap up this lengthy comment! I'd suggest reading Ben Todd's post on this topic, and potentially also the Red-Team against it. I also wrote "The value of small donations from a longtermist perspective", which you may find interesting.

Thanks again for the post, I appreciate the discussion it's generating. You've put your finger on something important.

  1. ^

    At least, I think the high-level intuitions behind each of these mental models are different. But my understanding from a podcast with Hilary Greaves is that when you get down to trying to formalise the ideas, it gets much murkier. I found these slides from her talk on this subject, in case you're interested!

Note that Toby Ord has long given 10% to global poverty. He doesn't explain why in the linked interview despite being asked "Has that made you want to donate to more charities dealing on “long-termist” issues? If not, why not?"

My guess is that he intentionally dodged the question: the true answer is that he continues to donate to global poverty charities because he thinks the signaling value of doing so is greater than the signaling value of donating to longtermist charities, and saying this explicitly in the interview would likely have undermined some of that signaling value.

In any case, I think those two things are true, and think the signaling value represents the vast majority of the value of his donations, so his decision seems quite reasonable to me, even assuming there are longtermist giving opportunities available to him that offer more direct impact per dollar (as I believe).

For other small donors whose donations are not so visible, I still think the signaling value is often greater than the direct value of the donations. Unlike in Toby Ord's case though, for typical donors I think the donations with the highest signaling value are usually the donations with the highest direct impact.

There are probably exceptions though, such as if you often introduce effective giving to people by talking about how ridiculously inexpensive it is to save someone's life. In that case, I think it's reasonable for you to donate a nontrivial amount (even up to everything you donate, potentially) to e.g. GiveWell's MIF even if you think the direct cost-effectiveness of that donation is lower, since the indirect effect of raising the probability of getting the people you talk to into effective giving, and perhaps eventually into a higher-impact career path, can plausibly more than make up for the reduced direct impact.

An important consideration related to all of this that I haven't mentioned yet is that large donors (e.g. Open Phil and FTX) could funge your donations. That is, you donate more to X, so they donate less to it and more to the other high-impact giving opportunities available to them, such that the ultimate effect of your donation to X is to increase the funding for X only a little and to increase the funding for other, better things more. I don't know if this actually happens, though I often hope it does.

(For example, I hope it does whenever I seize opportunities to raise funds for EA nonprofits that are not the nonprofits that I believe will use marginal dollars most cost-effectively. E.g. during the last every.org donation match, I directed matching funds to 60+ EA nonprofits due to a limit on the match amount per nonprofit, despite thinking many of those nonprofits would use marginal funds less than half as cost-effectively as the nonprofits that seemed best to me. My hope was that large EA funders would correct the allocation by giving less to the nonprofits I gave to and more to the highest-cost-effectiveness giving opportunities than they otherwise would have, thereby making my decision the right call.)

That's a really interesting point about Toby Ord!

I appreciate you taking the time to write out your thinking. Everything you've written makes sense to me.

I'm in a similar position (donate to global poverty but care enough about x-risk to plan my career around it). I think the signalling value of donating to easy-to-pitch causes is pretty significant (probably some people find x-risk easier/more effective to pitch but I don't personally). aogara's first point also resonates with me. Donating to obviously good causes also seems like it would be psychologically valuable if I end up changing my mind about the importance of x-risk in the future.

I think most people should be thinking about the optics of their donations in terms of how it affects them personally pitching EA, not in terms of how community-wide approaches to donation would affect optics of the community. It seems plausible that the optics of your donations could be anywhere from basically irrelevant to much more important than the direct good they do, depending on the nature/number of conversations about EA you have with non-EA people.

How many people do we need to get ML degrees before hitting diminishing returns? Maybe you could be upfront about funding "people in developed countries to get postgraduate degrees", with the context that AI risk is an urgent problem, and that you'll switch to poverty/animal welfare once AI is solved.

That seems like a very robust approach if one had a clear threshold in mind for how many qualified AI alignment researchers would be enough. Sadly, I have no intuition or information on this, nor a finger on the pulse of that research community.

Does the relative amount of evidence and uncertainty affect your thinking at all? I have heard indirectly of people working in longtermism who donate to neartermist causes because they think it hedges the very large uncertainties of longtermism (both longtermist work and donations). 

As you say, the neartermist donation options recommended by EA benefit from very robust evidence, observable feedback loops, tried-and-tested organisations etc., and that could be a good hedge if you're working in an area of much higher uncertainty.

I wouldn't worry too much about optics unless you're doing a lot of community building and outreach to your friends. But even if you are doing a lot, there's this weird effect where the more that someone cares about optics, the less valuable it is for them to be a part of EA and at a certain point it actually ends up being negative.

Coincidentally, I'm actually seriously considering setting up a group to make AI safety micro-grants. I agree that the LTFF can probably receive sufficient funding from Open Phil or FTX to cover most of its applications, but I'm interested in seeing whether there's a case for smaller but easier-to-access grants, especially if there is a degree of active grantmaking, including providing advice, helping people network and assisting with seeking further funding.

I guess my general point is that I still believe it is possible to find giving opportunities that aren't adequately covered by the current funds, but this is still an untested theory.

Hi Tom! Thanks for writing this post. Just curious... would you consider donating to cost-effective climate charities (e.g. ones recommended by Effective Environmentalism)? It seems like that could look better from an optics point of view and fit better with longtermism, depending on your views.

Hi Tom! I think this idea of giving based on the signalling value is an interesting one.

One idea - I wonder if you could capture a lot of the signalling value while only moving a small part of your donation budget to non-xrisk causes?

How that would work: when you're talking to people about your GWWC donations, if you think they'd be more receptive to global health/animal ideas you can tell them about your giving to those charities. And then (if you think they'd be receptive) you can go on to say that ultimately you think the most pressing problems are xrisks, and therefore you allocate most of your donations to building humanity's capacity to prevent them.

In other words, is the signalling value scale-insensitive (compared to the real-world impact of your donations)?

Sorry, it's not entirely clear to me whether you think good longtermist giving opportunities have dried up, or whether you think good opportunities remain but your concern is solely about the optics of giving to them.

On the optics point, I would note that you don’t have to give all of your donations to the same thing. If you’re worried about having to tell people about your giving to LTFF, you can also give a portion of your donations to global health (even if small), allowing you to tell them about that instead, or tell them about both.

You could even just give everything to longtermism yet still choose to talk to people about how great it can be to give global health. This may feel a bit dishonest to you though so you may not want to.

To clarify, my position could be condensed to "I'm not convinced small scale longtermist donations are presently more impactful than neartermist ones, nor am I convinced of the reverse. Given this uncertainty, I am tempted to opt for neartermist donations to achieve better optics."

The point you make seems very sensible. If I update strongly back towards longtermist giving I will likely do as you suggest.

TLDR: Fund the LTFF for inclusive wellbeing programs (not the survival of humanity, which is covered by others) and/or the EAIF for listening to experts on brilliant self-perpetuating solutions to complex problems in emerging economies (not prominent infrastructure development, such as at developed-nation PhD institutions with no interest in understanding or addressing these issues, which is addressed by other funders), before or rather than funding survival programs, because the persons whose survival is funded may be experiencing negative wellbeing (suffering).

The counterargument is: you save negative-valence lives, so you advance suffering. Large EA orgs, such as AMF, do not consider valence in their decision-making, even if you specifically point this out to them. There is no systemic change: people do not improve their wellbeing or upskill in being a joy to hang out with and caring for others (it is still unwanted children competing for scarce resources and capturing or working animals to reduce their own work), so, according to one interpretation, you advance dystopia.

It is not about whether you donate to 'longtermism' or 'neartermism' (concepts which I further classify to distinguish intent from impact), because you can do poorly and well in both. For example, if you fund the one PhD who is going to run a risk-mitigating lab-safety program in developing countries (e.g. one that does not notify inconsiderate actors of the threat's potential for stress-based selfish gain), and who thus prevents pandemics using grants from academia or industrialized nations' governments (also pointing similar opportunities out to these institutions), you have higher impact than if you save 100 unwanted lives of suffering with nets. The positive neartermism example is that you fund an innovation, such as my pamphlet under the net packaging, and advance systemic change toward a virtuous cycle of wellbeing (which has long-term effects).

A negative longtermist example is funding a PhD who narrates safety as personal gain and dystopia for most: e.g. AI remains in the hands of a biological human, who may nevertheless be influenced by AI algorithms (such as big-data systems, e.g. Alibaba's, that find the combination of abstract colours and shapes that solicits the most seller-desired action, without buyers being able to pinpoint the issue they have with it because the narratives are prima facie positive), but who does not consider the impact on persons (buyers, third parties, any animals and sentient entities), or the counterfactual impact on comparative-advantage development and civilizational trajectory, which instead of selling more products and getting higher GDP numbers could be exploring wellbeing and including yet more entities. Such a person gains attention, and if you say this is suboptimal because individuals are suffering, they reply that no, this is the definition of safety.

The Long-Term Future Fund seems to be one where the cooler funding opportunities, those alluding to traditional power such as big tech or spreading humans, are public-facing, but submissions about making sure that the future is good for all (including wild animal welfare research) may also be there, just not so prominently reported. You have to ask the fund managers, or you can say that you only want to fund an inclusive-wellbeing future: there is a sufficient number of people funding the survival of humanity that the managers can use their cognitive capacity to continue pursuing the projects you would like to contribute to, etc.

If you donate to 'neartermism', you need to know what the bundle bargains for the people are and then fund them; otherwise you may end up like the small donors who should not feel ashamed, or whatever this article is narrating. I mean: understand beneficiaries' perspectives, but also acknowledge that these may be limited by limited experience with alternatives, which you may need to supply (if you ask an abused person what they would prefer, maybe they tell you 'hurt the abuser' rather than 'pursue various hobbies and develop a complex set of interesting relationships that together give me what I enjoy but also enable me to adjust the ratios'). There are many experts in this, but no one is really listening to them. So you should fund the EAIF for its focus on including such experts. Not just the EAIF, since there will always be funders who cover the share of nice events at the PhD institutions.

I don’t know what campaign finance laws are like in other countries (and honestly don’t even know American laws that well), but I’ve heard a few times now (and understand) that donating to political causes/candidates that have EA or longtermist values can be quite valuable despite the billionaire funding available, because certain contributions are capped on a per-person basis. Additionally, it can look better when a candidate appears to have funding support from a wider base of people.

If there are such laws in Europe I would definitely be interested to hear about that!

I think the thoughts about having different priorities with your career and donations are super interesting.

Do you think it’s possible that you prefer the higher probability of impact with global health charities when it comes to donations, but are willing to focus on expected value for your career, where it’s hard to find something with a very high probability of large positive counterfactual impact?
