For the first year and a half since taking the GWWC pledge, I donated exclusively to the Long-Term Future Fund. As a longtermist, that seemed the obvious choice. I now find I have a nagging feeling that I should instead donate to global health and factory farming initiatives. With an hour to kill in BER Terminal 2 and a disappointing lack of plant-based food to spend my remaining Euros on, I figured I should lay out my reasoning, to get it straight in my own head, and invite some feedback.
I am more convinced of the arguments for longtermism than for other cause areas. Hence, my career planning (beyond the scope of this post) is driven by a high longtermist upside. In retrospect, I wonder whether one's assessment of the top cause area needs to apply universally, or whether the SNT framework could produce a different answer for one's money than for one's work. Obviously, scale is unchanged, but the tractability of my funding and of my time are likely different, and a given cause area tends to be constrained by money or by talent, not both.
I just read George Rosenfeld's recent article on free spending. The point that really grabbed me was the strange situation where EA has an excess of longtermist funding available while efficient global health programmes could absorb more money and very tangibly do good with it. There are a few reasons why that feeds into my thinking.
Having billionaires donating fortunes to longtermism is great. If that temporarily saturates the best opportunities there, maybe small donors like me should move elsewhere?
Furthermore, when I read through a donation report for the Long-Term Future Fund, I noticed that an appreciable fraction of pay-outs were for things like 'this person will use this grant to gain credentials and work on AI alignment'. I appreciate the value of this. Nevertheless, it's mightily difficult to explain to wider friends and family why, having dedicated 10% of my income to "doing the most good possible, with reason and evidence", funding people to get ML qualifications is literally the best use of that money. Even if a raw EV estimate concluded this was the best thing, I'd cite the burgeoning discussion on the optics of EA spending as cause for concern. A few massive donors making sure these opportunities are funded is reassuring, but small donors account for the lion's share of the conversations people have about EA donations. People talking about the community surely accounts for much of how the outside world perceives us. It troubles me that these conversations might be quite alienating or confusing when transmitting the message "I want to use reason and evidence to maximise my altruistic impact, so I fund people in developed countries to get postgraduate degrees instead of cheap ways to save lives."
I can't get away from the feeling that the PR bonus from donating to GiveWell or ACE charities makes the switch worthwhile, given the angst I am starting to feel about community optics and the well-funded state, in aggregate, of the longtermist sphere. Does this make sense to you? I'd be interested to see other arguments for or against this point of view.
TLDR: Fund the LTFF for inclusive wellbeing programmes (not for the survival of humanity, which other funders already cover), and/or the EAIF for listening to experts on self-perpetuating solutions to complex problems in emerging economies (not for prominent infrastructure development, such as at developed-world PhD institutions with little interest in understanding or addressing these issues, which other funders also cover). Do this before, or rather than, funding survival programmes, because the persons whose survival is secured may be experiencing negative wellbeing (suffering).
The counterargument to funding survival programmes is: you save lives of negative valence, and so you advance suffering. Large EA orgs, such as AMF, do not consider valence in their decision-making, even if you point this out to them directly. There is no systemic change: people do not improve their wellbeing, or upskill in being a joy to be around and caring for others (it is still unwanted children competing for scarce resources and capturing or working animals to reduce their own labour or avoid having to do anything). So, according to one interpretation, you advance dystopia.
It is not about whether you donate to 'longtermism' or 'neartermism' (concepts I would further classify to distinguish intent from impact), because you can do poorly and well in both. For example, if you fund the one PhD student who will run a risk-mitigating lab safety programme in developing countries (one that, for example, avoids notifying inconsiderate actors of a threat they could exploit for selfish gain), and who thus prevents pandemics, using grants from academia or industrialised-country governments (while also pointing similar opportunities out to these institutions), you have higher impact than if you save 100 unwanted lives of suffering with nets. The positive neartermist example is that you fund an innovation, such as my pamphlet under the net packaging, and advance systemic change toward a virtuous cycle of wellbeing (it has long-term effects).
A negative longtermist example is funding a PhD student who narrates safety as personal gain plus dystopia for most. For example: AI remains in the hands of a biological human, who may nevertheless be influenced by AI algorithms, such as systems (e.g. Alibaba's) that use big data to find the combinations of abstract colours and shapes that elicit the most seller-desired action, without buyers being able to pinpoint the issue they have with it because the narratives are positive on their face. This framing does not consider the impact on persons (including buyers and third parties), animals, or other sentient entities, nor the counterfactual impact on comparative-advantage development and civilisational trajectory: instead of selling more products or chasing higher GDP numbers, we could be exploring wellbeing and including yet more entities. Such researchers gain attention, and basically, if you say that this is suboptimal because individuals are suffering, they say no, this is the definition of safety.
The Long-Term Future Fund seems to be one where the 'cooler' funding opportunities, those alluding to traditional power such as big tech or spreading humanity, are public-facing, while submissions about making sure the future is good for all (including wild animal welfare research) may also be there, just not as prominently reported. You have to ask the fund managers, or you can also say you want to fund only an inclusive-wellbeing future: there is a sufficient number of people funding the survival of humanity that the managers can use their cognitive capacity to continue pursuing the projects you would like to contribute to.
If you donate to 'neartermism', you need to know what the best bundle bargains are for the people affected and then fund those; otherwise you may end up like the small donors who 'should not feel ashamed', or whatever this article is narrating. I mean: understand beneficiaries' perspectives, but also acknowledge that these may be limited by limited experience of alternatives, which you may need to supply (if you ask an abused person what they would prefer, they may say "hurt the abuser" rather than "pursue various hobbies and develop a complex set of interesting relationships that together give me what I enjoy and also let me adjust the ratios"). There are many experts on this, but no one is really listening to them. So you should fund the EAIF for its focus on including such experts. But not only the EAIF, since there will always be funders who cover the share of nice events at developed-world PhD institutions.