
Epistemic Status: Fairly quickly written. No one has reviewed this. It’s possible I’m missing something important.

Introduction

In two recent posts on the EA Forum, Ben Todd argues that small donors can still have significant impact despite billions of extra funding in EA, whilst AppliedDivinityStudies argues against this, although acknowledging that small donations can still have impact if they are spent on “weirder and more speculative things”.

Both individuals seem to agree, however, on a core claim: that “there are better uses of your time than earning to give”. Whilst I doubt they think this is true of absolutely every EA, as some EAs may have significant personal fit in earning to give (ETG), they both seem to endorse it as a general rule. In short, AppliedDivinityStudies thinks that high impact funding gaps are likely to have already been filled by large donors, such that further donations have very little value, whilst Ben Todd thinks that the marginal value of donations remains high but has declined, such that ETG is generally unlikely to be the best option for EAs.

In this post I argue that they are too quick to come to the conclusion that ETG is generally the wrong option for EAs. If the arguments put forward for patient altruism are credible (I personally believe they are), then ETG becomes potentially the highest impact path for many EAs and, furthermore, even very small donations can have very high expected impact. Little of what I say is original to me - I mostly draw on the Founders Pledge report on investing to give and Will MacAskill’s argument that we are not living at the most influential time in history - but I wanted to raise these points in the specific context of ETG.

Overview of Founders Pledge’s report on investing to give

I would recommend that everyone read at least the executive summary of Founders Pledge’s (FP) report on investing to give. To summarise, the report identifies a few key arguments for investing to give rather than giving now:

  1. Financial returns on the investment: This is a key factor according to FP. One can typically double one’s financial resources - in nominal terms - over the course of a decade through equity market index investing. So a certain amount of money now can be significantly more in the future, in real terms, if invested wisely (see the sketch after this list).
  2. Higher cost-effectiveness in the future due to exogenous learning: Another key factor according to FP. Not only can we learn more about how best to do good over time by understanding what the best interventions are, but we can also learn more on the actual question of when to give. In other words we may be able to spend our money much better in the future.
  3. Option value: Investing to give keeps options open. For example, we can look out for particularly high value giving opportunities and then fund them (a “watch and pounce” strategy).
  4. Societal impatience: Other people are impatient, so we may well be overspending in the present and investing to give can correct for this.
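
To make the compounding in point 1 concrete, here is a minimal sketch (mine, not FP’s). The 7% nominal return and 2% inflation figures are assumptions chosen so that money roughly doubles in nominal terms over a decade (per the rule of 72); actual market returns will differ.

```python
# Illustrative sketch, not from the FP report: compound growth of a
# lump sum under assumed nominal equity returns and inflation.

def future_value(principal: float, annual_rate: float, years: int) -> float:
    """Compound `principal` at `annual_rate` for `years` years."""
    return principal * (1 + annual_rate) ** years

nominal_rate = 0.07  # assumed nominal equity index return
inflation = 0.02     # assumed annual inflation
real_rate = (1 + nominal_rate) / (1 + inflation) - 1  # ~4.9%

donation = 10_000  # an illustrative $10k, invested instead of given now
print(f"Nominal value after 10 years: ${future_value(donation, nominal_rate, 10):,.0f}")  # ~$19,672
print(f"Real value after 10 years:    ${future_value(donation, real_rate, 10):,.0f}")     # ~$16,137
```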

The report also acknowledges potential arguments against investing to give:

  1. Value drift or loss of funds: There is a risk that a fund’s focus changes or that a fund ceases to exist. Whilst there may be ways to mitigate this, it seems that this risk will always exist to some extent.
  2. Lower cost-effectiveness in the future because of taking best opportunities: It’s possible that the cost-effectiveness of opportunities will decrease over time due to more funding becoming available combined with diminishing marginal returns to spending at the community level.
  3. Investment-like giving opportunities could be even better: Giving opportunities whose primary route to impact is making more financial or human resources available to be “spent” on the highest-impact opportunities at a later point in time may be better than financially investing money now, although the report doesn’t make a judgement on whether this is true or not.

FP estimates the value of key parameters based on extrapolations of historical data and expert surveys, and comes to the following key conclusions:

  1. With 70% probability, a patient philanthropist will have more impact by investing than by giving today.
  2. More importantly, the expected value of the impact ratio is very high: on average, the patient philanthropist will have nine times as much impact by investing. This asymmetry is because there is more to gain than to lose from investing to give - in the worst cases, the patient philanthropist’s invested $1 million will have no or negligible impact, whereas in the best cases it could end up having an impact many times larger than it would now. These potentially very large gains are much more significant than the potential losses, and drive up the expected impact of investing to give (see the toy calculation after this list).
  3. How giving later compares to giving now to investment-like giving opportunities (whose primary route to impact is making more financial or human resources available to be “spent” on the highest-impact opportunities at a later point in time) remains an open question not tackled in the report, and the latter option could be better.
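
To see how conclusions 1 and 2 fit together - a 70% chance that investing wins, alongside a roughly 9x expected impact ratio - here is a toy calculation. The scenario probabilities and ratios below are numbers I made up for illustration; they are not FP’s model parameters.

```python
# Toy illustration with made-up numbers (not FP's model): investing can
# have a much higher *expected* impact ratio than giving now even though
# it only "wins" 70% of the time, because the downside is bounded near
# zero while the upside is not.

# (probability, impact of investing as a multiple of giving now)
scenarios = [
    (0.30, 0.1),    # investing turns out worse: most value lost
    (0.40, 2.0),    # modest win: invested money goes ~2x as far
    (0.25, 10.0),   # large win
    (0.05, 100.0),  # "watch and pounce" hits an exceptional opportunity
]

p_invest_wins = sum(p for p, ratio in scenarios if ratio > 1)
expected_ratio = sum(p * ratio for p, ratio in scenarios)
print(f"P(investing beats giving now): {p_invest_wins:.0%}")    # 70%
print(f"Expected impact ratio:         {expected_ratio:.1f}x")  # ~8.3x
```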

Overview of Will MacAskill’s argument on how influential the present is

In an EA Forum post, and later in a revised paper, Will MacAskill argues that we are currently unlikely to be living at the most influential period in history. In short, MacAskill argues that:

  1. It would be an extraordinary coincidence if right now was the most influential time. In other words, our prior probability of that possibility should be very low, and so we need pretty extraordinary evidence to believe that we are at the “hinge of history” (HoH) - and we don’t have such extraordinary evidence.
  2. If we look at how “influentialness” has been changing over time, we observe that it has increased due to growing knowledge and opportunities. An inductive argument then leads us to expect this trend to continue into the future, as our knowledge and understanding continue to improve over time.

If we are not living at the most influential period in history then it makes sense to invest, financially or otherwise, for a more influential period in the future.

What does this all mean for ETG?

Note that whilst FP and Will MacAskill are making different arguments, both essentially lead to the same conclusion - that we should invest for the future, whether financially or otherwise. If you find either FP’s or Will MacAskill’s arguments convincing, then three options seem plausibly the highest impact for an EA:

  1. Investing to give: ETG is the best way to do this
  2. Giving now to “investment-like” giving opportunities: ETG is the best way to do this
  3. Being the one to drive “investment-like” opportunities: This doesn’t imply ETG and instead implies actions such as global priorities research or community/movement building.

A key point I want to emphasise is that option #1 is immune to the criticisms of ETG made by both AppliedDivinityStudies and Ben Todd: that high impact funding gaps have likely already been filled by large donors such that further donations have very little value (AppliedDivinityStudies), and that marginal cost-effectiveness, whilst still high, has declined such that ETG is generally unlikely to be the best option for EAs (Ben Todd). Both of these criticisms rely on the diminishing marginal value of donations, but investing to give should not run into diminishing marginal value, at least not anytime soon.

So, where does that leave us with regards to deciding whether or not to ETG? 

Firstly, for the purpose of deciding whether to ETG, comparing #1 and #2 is moot, as both imply donating money. Comparing these options is obviously important for deciding what to do with your money, but it isn’t relevant when deciding whether or not to ETG.

What about comparing #1 and #3, or #2 and #3? These comparisons boil down to whether financial investment is better or worse than non-financial investment. At a movement level, this is a very tricky open question - both options promise great returns and it’s hard to compare them. At an individual level, however, one can rely on personal fit - if you have better personal fit for ETG than for capacity building, then do #1 over #3, and vice versa.

The upshot of this is that ETG may well be the highest impact thing for many EAs to do, conditional on them finding one or more of the arguments for patient altruism compelling and having personal fit for ETG. Moreover, small donations can have very high expected impact if they are invested financially or into investment-like giving opportunities, according to both Will MacAskill’s and FP’s arguments outlined above.

We need to take patient altruism more seriously

AppliedDivinityStudies doesn’t mention investing to give or the influentialness of the present/future in their post. Ben Todd mentions investing to give in passing, noting that investing in Founders Pledge’s Patient Philanthropy Fund might be a good way to provide insurance in case large existing funders such as Open Phil leave the EA funding arena. Despite this, Ben still comes to the conclusion that ETG is generally unlikely to be the absolute best thing for EAs to do.

I think both individuals have not paid enough attention to the ideas of patient altruism and that, as I have outlined, the arguments for patient altruism can lead to the conclusion that many EAs should ETG.

There has been pushback against patient altruism, in particular against the contention that the present may not be the most influential period. If either AppliedDivinityStudies or Ben Todd find these pushbacks compelling, they should say so when discussing how high impact ETG may be, as these considerations are highly relevant. It is also worth noting that there has been far more pushback against the contention that the present may not be the most influential period than against investing to give.

Indeed, I get the impression that the EA community as a whole doesn’t really take investing to give that seriously, with most people ignoring it - either because they are not convinced by the argument or because they simply forget it exists. I would be interested to see more discussion of investing to give and more arguments against it. As it stands, I think that patient altruism is compelling, and that many patient altruists may find ETG to be their highest impact career option.


 

Comments



One can typically double one’s financial resources - in nominal terms - over the course of a decade, through equity market index investing.

It seems to me that EAs who have pursued more high-risk entrepreneurial activities have got substantially higher mean returns (see Mathieu Putz's post). So it's not clear to me that index investing should be seen as the best or default option.

Btw, Owen Cotton-Barratt has written a post called '"Patient vs urgent longtermism" has little direct bearing on giving now vs later', which I think is very relevant to the themes of this post. For instance, he argues that

longtermis[t community-building and research] (broadly understood) has vastly outperformed the stock market over the last twenty years in terms of the resources it has amassed. I then think that individual decisions about giving now vs later should largely be driven by whether the best identifiable marginal opportunities are still good investments

It seems to me that EAs who have pursued more high-risk entrepreneurial activities have got substantially higher mean returns (see Mathieu Putz's post). So it's not clear to me that index investing should be seen as the best or default option.

I'm not quite sure what point you're trying to make. I haven't commented on how people should earn money, although I agree that we should be willing to take on more risk for more gain. I'm saying that once we've earned the money, it might then make a lot of sense to invest it in a long-term fund such as the Patient Philanthropy Fund. Or are you saying it would make sense to use that money to fund further high-risk entrepreneurial activities? That is an interesting idea, although I suppose such ideas will eventually dry up or hit diminishing returns.

longtermis[t community-building and research] (broadly understood) has vastly outperformed the stock market over the last twenty years in terms of the resources it has amassed. I then think that individual decisions about giving now vs later should largely be driven by whether the best identifiable marginal opportunities are still good investments

I agree we should look at the best marginal opportunities. AppliedDivinityStudies and Ben Todd appear to think that these aren't as good as they once were, i.e. that we have hit diminishing returns.

On the point about community building and research outperforming the stock market - I would like to see some sort of quantification of this rather than just an assertion. I'm not saying it's wrong, I'm just unsure how to evaluate that claim. Also, how sure can we be that such a trend would continue?
 

Or are you saying it would make sense to use that money to fund further high-risk entrepreneurial activities?

Yes.

On the point about community building and research outperforming the stock market - I would like to see some sort of quantification of this rather than just an assertion.

In any event, I think it's a relevant post which would be good to mention in this context. And fwiw I agree with Owen's estimate that it's substantially outperformed the stock market in the past. E.g. it has arguably led to many billions of dollars getting dedicated to longtermist causes.

Thanks. I agree Owen's post is relevant. 

This post is also very relevant. Especially this part, which reflects an update away from ETG:

Despite these caveats, the model has produced at least one important update for us. As the stock of EA capital has grown more quickly than the stock of EA labor, it has been widely claimed that earning to give is less valuable, relative to direct work, than it used to be. On a March 2020 episode of the 80,000 Hours podcast, Phil had argued that this claim was mistaken, on the grounds that the EA "capital to labor ratio" should simply be expected to fluctuate over time, suggesting that we had no reason to expect a long-run trend in either direction. Earning to give is thus still highly valuable, he argued, in light of the opportunity to invest for a time in which EA projects are again more capital-constrained. The results of our model suggest to us that this particular argument for earning to give was incorrect. It is at least plausible that, relative to direct work, earning to give has indeed grown less valuable, and—temporary fluctuations notwithstanding—will continue to do so.

To what extent do you believe Investing to Give is better than Direct Work because we're not working on exactly the right problems/solutions, vs because "you just have more money"?

Because if the argument relies on the latter - on producing 9x more money than regular Earning To Give - surely the question is "At what level of income is it better to ETG than to work on direct cause areas?" I think this is especially relevant because of how scalable and fungible cold hard cash is. I.e. if one donates 14 billion USD, they are donating the equivalent of 1.4 million regular people (who each donate 10,000 USD a year). Considering this has already happened, and we don't (yet) have 1.4 million people earning to give, it provides strong practical evidence for this mechanism of scale. However, labour is likely harder to scale. Hence the funding overhang.
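
For concreteness, here is the division behind the 1.4 million figure (the $14B and $10k/year numbers are the comment's illustrative assumptions):

```python
# Sanity check of the arithmetic above: one $14B donation equals one
# year of giving from 1.4 million people donating $10k each.
large_donation = 14_000_000_000  # USD; a single very large donor
typical_donation = 10_000        # USD per year from a "regular" ETGer
print(f"{large_donation / typical_donation:,.0f} donor-years")  # 1,400,000
```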

I appreciate I am not saying anything new here, but I don't see any important distinction between being a high-earning ETGer who donates in the short term, and being a median-income investor-to-give.
