People hoping to do the most good with their giving face a tradeoff between (a) giving now and (b) investing to give more later. If the giver doesn't expect to learn anything over time about the best places to give, the question of when to give roughly boils down to whether the interest rate at which they could invest exceeds the "charitable discount rate": the rate at which doing a unit of good is growing more costly.
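To make the comparison concrete, here is one way to formalize that rule (a sketch only; the notation is introduced here just for illustration):

```latex
% Let r   = interest rate available to the giver,
%     g   = "charitable discount rate" (growth rate of the dollar
%           cost of doing a unit of good),
%     c_0 = today's dollar cost of a unit of good.
% A dollar given now buys 1/c_0 units of good; invested for a year and
% then given, it buys (1+r)/((1+g) c_0) units. So, ignoring learning:
\text{give later} \iff \frac{1+r}{(1+g)\,c_0} > \frac{1}{c_0} \iff r > g.
```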
For me, and I think for pretty much any (utilitarian-leaning) aspiring effective altruist right now, the "learning" consideration should swamp all else. As far as I can tell, we currently lack any good way to forecast the long-run impacts of our actions, and given that ignorance, I think all of our attempts at charity are about as likely to do harm as to do good. But more people are starting to think about the problem seriously, and there is at least a sliver of hope that progress will be made over the coming years or decades. In the meantime, there is nothing to do but invest and wait—or, perhaps, fund better prioritization research.
But if your goal as a philanthropist is to spend money at time t to increase welfare around time t, and more complex funding opportunities are not under consideration, it seems to me that there's a good a priori reason, which I haven't seen expressed elsewhere, to think it's usually better to give later (even if we ignore the cluelessness/learning issue). The reason is that market interest rates are set in part by people's "rates of pure time preference" (RPTP). If investors were perfectly patient—i.e. if they sought to maximize the sum of their own non-discounted welfare over the course of their lives—an equilibrium interest rate of 7% would mean that a unit of welfare next year was projected to cost about 7% more, in dollar terms, than a unit of welfare this year. But investors are not perfectly patient; they discount their future welfare at some positive rate. If that rate is, say, 2%, then indifference at a 7% return implies that the rate at which the cost of welfare is rising (call it R) is only 5%. Philanthropists with zero RPTP can therefore do about 2% more good for others by investing at 7% and giving next year.
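A back-of-the-envelope version of that calculation (again illustrative notation: r is the market interest rate, ρ the marginal investor's RPTP, and R the growth rate of the dollar cost of welfare):

```latex
% An investor with pure time preference rho is indifferent when the
% discounted welfare bought by (1+r) dollars next year equals the
% welfare bought by one dollar now:
%   (1+r) / ((1+R)(1+rho)) = 1,  i.e. approximately  r = R + rho.
R \approx r - \rho = 7\% - 2\% = 5\%.
```

So a philanthropist with ρ = 0 who invests at 7% buys 1.07/1.05 ≈ 1.02 times as much welfare by giving next year: roughly the 2% gain claimed above.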
This is not a small concern. Some recent literature on discounting, for instance, has observed that the "near-zero social discount rate" reasoning usually used to justify extensive action against long-term risks like climate change also implies a need to promote investment in general, with optimal capital gains subsidies of as much as 50% (financed by correspondingly high taxes on present consumption).
Perhaps the cost of welfare is growing more quickly for some populations than others, and perhaps some of those populations are currently top contenders for our charity. For instance, perhaps the cost of helping the world's poorest is rising more quickly than 7% per year, as is sometimes claimed, due to the particularly fast progress being made in global development. (Scott Alexander reports Elie Hassenfeld basically making this point a few years ago.) If this is true, then indeed, we would do less good giving next year than giving this year.
But this one-year relationship must be temporary. Over the course of a long future, the rate of increase in the cost of producing a unit of welfare as efficiently as possible cannot, on average, exceed R. Otherwise, the most efficient way to do good would eventually be more costly than one particular way to do good: just giving money to ordinary investors for their own consumption. And since the long-run average rate of increase in the cost of welfare is bounded above by R ("5%"), investing at R + RPTP ("7%") must eventually result in an endowment able to buy more welfare than the endowment we started with.
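A minimal numeric sketch of that compounding claim (Python; the 7% and 5% are just the illustrative figures from above, and nothing depends on them beyond r exceeding R):

```python
# Illustrative sketch: an endowment compounds at r = R + RPTP = 7%,
# while the dollar cost of a unit of welfare grows at (at most) R = 5%.
r, R = 0.07, 0.05
endowment = 1.0  # dollars
cost = 1.0       # dollars per unit of welfare
for year in range(50):
    endowment *= 1 + r
    cost *= 1 + R

# Welfare purchasable has grown by (1.07/1.05)^50, about 2.6x:
print(endowment / cost)
```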
At first glance, this leads to the paradoxical conclusion that we should invest forever and never give. The resolution of this paradox is that, one way or another, the opportunity to invest at a rate of R + RPTP will eventually not be available. There are various reasons an endowment could come to lose this opportunity. It could face idiosyncratic constraints (e.g. the money is going to be seized in a few years). Or, with increasing wealth, investment opportunities could dry up in general (depending on how utility decreases with consumption, people might grow so rich that the only projects worth investing in would be those that earned extremely high returns, and eventually there might be too few such projects to absorb the whole endowment). Or investment opportunities could vanish for other reasons, such as the impending end of the world. In the last case, if the other constraints don't bind, the best thing to do is to invest until the last day and then perform one massive act of charity.
But under ordinary circumstances, to a first approximation, if a philanthropist's plan is to spend his money at some time t to increase consumption-based welfare as efficiently as possible at time t—and if these considerations are not swamped by others beyond the scope of this post, such as the risk of value drift—then it seems the philanthropist should wait.
[Edited 4 Nov. 2018 [1] to include the link to Elie Hassenfeld making the point about vanishing giving opportunities in global poverty, and [2] to weaken the last sentence so that it emphasizes the limited scope of this post. Edited 7 Nov. 2018 [3] to point out this limited scope earlier on, so that it's clear that this argument doesn't apply to research funding.]
I'm not clear on how RPTP fits into a general understanding of financial returns on investment. Clearly your RPTP matters, and if you have a lower RPTP than most people, that makes investing look relatively better for you. But why don't, say, financial advisors ever talk about this? Advisors largely make investment recommendations based on clients' risk tolerance, which is unrelated to RPTP.
This is a good point. In general I think the hypothesis that people don't actually have positive RPTP (in contradiction to the received wisdom from most of the economics literature on this) is the most likely way that my argument fails. In particular, I'm aware of some papers (e.g. Gabaix and Laibson 2017) that argue that what looks like discounting might usually be better explained by the fact that future payoffs just come with more uncertainty.
I currently think the balance of evidence is that people do do "pure discounting". Defending that would be a long discussion, but at least some evidence (e.g. Clark et al. 2016) suggests that pure impatience is a thing, and explains more of the variation in, for example, retirement saving behavior than risk tolerance does.
In response to your particular argument that if RPTP is a thing it's weird that financial advisers don't usually ask about it: I agree, that's interesting evidence in the other direction. One alternative explanation that comes to mind on that front, though, is that, while advisers don't ask for the RPTP number explicitly, they do ask questions like "how much do you want to make sure you have by age 65?" whose answers will implicitly incorporate pure time preference.
I still think that the original post isn't quite clear enough about its limited scope, but having read the other comments, I'd like to give a meta-compliment to the author for being willing to lay out a piece that makes just a single point.
Intellectual progress often relies on having a library of concepts available for easy reference, and for combining with other concepts. There are lots of reasons that RPTP applies to no one or almost no one in this strict form, but there are benefits to writing like this:
I like long, integrated collections of ideas, but they're hard to follow and hard to criticize productively. ("I disagree with points 1, 3, and 6, and your claimed interaction between points 2 and 5, and have you considered that point 9 breaks down at extreme values of Q?") Give me a series of short, single-idea posts, and I'll have a much easier time working them into a model.
Thanks!
And if you have any particular ways you think this post still overstates its case, please don't hesitate to point them out.
I don't have a direct source for the argument that you said Elie Hassenfeld made, but I do have a quote from Scott Alexander (http://slatestarcodex.com/2013/04/05/investment-and-inefficient-charity/), who went to a live event at which Elie made this argument:
Thanks! I've edited the post to include a link to that article.
Regarding learning value being dominant (which seems plausible), you make one concrete recommendation:
Do you think we could instead spend money on prioritization research? Some actors relevant to this space include
Some of these groups are clearly very well-funded. However, some have raised funds recently, suggesting that they believe marginal funds will accelerate prioritization work. Reasonable people can disagree about this, and it seems like a key point that would need to be resolved for the recommendation to hold.
Yes, I agree with this wholeheartedly: there are ways to put money to use now accelerating the research process, and those might well beat waiting. In fact (as I should have been far clearer about throughout this post!) this whole argument is really just directed at people who are planning to "spend money at some time t to increase welfare as efficiently as possible at time t".
I'm hoping to write down a few thoughts soon about how one might think about discounting if the money will be spent on something else, like research or x-risk reduction. For now I'll edit the post to make caveats like yours explicit. Thanks.
If you have any confident thoughts, I'd be interested to hear which funding opportunities in the space seem most promising to you. (i.e. my question above was not rhetorical.) In particular, it is not obvious to me where the funding gaps are, and you seem likely to be better placed to know.
Also, I think there are some considerations which are 1-3 orders of magnitude more important than RPTP. Your post prompted me to write up a simple quantitative comparison of factors. Would you be interested in discussing/commenting at some point?
My current best guess happens to be that there aren't great funding opportunities in the "priorities research" space (for a point of reference, GPI is still sitting on cash while it decides which economist(s) to recruit), but that there will be better funding opportunities over the next few years, as the infrastructure gets better set up and as the pipeline of young EA economists starts flowing. For example, I'd actually be kind of surprised if there weren't a "Parfit Institute" (or whatever it might be called) writing policy papers in DC next door to Cato and Heritage and all the rest, in a decade or two. So at the moment I'm just holding out for opportunities like that. But if you have ideas for funding-constrained research right now, let me know!
And sure, I'd love to discuss/comment on that write-up!
You didn't mention anything about (a) the risk of becoming less altruistic in the future, (b) increasing your motivation to learn more about effective giving by giving now, or (c) supporting the development of a culture of effective giving. How much the giver learns over time isn't the only consideration. In listing these other considerations, I'm drawing on this forum post: http://effective-altruism.com/ea/4e/giving_now_vs_later_a_summary/.
That's right: I agree that there are many other considerations one must weigh in deciding when to give. In this post, I only meant to discuss the RPTP consideration, which I hadn't seen spelled out explicitly elsewhere. But thanks for pointing out that this was unclear. I've weakened the last sentence to emphasize the limited scope of this post.
What is the frame of reference / underlying units for the percentages you are referring to? It makes a big difference whether they are monetary vs. utility, USD vs. EUR, real vs. nominal, etc. When you look at real-life data, both historical and implied for the future, it is clear that time preference (i.e. real risk-free returns) is pretty neutral: sometimes you end up with less in real terms and sometimes with more.
As Michael alluded to, I would expect that the primary explanation for positive real rates of return is that people are risk averse. I don't think this changes the conclusion much; qualitatively, the rest of the argument would still follow, though the math would be different.
That indifference holds only for the people who actually are indifferent at a rate of 7%. I would expect that people in extreme poverty and factory-farmed animals don't usually make this choice, so this argument says nothing about them. Similarly, most people don't care about the far future in proportion to its size, so you can't take their choices about it as much evidence.
Because of this, I would take the stock market + people's risk aversion as evidence that investing to give later is probably better if you are trying to benefit only the people who invest in the stock market.
I think risk-aversion and pure time preference are most likely both at play—I say a few more words about this in my response to Michael above—but yeah, fair enough.
With regard to your second point: I thought I was addressing this objection with,
My point here is that, sure, maybe for farm animals, people in extreme poverty, and so on, the cost of helping them is currently growing more expensive at some rate greater than R (so, >5% per year, if R = 5%). But since the cost of helping a typical stock market investor is only growing more expensive at R ("5% per year"), eventually the curves have to cross. So over the long run, the cheapest way of "buying a unit of welfare" seems to be growing at a rate bounded above by R.
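A toy version of the crossing, with made-up numbers (suppose helping the poor starts ten times cheaper, but its cost grows at 8% while the investors' grows at R = 5%):

```python
# Hypothetical numbers, purely to illustrate that the curves must cross.
cost_poor = 1.0       # dollars per unit of welfare, world's poorest
cost_investor = 10.0  # dollars per unit of welfare, typical investor
g_poor, g_investor = 0.08, 0.05  # cost growth rates: g_poor > R = g_investor

year = 0
while cost_poor < cost_investor:
    cost_poor *= 1 + g_poor
    cost_investor *= 1 + g_investor
    year += 1

# After the crossing (~80 years with these numbers), the cheapest way
# to buy a unit of welfare grows at no more than R.
print(year)
```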
Does that make sense, or am I misunderstanding you?
I see; that makes more sense. Yeah, I agree that that paragraph addresses my objection; I don't think I understood it fully the first time around.
My new epistemic status is that I don't see any flaws in the argument, but it still seems fishy: it seems strange that an assumption as weak as the existence of even one investor implies that you should save.