The Philanthropist’s Paradox


TL;DR. Many effective altruists wonder whether it's better to give now or invest and give later. I’ve realised there is an additional worry for those who (like me) are sceptical of the value of the far future. Roughly, it looks like such people are rationally committed to investing their money and spending it in the future (potentially, at the end of time) even though they don't think this will do any good and they can see this whole problem coming. I don't think I've seen this mentioned anywhere else, so I thought I'd bring it to light. I don’t have a resolution, which is why I call it a paradox.

Setting a familiar scene: should you give now or invest and give later?

You're thinking about giving money to charity because you want to do some good. Then someone points out to you that, if you invested your money, it would grow over time and therefore you'd be able to do more good overall. You don't believe in pure time discounting - i.e. you don't think 1 unit of happiness is morally worth more today than it is tomorrow - so you invest.

As you think about this more, you realise it’s always going to be better to keep growing the money instead of spending it now. You set up a trust that runs after your death and tell the executors of the trust to keep investing the money until it will do as much good as possible. But when does the money get spent? It seems the money keeps on growing and never gets given away, so your investment ends up doing no good at all. Hence we have the philanthropist’s paradox.

How to resolve the philanthropist’s paradox?

There are lots of practical reasons that might push you one way or the other: if you don't give now you'll never actually make yourself give later; there are better opportunities to give now; you'll know more later, so it’s better to wait; the Earth might get destroyed, so you should give sooner; and so on. I won't discuss these as I'm interested in the pure version of the paradox that leads to the conclusion you should give later.[1]

What’s the solution if we ignore the practical concerns? One option is to note that, at some stage, you (or your executors) will have enough money to solve all the world’s problems. At that point, you should spend it, as there’s no value in growing your investment further. This won’t work if the financial cost of solving the world’s problems keeps growing and grows faster than your investment. However, if one supposes the universe will eventually end – all the stars will burn out at some point – then you will eventually reach a stage where it’s better to spend the money. If you wait any longer there won’t be any people left. This might not be a very satisfactory response, but then it is called a ‘paradox’ for a reason.

A new twist for those who aren’t convinced about the value of the far future

The above problem implicitly assumed something like totalism, the view on which the best history of the universe is the one with the greatest total of happiness. If you’re a totalist, you will care about helping those who will potentially exist in millions of years.

However, totalism is not the only view you could take about the value of future people. We might agree with Jan Narveson who stated “we are in favour of making people happy, but neutral about making happy people”[2]. Views of this sort are typically called 'person-affecting’ (PA) views.

There isn’t a single person-affecting view, but a family of them. I’ll quickly introduce them before explaining the new version of the paradox they face. The three most common person-affecting theories are:

Presentism: the only people who matter are those who presently exist (rather than those who might or will exist in the future)

Actualism: the only people who matter are those who actually, rather than merely possibly, exist (this means future actual people do count)

Necessitarianism: the only people who matter, when deciding between a set of outcomes, are those people who exist in all the outcomes under consideration. This is meant to exclude those whose existence is contingent on the outcome of the current decision.

Each of these views captures the intuition that creating some new person is not good: that person does not presently, actually, or necessarily exist. I won’t try to explain why you might like these views here (but see this footnote if you’re interested).[3]

I should note you could also think the far future doesn’t matter (as much) because you believe in pure time discounting (e.g. 1 unit of happiness next year is morally worth 98% of one unit of happiness this year). Whether you give now or later, if you endorse pure time discounting, just depends on whether the percentage annual increase in your money is higher or lower than the percentage annual decrease in the moral value of the future. I don’t think pure time discounting is particularly plausible, but discussing it is outside the scope of this essay.[4]
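To make the comparison concrete, here is a minimal sketch of the give-now-versus-give-later calculation under pure time discounting. The 5% growth and 2% discount figures are illustrative assumptions of mine, not figures from the essay (the essay's "worth 98% next year" corresponds to a 2% discount):

```python
def value_in_todays_units(years, growth=0.05, discount=0.02):
    """Moral value, measured in today's units, of investing for `years`
    at `growth` per year when one unit of happiness loses `discount` of
    its moral worth each year (pure time discounting).
    Illustrative rates only."""
    return (1 + growth) ** years * (1 - discount) ** years

# Give later beats give now (value > 1) exactly when the growth rate
# outpaces the discount rate:
print(value_in_todays_units(10))               # growth 5% > discount 2%
print(value_in_todays_units(10, growth=0.01))  # growth 1% < discount 2%
```

So, as the paragraph above says, the verdict just depends on which of the two percentages is bigger.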

The (Person-Affecting) Philanthropist’s Paradox, a tale of foreseeable regret

I’ll come back to other person-affecting views later, but, for now, suppose you’re a presentist philanthropist, which means you just care about benefitting currently existing people, and you found the ‘give later’ argument convincing. What should you do?

Option 1: You could give away your money now in 2017. Say that will bring about 100 units of happiness.

Option 2: You could invest it. Following the logic of the paradox you put the money in trust and it doubles every 50 years or so. Now, after 200 years, in 2217, your investment can do 16 times more good.
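The 16x figure in Option 2 is just compound growth with a 50-year doubling time (the doubling time is the example's own assumption):

```python
def growth_multiplier(years, doubling_time=50):
    """Factor by which the invested donation has grown after `years`,
    assuming it doubles every `doubling_time` years (the example's
    assumed rate)."""
    return 2 ** (years / doubling_time)

print(growth_multiplier(200))  # 200 years = 4 doublings = 16x
```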

We can feel a problem coming. The presentist judges outcomes by how they affect presently existing people. Assuming that no one alive at 2017 is also alive in 200 years, at 2217, nothing that happens at 2217 can count as good or bad from the perspective of a decision made at 2017. So, although we might have thought waiting until 2217 and giving later would do more good, it turns out the presentist should think it does no good at all.

Realising the trap awaiting him, what should the presentist do? What he could do is invest the money for just 50 years before giving it away (assume he’s a young philanthropist). This allows him to double his donation. Let’s assume he can use the money at 2067 to benefit only people who were presently alive at 2017. This is clearly superior to giving now, at 2017, as he has less money now than he would have at 2067. Remember, presentism doesn’t entail pure time discounting: a presentist can be neutral about giving someone a painkiller now versus giving a painkiller to that same person in 50 years’ time. Why? That person presently existed at the time the decision was taken. Hence providing twice as many benefits at 2067 rather than 2017, given they go to the same people, is twice as good.

Yet now we find a new oddness. Suppose those 50 years have passed and the presentist is now about to dole out his investment. The presentist pauses for a moment and thinks “how can I most effectively benefit presently existing people?” He’s at 2067 and there is a whole load of new presently-existing people. They didn’t exist at 2017, it’s true, but the presentist is presently at 2067 and is making a decision on that basis. Now the presentist finds himself facing exactly the same choice at 2067 that he faced at 2017: whether to give now or give later.

All the same logic applies, so he decides, once again, that he should give later. Knowing he won’t live forever he puts the money in a trust and instructs the executors to “most effectively benefit those who will presently exist at 2117”. But this situation will recur. Every time he (or rather, his executors) considers whether to give now or give later, it will always do more good, on presentism, to invest with a view to giving later. This leads to a series of decisions that means the money ends up being donated in the far future (at the death of the universe), at which point none of the people who presently existed at 2017 will be alive. Thus the donation ends up being valueless, whereas if he’d just donated immediately, in 2017, he would have done at least some good.

It’s worth noting the difference between the presentist case and the earlier, totalist one. It might seem strange that the totalist should wait so long for his money to be spent, but at least this counted as doing a lot of good on totalism. The presentist runs through the same rational process as the totalist and also gives away his money at the end of time, but this counts as doing no good at all on presentism. Further, the presentist could foresee he would later choose to do things he currently (at 2017) considers to have no value. Hence the presentist has an extra problem in this paradox.

What should the presentist do?

One thing the presentist might do is pre-commit himself to spending his money at some later time. Here, he faces a trade-off. He knows that, if he invests the money, it will grow at X% a year. He also knows that the people who presently exist will all eventually die. Say 1% of the Earth’s population who are alive at 2017 will die each year (assume this doesn’t include him; he’s immortal, or something, for our purposes). Hence at 2067 half of them are alive. At 2117 they’ve all died and, from the perspective of 2017, nothing that happens can now be good or bad. Let’s assume he works this out and realises the most good he can do is by legally binding himself at 2017 to spend the money he’ll have at 2067.
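The trade-off can be sketched numerically. This is a toy model using the example's figures (money doubling every 50 years, a linear 1%-per-year die-off of the 2017 population); note the optimal pre-commitment date is quite sensitive to the two assumed rates, and with these exact numbers it comes out a couple of decades before 2067:

```python
def presentist_value(years, doubling_time=50, death_rate=0.01):
    """Toy model: the presentist value of giving `years` after 2017 is
    the grown donation times the fraction of the 2017 population still
    alive then (linear die-off, hitting zero after 100 years)."""
    money = 2 ** (years / doubling_time)
    alive = max(0.0, 1.0 - death_rate * years)
    return money * alive

# Search for the waiting period (0-100 years) that maximises value.
best = max(range(101), key=presentist_value)
print(best, presentist_value(best))
```

Waiting the full 100 years is worthless (everyone from 2017 is dead), and giving immediately forgoes all growth; the optimum sits in between.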

This seems to solve the problem, but there is something weird about it. When 2067 rolls around, there’ll be new people who presently exist. He’ll be annoyed at his past self for tying his hands, because what he, at 2067, wants to do is invest the money for just a bit longer to help them. I don’t have a better solution than this, but I would welcome someone suggesting one.

We could consider this a reductio ad absurdum against presentism, a fatal problem that causes us to abandon it. I’m not sure it is – a point I’ll come back to at the end – but it does seem paradoxical.[5] If it is a reductio, it isn’t uniquely a problem for presentism either: necessitarianism faces a similar kind of problem. I won’t discuss actualism because, for reasons not worth getting into here, actualism isn’t action-guiding.[6]

Why necessitarian philanthropists face the same problem

Necessitarians think the only people that matter are those who exist in all outcomes under consideration, hence we exclude the people whose existence is contingent on what we do.

As Parfit and others have noted, it looks like nearly any decision will eventually change the identity of all future people.[7] Suppose the necessitarian philanthropist decides not to spend his money now, but to invest it instead. This causes the people who would have benefitted from his money, had he spent it now, to make slightly different decisions. Even tiny decisions will ripple through society, causing different people to meet and conceive children at very slightly different times. As your DNA is a necessary condition of your identity, this changes the identities of future people, who become contingent people.

This means the necessitarian has a similar difficulty in valuing the far future as the presentist, albeit for different reasons. To a presentist there’s no point benefitting those who will live in 10,000 years because such people do not presently exist. To a necessitarian there’s no way you could benefit people in 10,000 years’ time, no matter how hard you try, because whatever you do will change who those people are (hence making them the non-necessary people who don’t matter on the theory).

To illustrate, suppose you want to help the X people, some group of future humans who you know will get squashed by an asteroid that will hit the Earth in 10,000 years’ time. You decide to act – you raise awareness, build a big asteroid-zapping laser, etc. – but your actions affect who gets born, meaning the Y people get created instead of the original X people. On necessitarianism it’s not good for the X people if they are replaced with the Y people, nor is it good to create the Y people (it’s never good for anyone to be created).

Hence, given that all actions eventually change all future identities, necessitarians should accept there’s a practical limit to how far in the future they can do good.[8] There might be uncertainty about what this limit is.[9] However, just as the presentist should worry about acting sooner rather than later because the number of presently existing people will dwindle the further from 2017 his money gets used, so the necessitarian will find himself with an effective discount rate (even though he doesn’t engage in pure time discounting): his choice to give now or give later causes different people to be born. If he invests for 50 years and then gives that money to a child who, at 2067, is aged 10, that child’s existence is presumably contingent on his investing the money. As the necessitarian discounts contingent people, he cannot claim that investing and then using that money to benefit a contingently existing child is good. This is analogous to the presentist in 2017 realising there’s no point saving money to give to a 10-year-old in 2067, because that 10-year-old does not, in 2017, presently exist.

What can the necessitarian do to avoid the paradox?

I can think of one more move the necessitarian could make. He could argue his investing the money doesn’t change any identities of future people, so it really is better, on his theory, to invest it for many years.

This is less helpful than it first appears. If investing makes no difference to who gets born, presumably the necessitarian is now back in the same boat as the totalist: both agree it’s best to keep growing the cash until the end of time. The problem is that one of the things the necessitarian’s view seems to commit him to is believing we can’t help people in the far future, because anything we do will alter all the identities. He’s in a bind: he can’t simultaneously believe far future people don’t matter and that his investment does the most good if it’s spent in the far future.

All this is to say that person-affecting views face an additional twist to the philanthropist’s paradox. These aren’t to be waved away as theoretical fancies: there are real-world philanthropists who appear to have person-affecting views: they don’t care about the far future and they think it’s better to make people happy rather than to make happy people. If they want to do the most good with their money, this is a paradox they should find a principled response to when they consider whether to give now or give later.

Epilogue: a new reason not to be a person-affecting philanthropist?

Should we give up on person-affecting views because of this paradox? Maybe, but I doubt it. Two thoughts. First, it’s a well-established fact that all views in population ethics have weird outcomes. The bar for ‘plausible theories’ is accepted to be pretty low.[10] I can imagine an advocate of presentism or necessitarianism acknowledging this as just another bullet to bite while still holding that his theory is, all things considered, the least-worst one.

Second, it’s not clear to me exactly where the problem lies. I’m unsure whether this should be understood as a problem for rationality (making good decisions), axiology (what counts as ‘good’), or perhaps the way they are linked.[11] Perhaps what person-affecting theories need is an account of why you should (or shouldn’t) be able to foreseeably regret your future decisions.



[1] See this summary for a note on the practical concerns:

[2] Jan Narveson, “Moral Problems of Population,” The Monist, 1973, 62–86.

[3] A key motivation for PA views is the person-affecting restriction (PAR): one state of affairs can only be better than another if it is better for someone. This is typically combined with existence non-comparativism: existence is neither better nor worse for someone than non-existence. The argument for existence non-comparativism most famously comes from John Broome, who puts it:

...[I]t cannot ever be true that it is better for a person that she lives than that she should never have lived at all. If it were better for a person that she lives than that she should never have lived at all, then if she had never lived at all, that would have been worse for her than if she had lived. But if she had never lived at all, there would have been no her for it to be worse for, so it could not have been worse for her.

I won’t motivate them or critique them further. My objective here is just to indicate a problem for them.

[4] As Greaves puts the argument against pure time discounting: “But of course (runs the thought) the value of utility is independent of such locational factors: there is no possible justification for holding that the value of (say) curing someone’s headache, holding fixed her psychology, circumstances and deservingness, depends upon which year it is.” From Greaves, H., ‘Discounting and public policy: A survey’. Conditionally accepted at Economics and Philosophy.

[5] By ‘paradoxical’ I mean that seemingly acceptable premises and seemingly acceptable reasoning lead to seemingly unacceptable conclusions.

[6] How good an outcome is depends on which outcome you choose to bring about, so you can’t know what you should do until you already know what you’re going to do. Actualists might respond that this is the best we can do.

[7] See Reasons and Persons (1984).

[8] As an example, if a necessitarian put nuclear missiles on the Moon set to explode in 5,000 years’ time, and an alien spaceship happens to appear in 5,000 years, the necessitarian will admit he’s (unwittingly) made things worse. A presentist will (oddly) claim this is not bad, on the assumption the Moon-visitors haven’t yet been born. However, if the Moon-visitors were presently alive when the missiles were put on the Moon, the presentist would say the outcome is bad.

[9] For instance, we might think it takes some months or years before my choosing to buy tea rather than coffee at the supermarket changes the identities of all future people. If you doubt that actions could change who gets born, ask yourself whether you think you would have been born if World War One had never occurred.

[10] For a list of some problems see Hilary Greaves, “Population Axiology,” Philosophy Compass, 2017.

[11] In recent discussion, Patrick Kaczmarek informs me I’m absolutely mistaken to think it can be a problem with decision theory, and helpfully suggested the issue might be the bridging principle between one’s axiology and one’s decision theory.