#4 in my series of excerpts from Questioning Beneficence: Four Philosophers on Effective Altruism and Doing Good.[1]

How much should we care about future people? Total utilitarians answer, “Equally to our concern for presently-existing people.” Narrow person-affecting theorists answer, “Not at all”—at least in a disturbingly wide range of cases.[2] I think the most plausible answer is something in-between.

Person-Directed and Impersonal Reasons

Total utilitarianism is the view that we should promote the sum total of well-being in the universe. In principle, this sum could be increased by either improving people’s lives or by adding more positive lives into the mix (without making others worse off). I agree that both of these options are good, but it seems misguided to regard them as equally good. If you see a child drowning, resolving to have an extra child yourself is not (contra total utilitarianism) an adequate substitute for saving the existing child. In general, we’re apt to think, we have stronger reasons to make people happy than to make happy people.

On the other hand, the narrow person-affecting view can seem disturbing and implausibly extreme in its own way. Since it regards happy future lives as a matter of moral indifference, it implies that—if it would make us happier—it’d be worth preventing a future utopia by sterilizing everyone alive today and burning through all the planet’s resources before the last of us dies off. Utopia is no better than a barren rock, on this view, so if faced with a choice between the two, we’ve no moral reason to sacrifice our own interests to bring about the former.

Our own value—and that of our children—is seen as merely conditional: given that we exist, it’s better to make us better-off, just as, if you make a promise, you had better keep it. But there’s no reason to make promises just in order to keep them: kept promises are not in themselves or unconditionally good. And narrow person-affecting theorists think the same of individual persons. Bluntly put: we are no better than nothing at all, on this bleak view.

Fortunately, we do not have to choose between total utilitarianism and the narrow person-affecting view. We can instead combine the life-affirming aspects of total utilitarianism with extra weight for those who exist antecedently. On a commonsense hybrid approach, we have both (1) strong person-directed reasons to care especially about the well-being of antecedently existing individuals, and (2) weaker impersonal reasons to improve the world by bringing additional good lives into existence. When the amount of value at stake is sufficiently large, even reasons of the intrinsically weaker kind may add up to be very significant indeed. This can explain why avoiding human extinction should be a very high priority on a wide range of reasonable, life-affirming views, without depending on anything as extreme as total utilitarianism.
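To get a feel for how intrinsically weaker reasons can still dominate at scale, here is a crude toy model (purely illustrative; the weights and numbers are not drawn from the text): count the well-being of antecedently existing people at full weight, and the well-being of additional good lives at some heavily discounted weight k between 0 and 1. Even at k = 0.01, a future containing $10^{12}$ flourishing lives would contribute on the order of

$$0.01 \times 10^{12} = 10^{10}$$

units of value, which swamps most near-term stakes. The discount preserves the priority of existing people in ordinary cases, while still making extinction-level losses enormously weighty.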

In Defense of Good Lives

There are three other common reasons why people are tempted to deny value to future lives, and they’re all terrible. First, some worry that we could otherwise be saddled with implausible procreative obligations. Second, some think that it allows them to avoid the paradoxes of population ethics. And, third, some are metaphysically confused about how non-existent beings could generate reasons. Let’s address these concerns in turn.

Imagine thinking that the only way to reject forced organ donation was to deny value to the lives of individuals suffering from organ failure. That would be daft. Commonsense morality grants us strong rights to bodily integrity and autonomy. However useful my second kidney may be to others, it is my body, and it would be supererogatory—above and beyond the call of duty—for me to give up any part of it for the greater good of others.

Now, what holds of kidneys surely holds with even greater stringency of uteruses, as being coerced into an unwanted pregnancy would seem an even graver violation of one’s bodily integrity than having a kidney forcibly removed. So recognizing the value of future people does not saddle us with procreative obligations, any more than recognizing the value of dialysis patients saddles us with obligations to donate our internal organs. Placing our organs in service to the greater good is above and beyond the call of duty. This basic commitment to bodily autonomy can survive whatever particular judgments we might make about which lives contribute to the overall good. It does not give us any reason to deny value to others’ lives, including future lives.[3]

The second bad argument begins by noting the paradoxes of population ethics, such as Parfit’s “Mere Addition Paradox,” which threatens to force us into the “Repugnant Conclusion” that any finite utopian population A can be surpassed in value by a sufficiently larger population Z of lives that are barely worth living. Without getting into the details, the mere addition paradox can be blocked by denying that good lives are absolutely good at all, and instead regarding different-sized populations as incomparable in value.

But this move ultimately avails us little, for two reasons: (1) it cannot secure the intuitively desirable result that the utopian world A is better than the repugnant world Z; and (2) all the same puzzles about quantity-quality tradeoffs can re-emerge within a single life, where it is not remotely plausible to deny that “mere additions” of future time can be of value or increase the welfare value of one’s life. Since we’re all committed to addressing quantity-quality tradeoffs within a life, we might as well extend whatever solution we ultimately settle upon to the population level too. So there’s really no philosophical gain to temporarily dodging the issue by denying the value of future lives.

The third argument rests on a simple confusion between absolute and comparative disvalue. Consider Torres:

> [T]here can’t be anything bad about Being Extinct because there wouldn’t be anyone around to experience this badness. And if there isn’t anyone around to suffer the loss of future happiness and progress, then Being Extinct doesn’t actually harm anyone.

I call this the ‘Epicurean fallacy,’ as it mirrors the notorious reasoning that death cannot harm you because once you’re dead there’s no longer anyone there to be harmed. Of course, death is not an absolutely bad state to be in (it’s not a state that you are ever in at all, since to be in a state you must exist at that time). Death’s badness is instead comparative: though intrinsically neutral, it leaves you worse off than the alternative of continued positive existence. And so it goes at the population level: humanity’s extinction, while absolutely neutral, would be awful compared to the alternative of a flourishing future containing immensely positive lives (and thus value). If you appreciate that death can be bad—even tragic—then you should have no difficulty appreciating the metaphysical possibility that extinction could be even more so. (Though we can imagine worse things than extinction, just as we can imagine worse fates than death.)

An Agnostic Case for Longtermism in Practice

William MacAskill defines Longtermism as “the idea that positively influencing the longterm future is a key moral priority of our time.” After all, the future is vast. If all goes well, it could contain an astronomical number of wonderful lives. If it goes poorly, it might soon contain no lives at all—or worse, overwhelmingly miserable, oppressed lives. Because the stakes are so high, we have extremely strong moral reasons to prefer better long-term outcomes.

That in-principle verdict strikes me as difficult to deny. The practical question of what to do about it is much less clear, because it may not be obvious what we can do to improve long-term outcomes. But longtermists suggest that there is at least one clear-cut option available, namely: research the matter further. Longtermist investigation is relatively cheap, and the potential upside is immense. So it seems clearly worthwhile to look more into the matter.

MacAskill himself suggests two broad avenues for securing positive longterm impact: (1) contributing to economic, scientific, and (especially) moral progress—such as by building a morally exploratory world that can continue to improve over time; and (2) working to mitigate existential risks—such as from nuclear war, super-pandemics, or misaligned artificial intelligence—to ensure that we have a future at all.

This all seems very sensible to me. I personally doubt that misaligned AI will take over the world—that sure doesn’t seem the most likely outcome. But a bad outcome doesn’t have to be the “most likely” one in order for it to be prudent to guard against it. I don’t think any given nuclear reactor is likely to suffer a catastrophic failure, either, but I still think society should invest (some) in nuclear safety engineering, just to be safe.[4] Currently, the amount that our society invests in reducing global catastrophic risks is negligible (as a proportion of global GDP). I could imagine overdoing it—e.g., in a hypothetical neurotic society that invested the majority of its resources into such precautionary measures—but, in reality, we’re surely erring in the direction of under-investment.

So, while I don’t know precisely what the optimal balance would be between “longtermist” and “neartermist” moral ends, it’s worth noting that we don’t need to answer that difficult question in order to at least have a directional sense of where we should go from here. We should not entirely disregard the long-term future: it truly is immensely important. But we (especially non-EAs) currently do almost entirely disregard the long-term future. So it would seem wise to remedy this.



In the subsequent discussion, Arnold and Brennan press me on whether tiny chances of averting extinction could really be worth more than saving many lives for certain. I argue that this result is basically undeniable, given the right kind of (objective) probabilities.
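To see the shape of that argument, here is a back-of-the-envelope illustration with purely made-up numbers (they are not from the book or the discussion): suppose an intervention has a one-in-a-million chance of averting an extinction that would otherwise foreclose $10^{12}$ good future lives, and compare it to saving $1{,}000$ existing lives for certain. In expectation, the first option saves

$$10^{-6} \times 10^{12} = 10^{6}$$

lives, a thousand times more than the sure thing.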

 

  1. Note that I haven’t bothered to add in most of the footnotes, and I’ve added links that weren’t in the printed text.

  2. They allow that we shouldn’t want future individuals to suffer. And they allow that we should prefer any given future individual to be better off rather than existing in a worse-off state. But they think we have no non-instrumental reason to want the happy future individual to exist at all, and (at least on most such views) no non-instrumental reason to prefer a happier individual to exist in place of a less well-off, alternative future person. For a general introduction to population ethics, see “Population Ethics,” in Chappell, Meissner, and MacAskill 2023.

  3. This basic argument is further developed in Chappell 2017.

  4. Of course, that’s not to endorse pathological regulation that results in effectively promoting coal power over nuclear, or other perverse incentives.

Comments (2)

I agree this is a commonsensical view, but it also seems to me that intuitions here are rather fragile and depend a lot on the framing. I can actually get myself to find the opposite intuitive, i.e. that we have more reason to make happy people than to make people happy. Think about it this way: you can use a bunch of resources to make an already happy person still happier, or you can use these resources to allow the existence of a happy person who would otherwise not be able to exist at all. (Say, you could double the happiness of the existing person or create an additional person with the same level of happiness.) Imagine you create this person, she has a happy life and is grateful for it. Was your decision to create this person wrong? Would it have been any better not to create her but to make the original person happier yet? Intuitively, I'd say, the answer is 'no'. Creating her was the right decision.

Executive summary: While total utilitarianism and narrow person-affecting views offer extreme positions on valuing future generations, a more plausible middle ground combines strong person-directed reasons to care about existing individuals with weaker impersonal reasons to bring good lives into existence.

Key points:

  1. Total utilitarianism and narrow person-affecting views have significant flaws in how they value future lives.
  2. A hybrid approach balancing person-directed and impersonal reasons avoids these pitfalls while still prioritizing existential risk reduction.
  3. Common arguments against valuing future lives (procreative obligations, population ethics paradoxes, metaphysical confusion) are refuted.
  4. Longtermism, which prioritizes positively influencing the long-term future, is difficult to deny in principle but faces practical challenges.
  5. Investing in research on improving long-term outcomes and mitigating existential risks is a prudent course of action.
  6. While the optimal balance between "longtermist" and "neartermist" priorities is unclear, increasing consideration of the long-term future is warranted.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
