
Longtermism holds that we should judge the moral value of actions by how they affect the possible future persons who will experience their consequences, not just present persons. This can be extended to all morally relevant beings; I say “persons” to keep it simple, and my question can be restricted to persons anyway.

In “Reasons and Persons”, Parfit considers several types of action. For example, if you must choose between action A, which will bring about 100 future persons, and action B, which will bring about 10 future persons, with each person equally happy, then you should choose A. This is an action that changes the number of persons who exist.

There is also the case where A causes 100 persons to exist and B causes 100 different persons to exist. This is an action that affects which persons exist, while the number stays the same. Here, assuming equal happiness, you can be indifferent: by choosing A, and thereby preventing the B people from existing, you are not wronging the future persons in case B. If you can cause more happiness by bringing about group A, you act rightly.

He uses this second analysis for handling abortion, and I think Will MacAskill does too, in “What We Owe The Future.” Having an abortion now and having a different child later is permissible, and maybe obligatory, depending on the difference in suffering for each possible person/child. But this analysis ignores cases where an abortion is not followed by a later birth, or where the mother’s life is at stake.

Longtermism places moral value on future persons that is not discounted by time. The moral value of a future person is apparently the same as that of an existing person. With that in mind, consider the following case.

Future vs Present Trolley. A trolley is heading toward a person tied to a track. You can pull a switch that diverts it to a second track, which will stop 100 future persons from existing.

You can fill in the details with a story about fertilized embryos or something. 100 possible future persons are going to be de-existed if you pull the switch. I am honestly not sure how longtermist views handle such a case, where the wellbeing of an existing person is pitted against a greater number of possible persons.

Even a weaker case with one future person on the other track seems problematic for longtermism.

I’m really not sure how it shakes out. My intuition says it is obvious that you should pull the switch to save the existing person. However, that is a lot of possible future persons, and if they matter just as much as 100 existing persons would, then it seems you should not pull it. I’d welcome some feedback in the comments about how this type of dilemma is handled. It clearly bears on a longtermist analysis of certain abortion cases, so it seems important to have an answer.





Two things worth flagging:

(1) Longtermism per se doesn't dictate how we should weigh death vs failing to create life. I personally find it plausible to apply a modest discount against the latter. I think it would be better to bring an extra 100 happy lives into existence than to save just 1 existing person. But you're free to apply a steeper discount if you find that most plausible on reflection.  (That's different from discounting future interests per se, as though future torture mattered less or something.)
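One way to make such a discount concrete (a sketch of my own, not something the answer above commits to): let $d \in (0, 1]$ be the moral weight of creating a happy life relative to saving an existing one. Then bringing $n$ new happy lives into existence outweighs saving one existing person just when

$$ n \cdot d > 1. $$

The position stated above, on which 100 extra happy lives outweigh saving 1 existing person, corresponds to any $d > 1/100$; a steeper discount is simply a smaller $d$, and $d = 1$ recovers the undiscounted view.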

(2) There's no reason to focus on abortion in particular; as far as longtermism per se is concerned, any non-procreative choice (e.g. celibacy, contraception, etc.) is relevantly similar.  And as I explain here, pro-natalist incentives are obviously preferable to force. (Just like we shouldn't force people to donate kidneys, good though kidney donation is.)

Am I to understand that the standard longtermist reply is to bite the bullet here?
