Derek Parfit (00:09):

I've decided to read you some unpublished work of mine about two problems. I shall first try to undo some of the damage that I did when, many years ago, I wrote about what I called the Non-Identity Problem. I shall then discuss what I call the Triviality Problem. Both problems are, I believe, indirectly relevant to effective altruism. I shall mainly discuss how our acts may harm people, but most of my remarks would apply to acts that may benefit people. Though some of this talk will be hard for non-philosophers to follow, that isn't, I believe, true of my main claims and examples.

Derek Parfit (00:56):

Suppose that we discover how we could live for a thousand years, though in a way that made us unable to have children. Everyone chooses to have these long lives. After we all die, human history ends, since there would be no future people. Would that be bad? Would we have acted wrongly? Some pessimists would answer, "No," since they believe that there's too much suffering in most people's lives, and that it would be wrong to inflict such suffering on others by having children. In earlier centuries, this bleak view was fairly plausible, but it is now becoming false, since our successors would be able to prevent most human suffering.

Derek Parfit (01:49):

Some optimists would also answer, "No." These people believe that most people's lives are worth living, but they accept what I call two Strong Narrow Person-Affecting Principles. According to one of them, one of two outcomes cannot be worse if this outcome would be worse for no one. According to the other, an act cannot be wrong if this act would be worse for no one. It would not be worse, these principles imply, if there were no future people, since there would be no one for whom that would be worse; nor would we be acting wrongly if we all chose to have no children, thereby ending human history.

Derek Parfit (02:40):

These principles, I believe, are deeply mistaken. Given what our successors could achieve in the next billion years, here and elsewhere in our galaxy, it would be very bad if there were no future people. These Narrow Principles are not, however, obviously false, since some questions about actual and possible people are hard to answer. When we compare two outcomes, or ways in which things might go, there are three possibilities. In these outcomes, either the same people would exist, or the same number would exist but some of these would be different people, or different numbers would exist.

Derek Parfit (03:33):

Of these three kinds of case, 'different number' cases raise the hardest questions. Some of these questions can be partly answered if we first consider and compare the other two kinds of case. As our 'same people' case, we can suppose that Jane, who is pregnant, knows that unless she takes some painless treatment, the child she's carrying will have some disease, which will cause this child to live to only 40. If Jane takes this treatment, this child will live to 80. It would clearly be wrong for Jane to refuse to take this treatment, since that would be much worse for her child.

Derek Parfit (04:20):

As our first 'same number' case, we can suppose that Clare knows that if she conceived some child now, this child would have this same disease and would live to only 40. If Clare waits for two months, she would later conceive a child who would not have this disease and who would live to 80. This case challenges the Strong Narrow Person-Affecting Principles. Most of us would believe that Clare ought to wait, so that she conceives a child who would live to 80 rather than to 40. But if Clare conceives a child now, that would not be worse for this child. This child's life, though only half as long, would be worth living. And if Clare had waited, this child would never have existed. It would have been a different child whom Clare would have later conceived, and who would have lived to 80.

Derek Parfit (05:21):

These Narrow Principles, therefore, imply that it wouldn't be worse if Clare chooses to conceive a child who would live to only 40, nor could her choice be wrong. If we believe that Clare ought to wait, we must explain and defend this belief in some other way. Since this problem arises when, in some possible outcomes, different people would exist, I called this the Non-Identity Problem. Well, we might claim that if Clare conceives her child now, that would be worse for her child. That claim is in one sense true, but it's misleading. There won't be any child for whom this act would be worse.

Derek Parfit (06:12):

As I said in an earlier discussion, you might say it would be worse for a 'general person'. That phrase is just meant to show that it isn't a particular person: a 'general person' is a huge group of possible people, only one of which might be actual. The problem can arise in other ways, and on a different scale. If our parents' lives had gone slightly differently before we were conceived, most of us would not have been conceived, and our parents would have had different children. Given this fact about human reproduction, our choices between two acts or policies would often affect which people would later exist. These effects would spread, so that in most future centuries, one of two quite different sets of people would exist. And the Narrow Principles can't be applied to these acts or policies.

Derek Parfit (07:14):

Suppose, for example, that we and the other members of our community could choose between two energy policies, one of which would be cheaper but would significantly increase global warming. That policy would predictably have various effects that would greatly lower the quality of life of people in several later centuries, and would also kill many of these people. Despite having these effects, our choice of this policy wouldn't be worse for any of these people, not even those who would be killed. If we had chosen the other, more expensive policy, which wouldn't have had these bad effects, these future people would never have existed. And that wouldn't have been better for them. The Narrow Principles, therefore, imply that our choice of the cheaper policy wouldn't make things go worse and could not be wrong. These implications are, I claim, clearly false, and give us decisive reasons to reject these principles.

Derek Parfit (08:25):

When our acts would greatly lower the quality of life that would be had by future people, and would kill many of these people, these effects would be just as bad, and these acts would be just as wrong, even though they would be worse for no one. I call this the No-Difference view. When I first thought about the non-identity problem, I hoped that everyone would accept this view. That isn't yet true. Of those who've written about this problem, many believe that since it wouldn't be worse for these future people if they had this lower quality of life or were killed, we have weaker moral reasons to care whether our acts or policies will have such effects on future people.

Derek Parfit (09:20):

Now, in my defence of the No-Difference View, I made what I now believe to be a serious mistake. I suggested that in the cases that raise the non-identity problem, we should appeal to principles that are impersonal, in the sense that they don't appeal to facts about what would affect particular people for better or worse. One example is the principle that it would be better if there were more happiness. Many people reject such impersonal principles, because they believe that this part of morality ought to be explained in person-affecting terms. I suggested how we might give such an explanation. Though we ought to reject Narrow Person-Affecting Principles, we could appeal, I claimed, to what I called Wide Principles.

Derek Parfit (10:28):

Though I made that suggestion, I then rejected that Wide Principle for a bad reason, and it was ignored. To introduce the Wide Principle, we can first ask, "Did our parents benefit us by causing us to exist?" Many people would answer "No". These people claim that benefits are comparative: some act or event cannot benefit us unless the alternative would have been worse for us. Therefore, being caused to exist cannot have benefited us. We ought, I believe, to reject that argument. We should admit that most benefits are comparative. We receive such benefits when the alternative would have been worse for us. But our existence could be good for us, even though the alternative would not have been worse for us.

Derek Parfit (11:27):

When we ask whether our life is worth living, it's enough to ask whether it'd be worth putting up with what's bad in our life, such as our suffering, for the sake of what's good. We don't need to compare our life with never existing. Some other people claimed, "Being caused to exist cannot have benefited us, because if we'd never existed, there would have been no us, for whom that would have been worse." We can reply, "Though there would have been no actual person who didn't receive this benefit, there is an actual person, us, who did."

Derek Parfit (12:15):

We can agree that most benefits are comparative, and that we receive such benefits only when the alternative would have been worse for us, but claim that our existence can be good for us or bad for us, even though the alternative would not have been better or worse for us. Such non-comparative benefits Jeff McMahan calls 'existential'. We can next compare two other, less strong Person-Affecting Principles. According to the Weak Narrow Principle, one of two outcomes would be in one way worse if this outcome would be worse for people. According to the Wide Principle, one of two outcomes would be in one way worse if this outcome would be less good for people, by benefiting people less than the other outcome would have benefited people.

Derek Parfit (13:18):

When we compare outcomes in which all of the same people would exist, these principles coincide. In such cases, outcomes that are less good for people must also be worse for people. But when some people would exist in only one of two outcomes, one of these outcomes may be less good for people, by benefiting people less, though this outcome would be worse for no one. In such cases, I shall argue, we should appeal to the Wide Principle. This outcome would be worse, because it would benefit people less than the other outcome would have benefited the different people who would have existed instead. If Clare has a child who lives to only 40, that would not be worse for this child than never existing. But this child would be benefited less than the different child Clare could have had, who would have lived to 80.

Derek Parfit (14:24):

Now, the difference between worse for and less good for may seem very small. Actually, it makes a great difference, and it yields a much simpler response to the non-identity problem than others have given. In comparing these principles, I shall discuss their direct implications in two or three cases. We can first consider people whose lives are intrinsically bad, and worse than lives that are merely not worth living. Some examples involve children with some congenital fatal disease, whose brief lives contain much suffering. Suppose then that in Case One, there are two possible outcomes: either Dick will be caused to exist and will die after two years of suffering, or Dick will never exist. It would clearly be worse if Dick lives this wretched life.

Derek Parfit (15:29):

If we believe that all benefits and harms are comparative, we'd have to claim that living this life couldn't harm Dick, since it wouldn't be better for him if he'd never existed. To explain why it would be worse if Dick lives this life, we might have to appeal to an Impersonal Principle, and people doubt such principles. Well, we could give such an explanation if we claimed instead that, if Dick is caused to exist and has two years of suffering, this would be an existential harm. Dick's suffering wouldn't be made less bad for him, or less bad, by the fact that he might never have existed. We could then appeal to the Wide Principle. We could claim, "It would be worse if Dick exists and lives this wretched life, since this would be bad for Dick. And if Dick never exists, that wouldn't be worse, since it would be bad for no one."

Derek Parfit (16:29):

Suppose next that in Case Two, the possible outcomes are: either A) Dick will die after two years of suffering, and Tom will die soon after he starts to exist; or B) Dick will never exist, and Tom will die after two years of suffering. (It's on the handout, but it's quite simple.) Since B would be worse than A for Tom, and B would not be better for Dick, since he would never exist, the Narrow Principles here imply that B would be a worse outcome than A. That's not true. These outcomes would be equally bad, since in each of these outcomes one child would have two years of suffering. It's morally irrelevant that only Tom would be comparatively harmed, because it's only Tom who would exist in both these outcomes.

Derek Parfit (17:27):

Dick's two years of suffering would be just as bad for Dick as Tom's two years would be for Tom. And Dick's two years would do as much to make the outcome worse. In cases of this kind, there's no significant difference between the badness of comparative and existential harms. These cases count against the Strong Narrow Person-Affecting Principles. Those who accept such principles often quote Jan Narveson's claim that, "Though we ought to be in favor of making people happy, we need not be in favor of making happy people." No such claim applies to existential harms. Compared with making people miserable, it would be just as bad to make miserable people. I shall now argue that, when applied to some other cases, the Narrow Principles are structurally flawed, and have some other implications that are clearly false. We should suppose that in my imagined cases, each year of life would be an equal benefit, and that there are no other relevant differences between my imagined people.

Derek Parfit (18:50):

(Well, you may need to look at the handout now.) Suppose that in Case Five, the three possible outcomes are: A) Tom will live to 60, and Dick to 80. B) Tom will live to 80, and Harry to 60. C) Dick will live to 60, and Harry to 80. In each outcome, one of the three people doesn't exist. (That's the black line on the handout.) The Weak Narrow Principle here implies that A would be worse than B, since A would be worse than B for Tom, and B would not be worse than A for anyone. B would be similarly worse than C, since B would be worse than C for Harry, and C would be worse for no one; and C would be similarly worse than A. Now, if we're discussing the intrinsic goodness of these outcomes, which I think we are, those claims couldn't all be true. A couldn't be worse than B, if B is worse than C, which is worse than A. That would be like the claim that gold weighs more than silver, which weighs more than copper, which weighs more than gold. That's a structural flaw.
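The cycle that this structural flaw produces can be checked mechanically. The sketch below is my own illustrative rendering, not anything from the talk itself: it models the Weak Narrow Principle as comparing two outcomes only through the people who exist in both.

```python
# Case Five: each outcome gives lifespans to two of the three possible people.
A = {"Tom": 60, "Dick": 80}
B = {"Tom": 80, "Harry": 60}
C = {"Dick": 60, "Harry": 80}

def narrow_worse(x, y):
    """True if x is worse than y by the Weak Narrow Principle:
    someone who exists in both outcomes fares worse in x,
    and no one who exists in both fares worse in y."""
    shared = x.keys() & y.keys()
    return (any(x[p] < y[p] for p in shared)
            and not any(y[p] < x[p] for p in shared))

print(narrow_worse(A, B))  # True: A is worse for Tom (60 < 80)
print(narrow_worse(B, C))  # True: B is worse for Harry (60 < 80)
print(narrow_worse(C, A))  # True: C is worse for Dick (60 < 80)
# So A is worse than B, which is worse than C, which is worse than A:
# the 'worse than' relation runs in a circle.
```

Each pairwise comparison comes out "worse", so the relation is cyclic, which is the structural flaw the talk describes.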

Derek Parfit (20:16):

Some defenders of the Narrow Principle might claim that if only outcomes A and B are possible, A would be worse than B, because A would be worse for Tom, and B would be worse for no one; but that if all three outcomes are possible, they'd be equally good, since each outcome would be in the same way worse for one person. But the goodness of outcomes can't depend on which other outcomes are possible. If I could save two people's lives, that would be better than if I saved only one, and nobody says, "Oh no, it wouldn't be better, because you can't do it." It would be a better outcome. In each of these outcomes, one person would live to 80 and another person would live to 60. Each outcome is related to the others in similar ways. Given these facts, the outcomes are clearly equally good. To get the right answer here, we should appeal to existential benefits and to the Wide Principle. These three outcomes would be equally good, because they'd be equally good for people.

Derek Parfit (21:33):

Suppose next that in Case Six, the possible outcomes are: A) Mary will live to 70, and Kate to 50. B) Kate will live to 60, and Ruth to 20. C) Ruth will live to 30, and Jill to 10. The Narrow Principles here imply that A would be worse than B, since A would be worse than B for Kate, and B would not be worse than A for anyone. B would be similarly worse than C. And C would be the best, because it would be worse for no one. Those claims are clearly false. A would be better than B, which would be better than C. If Mary and Kate lived for a total of 120 years, that would be better than if Kate and Ruth lived for a total of 80 years, which would be better than if Ruth and Jill lived for a total of 40 years.
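On the Wide Principle's reading, the ranking tracks how good each outcome is for whoever would exist in it. A minimal sketch, assuming (as the talk stipulates) that each year of life is an equal benefit:

```python
# Case Six: lifespans in each of the three possible outcomes.
outcomes = {
    "A": {"Mary": 70, "Kate": 50},
    "B": {"Kate": 60, "Ruth": 20},
    "C": {"Ruth": 30, "Jill": 10},
}

# Treating each year of life as an equal benefit, an outcome's
# goodness-for-people is the total number of years it gives
# to whoever would exist in it.
totals = {name: sum(spans.values()) for name, spans in outcomes.items()}
print(totals)  # {'A': 120, 'B': 80, 'C': 40}

ranking = sorted(totals, key=totals.get, reverse=True)
print(ranking)  # ['A', 'B', 'C']: A is best, C is worst
```

Unlike the pairwise Narrow comparison, this ranking is transitive, and it counts the years given to people who exist in only one of the outcomes.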

Derek Parfit (22:27):

As before, we should appeal instead to the Wide Principle. C would be less good for people than B, which would be less good for people than A. Now, the Narrow Principles go astray because, when we apply these principles to any pair of outcomes, they take into account only what would happen to the people who would exist in both these outcomes. On these principles, A would be worse than B, because A would give Kate 10 fewer years of life. These principles ignore the facts that A would give Mary 70 years of life, and that B would give Ruth only 20 years. When these principles similarly imply that B would be worse than C, because B would be worse for Ruth, they ignore the facts that B would give Kate 60 years of life, and that C would give Jill only 10 years. It's an obvious mistake to ignore such facts.

Derek Parfit (23:33):

We should appeal to existential benefits and the Wide Principle. If Ruth lives to 30 and Jill to 10, that would be much less good for them than living to 70 and 50 would be for Mary and Kate. As these examples also help to show, the Wide Principle solves the non-identity problem. The problem arises when we believe that one of two outcomes would be worse, even though, because different people would exist, this outcome wouldn't be worse for people. Such outcomes would be worse, as the Wide Principle claims, when and because, although they're not worse for people, they're less good for people than the other outcome would have been.

Derek Parfit (24:28):

If we go back to the two energy policies: if we adopt the cheaper policy, that will greatly lower the quality of many future people's lives and will also kill many of these people. That won't be worse for any of those people. But it'd be much less good for those people than if we chose the other policy, which would give different future people a much higher quality of life and would kill no one. So that's my main, fairly simple response to the non-identity problem. Earlier, in effect, I said, "It wouldn't be worse for people, so we have to appeal to an Impersonal Principle," and I appealed to the principle that if the same number of people will exist in either outcome, it would be worse if the people who exist are worse off. Now, that isn't about what affects particular people for better or worse. So people said, "No."

Derek Parfit (25:41):

By switching from worse for people to less good for people, and allowing that this applies when there are different people, we solve the problem. We can claim that we would make things go very much worse if we ended human history. That would be very much less good for very many future people. I turn now to the Triviality Problem (which is on the back of the handout). I'm here partly developing some remarks I made in my first book, in a chapter I called "Mistakes in Moral Mathematics". When we ask whether some act's effects would make this act right or wrong, many of us make serious mistakes. One mistake is the belief that we can ignore very small benefits or harms. Many of us, for example, would believe that (J) we ought to give to a single person one more year of life, rather than giving to each of many people only one more minute of life.

Derek Parfit (27:24):

Suppose that we're a million people who could each treat another million people in either of these ways. (J) implies that each of us ought to give one of these people one more year of life, but that's clearly false. A year is about half a million minutes. If each of us instead gave to each of these people one more minute of life, we, together, would give each of these people about a million more minutes, which would be not one, but about two more years of life. We may similarly believe that (K) we ought to save one person from a year of pain rather than saving each of many people from only one minute of similar pain. Suppose that another million people, without our help, would have two years of pain. When applied to that case, (K) is false. If each of us saved each of these million people from one minute of pain, we, together, would save these people, not from one, but from about two years of pain.
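The arithmetic behind rejecting (J) can be laid out explicitly. The figures below are rough and the variable names are mine:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600: about half a million, as the talk says
agents = 1_000_000      # we, the people choosing how to act
recipients = 1_000_000  # the people we could benefit

# Following (J): each agent gives one recipient one extra year,
# so each recipient gains one year of life.
per_recipient_J = MINUTES_PER_YEAR  # in minutes

# Rejecting (J): each agent gives EVERY recipient one extra minute,
# so each recipient gains one minute from each of a million agents.
per_recipient_alt = agents * 1  # in minutes

print(per_recipient_alt / MINUTES_PER_YEAR)  # about 1.9 years each
print(per_recipient_alt > per_recipient_J)   # True: about two years beats one
```

The same totals, read as minutes of pain prevented rather than minutes of life given, show why (K) fails in the pain case.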

Derek Parfit (28:30):

Now, those imagined cases are artificially simple and unlikely to occur, but that's no objection. They're like artificially simple scientific experiments, which are designed to be artificial precisely to make the fundamental questions clearer. There are many actual cases that are relevantly similar. It's often true that if we do what would be better for us or for a few other people, we would also be doing what would be very slightly worse for each of very many other people. That's true, for example, of the acts that increase pollution in some great city, or the acts with which we and millions of others are overheating the atmosphere. Each of these acts will make things go slightly worse for very many people.

Derek Parfit (29:26):

Consider next the claim that our pain could become worse in some way that would be not merely very small, but imperceptible. In such cases, we couldn't even notice that our pain has become worse. This claim may seem obviously false. Pain is bad because of the way it feels, or what it's like to be in pain. This fact may seem to imply that no pain could become imperceptibly worse. If our pain doesn't seem worse, we may believe, this pain can't be worse. We can easily show, however, that this claim is true.

Derek Parfit (30:14):

Suppose we're volunteers in some experiment, which is intended to compare the effects of certain painful stimuli. At the start of this experiment we're in mild pain, and some psychologist tells us that during the experiment he will sometimes increase some painful stimulus, and sometimes do nothing. He asks us to say, when a bell rings after each five seconds, whether during these seconds our pain seems to have got worse. Well, in some versions of this case, our answer would always be "No." But it would be clear after a few minutes that our pain is much worse than it was at the start. There's nothing puzzling here. Our pain is worse, in the relevant sense, if our dislike of some painful sensation is stronger or more intense. We are fairly good at noticing whether our dislike is becoming stronger, but we don't notice very small changes. That's like the way in which, when we look at some clock that has moving hands, we can't see the clock's hour hand move. That doesn't show that it isn't moving. The same is true of the strength of our dislike of some painful sensation and our ability to notice that this dislike has become stronger. It isn't surprising that, though we are fairly good at noticing and describing how things feel to us, we can get things slightly wrong. During this experiment, there'd be some first moment when we were inclined to believe that our pain is worse than it was at the start. But if we watched the hour hand on the clock, there'd also be some first moment when we were inclined to believe that the hand has moved. These beliefs would be based on memory. They wouldn't show that we can see the clock's hand move, or notice that our pain is getting worse. But if our pain becomes imperceptibly worse during each of many brief periods, these changes may together make our pain very bad.

Derek Parfit (32:30):

Well, as before, these facts can matter greatly. To illustrate how they can matter, we can compare two other imagined cases. Suppose first that, in the bad old days, a thousand torturers each have one victim and one pain-producing machine. At the start of each day, each victim is already feeling mild pain. Each of the torturers turns some switch a thousand times on his machine. Each turning of this switch makes some victim's pain only imperceptibly worse. But after a thousand turnings, each victim is in severe pain, which continues for the rest of the day.

Derek Parfit (33:17):

Suppose next that these torturers have moral doubts about what they are doing. One of them suggests that, to answer these doubts, they should connect their machines in a certain way. In the resulting case, which I've called 'harmless torturers', each of the thousand torturers pushes some button, which turns the switch once on each of the thousand machines. Since all of the switches are again turned a thousand times, all of the victims suffer the same severe pain. But since each torturer's act turns each switch only once, none of these people makes any victim's pain perceptibly worse. And these torturers might say, "It's not wrong to affect someone's pain in some way that's imperceptible. None of us makes anyone's pain perceptibly worse; therefore, none of us is acting wrongly." Now, that conclusion is clearly false. The torturers are still acting wrongly, since they inflict on their victims just as much pain as they did in the bad old days.
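The bookkeeping of the two torturer cases can be sketched as follows. The per-turn pain increment is a stand-in number of my own; all that matters is that each victim's total comes out the same in both cases:

```python
torturers = 1000
turns_per_day = 1000
increment = 0.001  # stand-in for one imperceptible worsening of pain

# Bad old days: each torturer turns the switch on his own victim's
# machine a thousand times.
per_victim_old = turns_per_day * increment

# Harmless torturers: each torturer's button press turns the switch
# once on each of the thousand machines, so each victim's switch is
# still turned a thousand times in total.
per_victim_new = torturers * increment
per_torturer_contribution = 1 * increment  # imperceptible to each victim

print(per_victim_old == per_victim_new)  # True: each victim suffers just as much
```

Each torturer's contribution per victim falls below the threshold of perception, yet the total imposed on each victim is unchanged.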

Derek Parfit (34:33):

We must, therefore, reject that argument's premise. We must claim, "It can be wrong to impose pain on people, even if these acts would make no one's pain perceptibly worse." Now, there are two ways in which we might defend this claim. We might defend a view about the effects of each particular act. On what I call the 'single-act' view, it's wrong to impose a great amount of pain on other people. Such an act would be wrong even if, because this great amount of pain would be very widely spread by being imposed on very many people, the effect on each person is imperceptible. That's about the single act. On this view, each of these torturers' acts is wrong, because each imposes this total amount of pain on people in a way that's imperceptible to each.

Derek Parfit (35:39):

Well, we might instead appeal to the combined effects of several acts. On the 'many-acts' view, even if some act would not make anyone's pain perceptibly worse, this act may be one of a set of acts that would, together, impose great pain on one or more people. These effects can make such acts wrong. That's what we can say about the torturers in the 'harmless torturers' case. Each causes the switch to be turned only once on each of the thousand machines, but they, together, inflict just as much pain as in the 'bad old days'. On this second view, that's why these acts are wrong.

Derek Parfit (36:30):

Of these two ways of explaining why these acts are wrong, most of us would find the second more plausible. We may doubt that it could be seriously wrong to impose pain on people if the amount of pain is too small to be perceptible. But we can plausibly believe that we'd be acting wrongly if we and others, together, made many people's pain much worse. And there are many actual cases to which we can apply this many-acts view. When millions of people continually pollute the air in some great city, each person's acts have bad effects on the health of millions of people. Since these bad effects are so thinly spread over so many people, no act is perceptibly worse for anyone. But these acts, together, significantly damage many people's health, and this damage kills some of these people. In some cases, though, to reach the right moral conclusion, we can't appeal to what we, together, do. (I mention such a case on the handout.)

Derek Parfit (37:43):

I think it's important that we can appeal to the effects of each particular act, but I'm defending the second view because it's less counterintuitive. We can't claim that when we add to global warming, we can't be acting wrongly because our acts won't have any perceptible effect on anyone. The 'harmless torturers' act in a way that doesn't have any perceptible effect on anyone, but they're acting very wrongly. Now, one way to bring out the difference is a claim that I call (Q): if there are two possible states of affairs in which the same burdens would be imposed on people, the badness of these states wouldn't depend on whether these burdens would be imposed by one person on one person, or by each of many people on each of many people. Cases of this kind have great and growing importance. For most of human history, most people's acts could have good or bad effects on only a few other people. When such effects were very small, they could be justifiably ignored.

Derek Parfit (39:06):

But we can now act in ways that would have very small, bad effects on each of very many people. That's true of many of the acts of adding carbon dioxide to the atmosphere. Global warming has a simplifying feature. The molecules that we add to the air will be thoroughly mixed by the winds, and will remain in the atmosphere for several centuries. Acts that add similar numbers of molecules, therefore, have similar bad effects. Some of these effects will be very slightly bad for very many people. When we act in these ways by using air conditioners, for example, or cars, or aeroplanes, we are not aware of the harm that our acts will later cause. But we may be, together, imposing great burdens and killing many people.

Derek Parfit (39:59):

Now we can see the connection between the two problems. The first, the non-identity problem, was the thought that what we're doing, even if it greatly lowers the quality of life and kills many people, can't make the outcome worse or be wrong, because those acts will be worse for no one. No one will have a complaint. If anything, future people will be benefited by having been caused to exist; it wouldn't have been better for them if they'd never existed. To that, I think, there's a simple reply. We should reject the claim that an outcome is worse only if it's worse for people; an outcome can be worse if it's much less good for people than the other outcome would have been. And then the other problem is that the effects of our acts are so thinly spread that we will often think, "What I'm doing will make no perceptible difference to anyone." But that, as my examples show, is clearly a bad mistake; the harmless torturers could claim the same.

Derek Parfit (41:24):

To illustrate this argument, I'll just end with another artificially simple imagined case. Suppose that in Case Three, we're a group of a million people. To save ourselves from one hour of pain, each of us could either do what would cause one other person to have one whole day of pain, or do what would cause each of a million people to have one millionth of a day of pain. Well, it would clearly be wrong to act in the first of these ways. We oughtn't to save ourselves from an hour of pain by doing what would impose much more pain, a whole day of pain, on a single other person. When we consider acting in the second way, we may have a different view. If we act in the second way, we would cause many people to be in pain for less than a tenth of a second.

Derek Parfit (42:25):

Such very brief periods of pain may seem to us to have no moral significance. That may lead us to believe that we could justifiably save ourselves from an hour of pain by acting in the second way. But that's a mistake. A million millionths of a day of pain is a day of pain. If we million people acted in the second way, we'd save ourselves from a million hours of pain, but we'd cause these other people to have a million days of pain. I shall end with a few remarks from the end of my third book, since the triviality problem is not a good note on which to close. Some of this doesn't need saying to you.
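Case Three's totals can be checked directly (hours of pain as the unit; the variable names are mine):

```python
people = 1_000_000
HOURS_PER_DAY = 24

# Each of a million agents avoids one hour of pain, by either:
#  way 1: causing one other person a whole day of pain, or
#  way 2: causing each of a million people a millionth of a day of pain.

saved_hours = people * 1  # a million hours of pain avoided in total

# Way 1: a million victims each suffer one day.
pain_way1_hours = people * HOURS_PER_DAY

# Way 2: each victim receives a millionth of a day from each of a
# million agents, i.e. a full day each.
per_victim_days = people * (1 / people)
pain_way2_hours = people * per_victim_days * HOURS_PER_DAY

print(per_victim_days)  # 1.0: a million millionths of a day is a day
print(pain_way2_hours == pain_way1_hours)  # True: the two ways are equally bad
print(pain_way1_hours > saved_hours)       # True: far more pain caused than avoided
```

The thinly spread pain sums to exactly the same total as the concentrated pain, which is the point of the case.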

Derek Parfit (43:17):

I say, "I regret that in a book called On What Matters, I've said very little about what matters." One thing that greatly matters is the failure of us rich people to prevent, as we so easily could, much of the suffering and many of the early deaths of the poorest people in the world. The money we spend on an evening's entertainment might instead save some poor person from death, blindness, or chronic and severe pain. Now, if we believe that in our treatment of those people we're not acting wrongly, we're like those who believed that they were justified in having slaves.

Derek Parfit (43:53):

Some of us ask, "How much of our wealth ought we rich people to give to these poorest people?" But that question wrongly assumes that our wealth is ours to give. This wealth is legally ours, but these poorest people have much stronger moral claims to some of this wealth. We ought to transfer some of this wealth to them. What now matters most, I think, is how we respond to various risks to the survival of humanity. We are creating some of these risks, and we're discovering how we could respond to these and other risks. If we reduce these risks, and humanity survives the next few centuries, our descendants or successors could end these risks by spreading through the galaxy.

Derek Parfit (44:57):

Life can be wonderful as well as terrible, and we shall increasingly have the power to make life good. Since human history may be only just beginning, we can expect that future humans or supra-humans, may achieve some great goods that we can't now even imagine. In Nietzsche's words, "There's never been such a new dawn and clear horizon, and such an open sea." If we are the only rational beings in the universe, as some recent evidence suggests, it matters even more whether we shall have descendants or successors during the billions of years in which that would be possible. Some of our successors might live lives and create worlds that, though failing to justify past suffering, would have given us all, including those who suffered most, reasons to be glad that the Universe exists. End.






It's nice to see this again <3 

I asked Parfit to give this talk at that EAGxOxford, a conference Jacob Lagerros and I were the lead organizers of [edit: I see James Aung posted this, who was on the team too!]. It was one of the last talks of his life. I remember writing him an email about what talk to give, and he wrote a very long word document back as an attachment. He was a very careful thinker.

Also I remember a pretty endearing interaction between him and Anders Sandberg, where Anders pretended to be a fan and got Parfit to sign a copy of his book. (It was a joke because Anders and Parfit were former roommates and good friends.)

In the Q&A after this talk, Sandberg asked "What is the moral relevance of Apple laptops booting half a second slower?" (since on Parfit's simple view of aggregation, with millions of devices, this is equivalent to a massive loss of life). I always thought Parfit was being rude by ignoring the question, but your comment makes it seem more like joshing.

Heeheehee. Sounds like Anders poking fun at his friend live.

Thank you so much! I used this in my research just last week. I can now revise this more easily!
