Aug 06, 2014
Crossposted from the Global Priorities Project
In my dissertation on this topic, I tried to defend the conclusion that the distant future is overwhelmingly important without committing to a highly specific view about population ethics (such as total utilitarianism). I did this by appealing to more general principles, though I ended up delving pretty deeply into some standard philosophical issues in population ethics. And I don’t see how to avoid that if you want to independently evaluate whether it’s overwhelmingly important for humanity to survive into the long-term future (rather than, say, just deferring to common sense).
In this post, I outline a relatively atheoretical argument that affecting long-run outcomes for civilization is overwhelmingly important, and attempt to sidestep some of the deeper philosophical disagreements. It won’t be an argument that preventing extinction is overwhelmingly important, but it will be an argument that other changes to humanity’s long-term trajectory overwhelm short-term considerations. And I’m going to stick to the moral philosophy here: I will not discuss important issues related to handling Knightian uncertainty, “robust” probability estimates, or the long-term consequences of accomplishing good in the short run. I think those issues are more important, but moral philosophy is one piece of the puzzle where I thought I could quickly explain something that may help people think through the issues.
In outline form, my argument is as follows:
4’. Some of our actions are expected to change our development trajectory in not-ridiculously-small ways.
5’. If what matters most is that we maximize humanity’s future potential and some of our actions are expected to change our development trajectory in not-ridiculously-small ways, what it is best to do is primarily determined by how our actions are expected to change our development trajectory.
6’. Therefore, what it is best to do is primarily determined by how our actions are expected to change our development trajectory.

The basic thought here is that what the astronomical waste argument really shows is that future welfare considerations swamp short-term considerations, so that long-term consequences for the distant future are overwhelmingly important in comparison with purely short-term considerations (apart from any long-term consequences that short-term changes may themselves produce).
However, the concept of existential risk is wide enough to include any drastic curtailment of humanity’s long-term potential, and the concept of a “trajectory change” is wide enough to include any small but important change in humanity’s long-term development. And the value of avoiding these existential risks or making these trajectory changes need not depend on changes in the population.

For example, consider the following puzzle for people with person-affecting views:
Suppose that agents as a community have chosen to deplete rather than conserve certain resources. The quality of life for the persons who exist now or will come into existence over the next two centuries will be “slightly higher” than under a conservation alternative (Parfit 1987, 362; see also Parfit 2011 (vol. 2), 218). Thereafter, however, for many centuries the quality of life would be much lower. “The great lowering of the quality of life must provide some moral reason not to choose Depletion” (Parfit 1987, 363). Surely agents ought to have chosen conservation in some form or another instead. But note that, at the same time, depletion seems to harm no one. While distant future persons, by hypothesis, will suffer as a result of depletion, it is also true that for each such person a conservation choice (very probably) would have changed the timing and manner of the relevant conception. That change, in turn, would have changed the identities of the people conceived and the identities of the people who eventually exist. Any suffering, then, that they endure under the depletion choice would seem to be unavoidable if those persons are ever to exist at all. Assuming (here and throughout) that that existence is worth having, we seem forced to conclude that depletion does not harm, or make things worse for, and is not otherwise “bad for,” anyone at all (Parfit 1987, 363). At least: depletion does not harm, or make things worse for, and is not “bad for,” anyone who does or will exist under the depletion choice.

The seemingly natural thing to say if you have a person-affecting view is that because conservation doesn’t benefit anyone, it isn’t important. But this is a very strange thing to say, and people having this conversation generally recognize that saying it involves biting a bullet. The general tenor of the conversation is that conservation is obviously important in this example, and people with person-affecting views need to provide an explanation consonant with that intuition.
Whatever the ultimate philosophical justification, I think we should say that choosing conservation in the above example is important, and this has something to do with the fact that choosing conservation has consequences that are relevant to the quality of life of many future people.
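To make the stakes in the depletion puzzle concrete, here is a minimal sketch in Python with entirely made-up numbers (the population size, durations, and quality levels are illustrative assumptions of mine, not figures from Parfit or the wider literature):

```python
# Toy version of the depletion vs. conservation choice above.
# Every number here is an illustrative assumption, not from Parfit.

PEOPLE_PER_CENTURY = 1_000_000  # hypothetical stable population

def total_welfare(quality_by_century):
    """Sum quality of life across everyone who ever lives."""
    return sum(q * PEOPLE_PER_CENTURY for q in quality_by_century)

# Depletion: slightly better for two centuries, much worse afterwards.
depletion = [1.1, 1.1] + [0.5] * 10
# Conservation: steady quality of life over the same twelve centuries.
conservation = [1.0] * 12

print(total_welfare(depletion))     # ~7,200,000 quality-units
print(total_welfare(conservation))  # 12,000,000 quality-units
```

On any view that counts the quality of all of those future lives, conservation comes out far ahead, even though, because of the nonidentity effect, no particular person is better off under it.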
I wish to be the sort of person who would happily pay $1 for a robust (reliable, true, correct) 10/N probability of saving N lives, for astronomically huge N - while simultaneously refusing to pay $1 to a random person on the street claiming s/he will save N lives with it.

More generally, we could say:
Principle of Scale: Other things being equal, it is N times better (in itself) to ensure that N people in some position have higher quality of life than other people who would be in their position than it is to do this for one person.

I had to state the principle circuitously to avoid saying that things like conservation programs could “help” future generations, because according to people with person-affecting views, if our “helping” changes the identities of future people, then we aren’t “helping” anyone, and that’s relevant. If I had said it in ordinary language, the principle would have said, “If you can help N people, that’s N times better than helping one person.” The principle could use some tinkering to deal with concerns about equality and so on, but it will serve well enough for our purposes.
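To see what the principle does in the quoted bet above, here is a minimal expected-value sketch (the unit of “helping one person” is hypothetical bookkeeping of mine, not part of the original argument):

```python
# Expected value of the bet in the quotation above: pay $1 for a
# robust 10/N probability of helping N people. Under the Principle
# of Scale, helping N people is N times as good as helping one, so
# the payoff term below scales linearly in N.

def expected_benefit(n, value_of_helping_one=1.0):
    """Expected benefit, in 'help one person' units, of a robust
    10/n chance of helping n people."""
    probability = 10 / n
    payoff = n * value_of_helping_one  # the Principle of Scale
    return probability * payoff

# The N's cancel, so the bet stays equally attractive as N grows.
for n in (100, 10**9, 10**30):
    print(n, expected_benefit(n))  # ~10.0 in every case
```

The point of the sketch is that the N’s cancel: given a genuinely robust probability of 10/N, the bet is worth ten “help one person” units no matter how astronomically huge N gets, which is exactly what the principle’s linearity delivers.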
The Principle of Scale may seem obvious, but even it is debatable: you wouldn’t find universal philosophical agreement about it. For example, some philosophers who claim that additional lives have diminishing marginal value would claim that in situations where many people already exist, it matters much less if a person is helped. I attack these perspectives in chapter 5 of my dissertation, which you can check out if you want to learn more. But, in any case, the Principle of Scale does seem pretty compelling, especially if you’re the kind of person who doesn’t have time for esoteric debates about population ethics, so let’s run with it.
Now for the most questionable steps: let’s assume, with the astronomical waste argument, that the expected number of future people is overwhelming, and that it is possible to improve the quality of life for an overwhelming number of future people through forward-thinking interventions. If we combine this with the Principle of Scale and wave our hands a bit, we get the conclusion that improving quality of life for an overwhelming number of future people is overwhelmingly more important than any short-term consideration. And that is very close to what the long-run perspective says about helping future generations, though importantly different, because this version of the argument might not put weight on preventing extinction. (I say “might not” rather than “would not” because if you disagree with the people with person-affecting views but accept the Principle of Scale outlined above, you might just accept the usual conclusion of the astronomical waste argument.)
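To show where the hand-waving leads in rough numbers, here is a minimal sketch; every figure in it (the expected number of future people, the probabilities, the sizes of the quality shifts) is an illustrative assumption of mine, not a number from the argument above:

```python
# Back-of-the-envelope version of the swamping claim.
# Every figure below is an illustrative assumption.

FUTURE_PEOPLE = 10**16      # assumed expected number of future people
SHORT_TERM_PEOPLE = 10**6   # assumed beneficiaries of a short-term act

# Trajectory change: a one-in-a-million chance of a 0.1% quality-of-life
# gain for every future person.
trajectory_ev = 1e-6 * 0.001 * FUTURE_PEOPLE

# Short-term act: helps a million people now, with certainty.
short_term_ev = 1.0 * 1.0 * SHORT_TERM_PEOPLE

print(trajectory_ev / short_term_ev)  # ~10.0: the trajectory change wins
```

Under these made-up numbers, the speculative trajectory change already beats the sure short-term benefit by a factor of ten, and the ratio grows linearly with the assumed number of future people, which is the sense in which long-run considerations swamp short-term ones.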