Note: Besides the other researchers at GPI, I am grateful to Timothy Campbell, Matthew Clarke, Daniel Cohen, Tomi Francis, Anders Herlitz, Kacper Kowalczyk, David McCarthy, Aidan Penn, Stefan Riedener, and Tatjana Višak for much useful discussion.
The Asymmetry is the view in population ethics that, while we ought to avoid creating additional bad lives, there is no requirement to create additional good ones. The question is how to embed this view in a complete normative theory, and in particular one that treats uncertainty in a plausible way. After reviewing the many difficulties that arise in this area, I present general ‘supervenience principles’ that reduce arbitrary choices to uncertainty-free ones. In that sense they provide a method for aggregating across states of nature. But they also reduce arbitrary choices to one-person cases, and in that sense provide a method for aggregating across people. The principles are general in that they are compatible with total utilitarianism and ex post prioritarianism in fixed-population cases, and with a wide range of ways of extending these views to variable-population cases. I then illustrate these principles by writing down a complete theory of the Asymmetry, or rather several such theories to reflect some of the main substantive choice-points. In doing so I suggest a new way to deal with the intransitivity of the relation ‘ought to choose A over B’. Finally, I consider what these views have to say about the importance of extinction risk and the long-run future.
Consider two possible long-term futures for humanity: the Good Future, containing 10^20 flourishing future human lives, and the Extinct Future, containing no future human lives at all. According to some views in population ethics, we would have incredibly strong reasons to bring about the Good Future rather than the Extinct one in a straight choice. Take, for example, Totalism, the view that we ought to act so as to maximize expected total welfare. The sheer number of people in the Good Future means that the total amount of welfare at stake in this choice would be vastly greater than any sacrifice the current generation could plausibly make, or any benefit they could confer on themselves. Indeed, merely replacing a one-in-a-billion chance of the Extinct Future with a one-in-a-billion chance of the Good Future would justify the destruction of all the wellbeing of all the eight billion people currently alive—ten times over.
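To spell out the arithmetic behind this last claim, suppose, purely for concreteness, that each flourishing life (future or present) contributes about one unit of welfare. Then the probability shift is worth

$$10^{-9} \times 10^{20} = 10^{11} \text{ units of expected total welfare,}$$

while destroying all the wellbeing of the eight billion people currently alive costs about $8 \times 10^{9}$ units. The ratio, $10^{11} / (8 \times 10^{9}) = 12.5$, is indeed more than ten.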
One could, perhaps, avoid such striking conclusions by appealing to non-welfarist considerations such as the rights of the present generation. But even within the domain of welfarist population ethics, many people are drawn to the Asymmetry:

In a straight choice between creating no one and creating some additional people, with no effect on those who independently exist:

(a) If the additional people would certainly have bad lives, we ought not to create them.

(b) If the additional people would certainly have good lives, it is permissible but not required to create them.
The Asymmetry entails that it would be permissible to choose the Extinct Future over the Good Future in a straight choice. This might not be the full story: just as one might supplement Totalism with a story about rights, one might interpret the Asymmetry as a pro tanto principle, and bring in other ingredients that would speak in favour of the Good Future. For example, if the continuance of humanity is morally important in non-welfarist ways, then it might turn out that, all things considered, we ought to choose the Good Future, but not at anything like the cost implied by Totalism. Be that as it may, my presumption is that there is some class of considerations—something like considerations of impartial beneficence—such that Totalism and the Asymmetry are straightforwardly disagreeing theories about what one ought to do as far as those considerations go; for the rest of this paper I am talking about what one ought to do, in just that sense.
The problem is that, unlike Totalism, the Asymmetry is nowhere near a complete theory, and in particular it is silent about what to do when we are uncertain about the outcomes of our acts—as indeed we always are. What if, again, the most we can do is reduce the probability of extinction while imposing some more certain cost on those currently alive? Totalism, by incorporating expected value theory, provides a clean story about how to think about such choices in principle, no matter how complicated things might be in practice. As far as I am aware, there is no worked-out view that combines the Asymmetry with a plausible story about uncertainty.
The goal of this paper is to fill this gap, and more generally to present a complete, extensionally plausible theory of the Asymmetry. The thrust of the paper is therefore more constructive than critical: the point is to beat a defensible path through the thicket of intuitions and theoretical puzzles that surround the Asymmetry, clearing the way for further exploration. This path-beating inevitably involves picking sides in some controversies, and I will make clear some of the main turning-points along the way. Indeed, I will ultimately present four possible destinations corresponding to different ways of resolving what strike me as the most important types of trade-off.
Here is the plan. In section 2, I present the best-known extant approach to the Asymmetry, the so-called Harm Minimization View. I use this to introduce the main difficulties that arise in theorizing about the Asymmetry, and to lay down some markers. In particular, I explain why it is difficult to reconcile the Asymmetry with expected value theory.
The centerpiece of the paper is section 3, in which I introduce some generic principles for choice under uncertainty. These principles are ‘generic’ in that they have nothing to do with the Asymmetry per se. In fixed-population cases (that is, in cases where the same people exist no matter what) they are compatible both with Totalism and with ‘ex post’ prioritarianism, and they allow for a wide range of views about how those fixed-population theories should extend to variable populations. These principles lead to a ‘Supervenience Theorem’, proved in the Appendix, that reduces arbitrary choice scenarios to a class of simple choice scenarios. This class of simple choices can be taken to comprise either uncertainty-free choices or uncertain choices involving only one person. The Supervenience Theorem can thus be seen as a way of aggregating either across states of nature or across people.
It remains, then, to produce a plausible theory of the Asymmetry for these simple choice scenarios. In section 4, I consider extant proposals for determining the set of permissible options in any given choice scenario by comparing available options two at a time. I show why these proposals are unsatisfactory, and make a better one, based on Schulze’s ‘beatpath’ method in voting theory. In section 5 I use this proposal to sketch several detailed theories of the Asymmetry, incorporating different responses to the issues raised in section 2. I conclude in section 6 by illustrating what some of these views say about extinction risk and more generally about the importance of the long-run future. The Appendix contains a formal statement and proof of the Supervenience Theorem.
To keep things simple, I’ll consider only effects on the welfare of humans, or more generally ‘people’. The numbers are simply for illustration, but are indicative of those found in Bostrom (2003). ↩︎
See for example Roberts (2011) for a survey. ↩︎
I regret that I do not have more to say to elucidate this point. In practice, I will generally suppose that Expected Totalism holds in fixed-population cases; the question then is how fixed-population Expected Totalism can be plausibly extended to include the Asymmetry. However, most of the discussion will not be premissed on fixed-population Expected Totalism, and this way of developing the project should be of interest even to readers who think fixed-population Expected Totalism must ultimately be modified to incorporate egalitarian concerns, personal prerogatives, special obligations, deontic constraints, risk aversion, and whatever else. ↩︎
Throughout, I will understand uncertainty in an orthodox Bayesian way, using a probability distribution to represent the epistemic state of the agent (but not taking a stand on whether the relevant probabilities are purely subjective or objective, e.g. evidential). I will say nothing on the topic of ‘Knightian uncertainty’, ‘ambiguity’, or ‘imprecise probabilities’, although this is arguably an important area, especially when thinking about the long-run future. An out-of-the-box view would represent the agent’s epistemic state by a set of probability measures, and use a ‘liberal’ decision principle, on which an option is permissible if and only if it would be permissible with respect to some probability function in the set (see Weatherson (2000); Moss (2015) for discussion). ↩︎
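A minimal sketch of the ‘liberal’ decision principle just described, on the simplifying assumption that permissibility relative to a single probability function means maximizing expected value relative to it. The representation of options and probability functions, and all names and numbers, are invented purely for illustration.

```python
# Sketch of the 'liberal' decision principle for imprecise probabilities:
# an option is permissible iff it is permissible (here: maximizes expected
# value) relative to SOME probability function in the agent's set.

def expected_value(values, prob):
    """Expected value of an option, where `values` maps states to welfare
    and `prob` maps the same states to probabilities."""
    return sum(prob[state] * values[state] for state in prob)

def liberally_permissible(options, prob_set):
    """Return the options that maximize expected value relative to at
    least one probability function in `prob_set`."""
    permissible = set()
    for prob in prob_set:
        evs = {name: expected_value(values, prob)
               for name, values in options.items()}
        best = max(evs.values())
        permissible.update(name for name, ev in evs.items() if ev == best)
    return permissible

# Toy example: two states; A is best under the first probability function,
# B under the second, and C under neither, so exactly A and B come out
# permissible.
options = {
    "A": {"s1": 1.0, "s2": 0.0},
    "B": {"s1": 0.0, "s2": 1.0},
    "C": {"s1": 0.1, "s2": 0.1},
}
prob_set = [{"s1": 0.9, "s2": 0.1}, {"s1": 0.2, "s2": 0.8}]
```

Note that the liberal principle makes more options permissible as the set of probability functions grows, which is one reason the topic matters when thinking about the deeply uncertain long-run future.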
McDermott (1982), Meacham (2012), and Cohen (2019) do make brief suggestions on this front, which I will criticize below. ↩︎