
Some of the deepest puzzles in ethics concern how to coherently extend ordinary beneficence and decision theory to extreme cases. The notorious puzzles of population ethics, for example, ask us how to trade off quantity and quality of life, and how we should value future generations. Beckstead & Thomas discuss a paradox for tiny probabilities and enormous values, asking how we should take risk and uncertainty into account. Infinite ethics raises problems for both axiology and decision theory: it may be unclear how to rank different infinite outcomes, and it’s hard to avoid the “fanatical” result that the tiniest chance of infinite value swamps all finite considerations (unless one embraces alternative commitments that may be even more counterintuitive).

Puzzles galore! But these puzzles share a strange feature, namely, that people often mistakenly believe them to be problems specifically for utilitarianism.

 

[Image caption: "Fear not: there’s enough for everyone!"]

Their error, of course, is that beneficence and decision theory are essential components of any complete moral theory. (As even Rawls acknowledged, “All ethical doctrines worth our attention take consequences into account in judging rightness. One which did not would simply be irrational, crazy.” Rossian pluralism explicitly acknowledges a prima facie duty of beneficence that must be weighed against our other—more distinctively deontological—prima facie duties, and will determine what ought to be done if those others are not applicable to the situation at hand. And obviously any account relevant to fallible human beings needs to address how we should respond to uncertainty about our empirical circumstances and future prospects.)

Why, then, would anyone ever think that these puzzles were limited to utilitarianism? One hypothesis is that only utilitarianism is sufficiently clear and systematic to actually attempt an answer to these questions. Other theories too often remain silent and non-committal. Being incomplete in this way is surely not an advantage of those theories, unless there’s reason to think that a better answer will eventually be fleshed out. But what makes these questions such deep puzzles is precisely that we know that no wholly satisfying answer is possible. It’s a “pick your poison” situation. And there’s nothing clever about mocking utilitarians for endorsing a poisonous implication when it’s provably the case that every possibility remaining amongst the non-utilitarian options is similarly poisonous!

When all views have costs, you cannot refute a view just by pointing to one of its costs. You need to actually gesture towards a better alternative, and do the difficult work of determining which view is the least bad. Below I’ll briefly step through some basic considerations that bring out how difficult this task can be.

Population Ethics

In ‘The New Moral Mathematics’ (reviewing WWOTF), Kieran Setiya sets up a false choice between total utilitarianism and “the intuition of neutrality” which denies positive value to creating happy lives. (Note that MacAskill’s longtermism is in fact much weaker than total utilitarianism.) He swiftly dismisses the total view for implying the repugnant conclusion. But he doesn’t mention any costs to neutralism, which may give some readers the misleading impression that this is a cost-free, common-sense solution. It isn’t. Far from it.

Neutrality implies that utopia is (in prospect) no better than a barren, lifeless rock. It implies that the total extinction of all future value-bearers could be more than compensated for by throwing a good enough party for those who already exist. These implications strike me as far more repugnant than the repugnant conclusion. (If you think the big party doesn’t sound so bad, given that you’re already invited, instead imagine Cleopatra making the decision millennia ago.) Moreover, neutrality doesn’t even fully avoid the original problem! It still doesn’t imply that future utopia A is better than the repugnant world Z; just that they are “on a par”. (This is a result that totalists can just as well secure through a more limited critical range that still allows awesome lives to qualify as positive additions to the world.)

To fully avoid repugnance, we want a population axiology that can at least deliver both of the following verdicts:

(i) utopia (world A) is better than Parfit’s world Z, and

(ii) utopia is better than a barren rock.

The total view can’t secure (i), but at least it’s got (ii) covered. Neutrality gets us neither! (The only hope for both, I think, is some kind of variable value view, or possibly perfectionism, both of which allow that we have strong moral reasons to want more awesome, excellent lives to come into existence.)
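To give a rough sense of how a variable value view can secure both verdicts at once, here is a minimal sketch in Python using a toy saturating value function and made-up numbers of my own (not any of the specific proposals alluded to above): value grows with population size but levels off, so vast numbers of barely-good lives can no longer swamp a smaller utopia, while adding excellent lives still beats a barren rock.

```python
import math

def variable_value(avg_quality, population, saturation=1e9):
    """Toy variable value function: total value increases with population
    size but saturates, so quality eventually dominates sheer quantity."""
    return avg_quality * saturation * (1 - math.exp(-population / saturation))

utopia_A    = variable_value(avg_quality=100.0, population=1e10)   # ~1e11
world_Z     = variable_value(avg_quality=0.01,  population=1e100)  # ~1e7
barren_rock = variable_value(avg_quality=0.0,   population=0)      # 0

# Both desired verdicts: utopia beats world Z, and utopia beats the rock,
# while creating additional excellent lives still raises the total.
assert utopia_A > world_Z > barren_rock
```

Nothing hangs on this particular curve; the point is only that a bounded, increasing function of population size can rank A above Z without collapsing into neutrality.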

To bring out just how little is gained by neutrality, note that all the same puzzles re-emerge when trading off quantity and quality within a single life, where neutrality is clearly not an option. (The intrapersonal “neutral” view would hold that early death is harmless, and adding extra good time to your life—however wonderful that time might be—is strictly “on a par” with never having that time at all. Assuming that you’d prefer experiencing bliss to instant death, you already reject the “intuition of neutrality” in this domain!)

Consider the intrapersonal repugnant conclusion: a life containing zillions of barely-positive drab moments is allegedly better for you than a century in utopia. Seems wrong! So how are you going to avoid it? Not by appealing to neutrality, for the reasons we’ve just seen. An intrapersonal analogue of variable value or critical range views is surely more promising, though these views have their own significant costs and limitations (follow the links for details). Still, if you settle on a view that works to avoid the intrapersonal repugnant conclusion, why not carry it over to the inter-personal (population) case, if you’re also concerned to avoid the repugnant conclusion there?
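For the arithmetic behind the intrapersonal repugnant conclusion, here is a minimal sketch with hypothetical welfare numbers of my own choosing (nothing in the post fixes these figures): on a simple additive view, enough barely-positive moments always end up outweighing a century of bliss.

```python
# Hypothetical welfare values per hour, chosen purely for illustration.
UTOPIA_VALUE_PER_HOUR = 100.0   # a blissful hour
DRAB_VALUE_PER_HOUR = 0.001     # a barely-positive, drab hour

century_in_utopia = 100 * 365 * 24 * UTOPIA_VALUE_PER_HOUR  # ~8.76e7 units
zillion_drab_hours = 10**12 * DRAB_VALUE_PER_HOUR           # 1e9 units

# Simple addition ranks the long drab life above the century in utopia.
assert zillion_drab_hours > century_in_utopia
```

However you rescale the per-moment values, a long enough drab life wins on the total view; that is the conclusion the alternative views above are trying to block.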

Once you acknowledge that (i) the intrapersonal repugnant conclusion is just as counterintuitive as the inter-personal one, and yet (ii) unrestricted “neutrality” about creating new moments of immense value is not a feasible option, it becomes clear that neutrality about creating happy lives is no panacea for the puzzles of population ethics. Either we make our peace with some form of the repugnant conclusion, or we settle on an alternative account that’s nonetheless compatible with ascribing value to creating new loci of value (times or lives) at least when they are sufficiently good. Folks who think neutrality offers an acceptable general solution here are deluding themselves.

Decision Theory

In an especially striking example of conflating utilitarianism with anything remotely approaching systematic thinking, popular substacker Erik Hoel recently characterized the Beckstead & Thomas paper on decision-theoretic paradoxes as addressing “how poorly utilitarianism does in extreme scenarios of low probability but high impact payoffs.” Compare this with the very first sentence of the paper’s abstract: “We show that every theory of the value of uncertain prospects must have one of three unpalatable properties.” Not utilitarianism. Every theory.

(Alas, when I tried to point this out in the comments section, after a brief back-and-forth in which Erik initially doubled down on the conflation, he abruptly decided to instead delete my comments explaining his mistake.)

Just to briefly indicate the horns of the paradox: in order to avoid the “recklessness” of orthodox (risk-neutral) expected utility in the face of tiny chances of enormous payoffs, you must either endorse timidity or reject transitivity. Timidity “permit[s] passing up arbitrarily great gains to prevent a tiny increase in risk.” (Relatedly: risk-averse views may imply that we should prefer to destroy the world rather than risk a 1 in 10 million chance of a dystopian future, even on the assumption that a correspondingly wonderful utopia is vastly more likely to otherwise eventuate.) Doesn’t sound great! And rejecting transitivity strikes me as basically just giving up on the project of coherently systematizing how we should respond to uncertain prospects; I don’t view that as an acceptable option at all.
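To make the “recklessness” horn concrete, here is a minimal sketch with payoff numbers of my own (purely illustrative, not from Beckstead & Thomas): risk-neutral expected value ranks a one-in-a-million shot at an enormous payoff above a guaranteed moderate gain, which is exactly the verdict that strikes many as too reckless.

```python
# Illustrative values in arbitrary units of goodness.
sure_thing = 1_000.0          # guaranteed moderate gain

gamble_probability = 1e-6     # one-in-a-million chance of success
gamble_payoff = 1e12          # enormous payoff if it comes off

expected_sure = sure_thing                            # 1,000
expected_gamble = gamble_probability * gamble_payoff  # 1,000,000

# Orthodox (risk-neutral) expected value says: take the gamble.
assert expected_gamble > expected_sure
```

Timid views block this verdict by discounting or bounding the huge payoff, at the cost of sometimes passing up arbitrarily great gains to avoid tiny increases in risk.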

Conclusion

It’s really not easy for anyone to avoid uncomfortable verdicts in these puzzle cases. However bad the “utilitarian” verdict looks at first blush, a closer examination suggests that many alternatives are likely to be significantly worse. (When discussing related issues in ‘Double or Nothing Existence Gambles’, I suggest that a moderate mix of partiality and diminishing marginal value of intrinsic goods might help in at least some cases. But it’s really far from obvious how best to deal with these problems!)

Most of those who are most confident that the orthodox utilitarian answers are absurd haven’t actually thought through any sort of systematic alternative, so their confidence seems severely misplaced. Personally, I remain hopeful that both the Repugnant Conclusion and (at least some) reckless ‘Double or Nothing’ existence gambles can be avoided with appropriate tweaks to our axiology. But I’m far from confident: these puzzles are really tricky, and the options all have severe costs! Non-consequentialists may superficially look better by refusing to even talk about the problems, so—like skilled politicians—they cannot so easily be pinned down. But gaps in a theory shouldn’t be mistaken for solutions. It’s important to appreciate that any coherent completion of their view will likely end up looking just as bad—or worse.

As a result, I think many people who (like Erik Hoel) think they are opposed to utilitarianism are really reacting against a broader phenomenon, namely, systematic theorizing. The only way to entirely avoid the problems they deem so sneer-worthy is to stop thinking. Personally, I just can’t shake the feeling that that would be the most repugnant response of all.

Comments (38)

There are other appeals of neutrality (about adding "positive" lives or "goods") besides just avoiding the RC:

  1. It can avoid the Very Repugnant Conclusion, although some of your proposed solutions like critical levels would work, too.
  2. Adding people at a cost to existing or otherwise necessary people. See here and here. I pretty much have the opposite intuition from you on the extinction vs party example, but I think the use of a party may confound people with intuitions against frivolousness or hedonism and is relatively low stakes for existing people. We can imagine cases where what's at stake for existing people seems much more serious: dreams or important life goals, suffering, freedom, spending time with loved ones, their lives (including replacement arguments, and the logic of the larder), and so on. The views you defend here allow all of these to be outweighed by the addition of new people. Furthermore, while there may still be strong instrumental reasons for respecting reproductive freedom regardless (which you've discussed elsewhere), neutrality seems to give a stronger principled reason, since the welfare of a new child wouldn't make up for a parent's overall loss in welfare on its own under any circumstance. Getting the right answer for more principled reasons is more satisfying and on firmer ground.
  3. In intrapersonal tradeoffs on theories where preferences matter terminally, it fits liberal, pluralistic and anti-paternalistic intuitions better. Under preference views that allow the addition of new contingent preferences to outweigh the lesser satisfaction of necessary preferences (and so violate neutrality with respect to adding satisfied preferences, but in a specific way), and ignoring indirect and instrumental reasons (which of course matter substantially in practice), it would in principle be good for the individual for you to violate any or all of their own existing preferences in order to induce/create and satisfy sufficiently strong new preferences in them. Preference-affecting views — basically person-affecting views, but treating individual preferences like persons* — can avoid this problem, and some can avoid "symmetric" problems at the same time, e.g. violating preferences to eliminate or prevent frustrated preferences (at least in the cases where it seems worst to do so).

* although there are more fundamental distinctions we could make if we wanted, e.g. between intrapersonal and interpersonal tradeoffs.

Hi Michael!  Thanks for your comments.

  1. I think my dialectical strategy works similarly against appealing to the Very Repugnant Conclusion to support neutrality.  To avoid the intra-personal VRC (compatibly with other commonsense commitments about the harm of death), we'd need a theory that assigns suitably more weight to quality than quantity. And if you've got such a theory, you don't need neutrality for interpersonal cases either.
  2. Fair enough if you just don't share my intuitions.  I think it would be horribly evil for the present generation to extinguish all future life, merely to moderately benefit ourselves (even in not purely frivolous ways).  When considering different cases, where there are much graver costs to existing people (e.g. full-blown replacement), I share the intuition that extreme sacrifice is not required; but appeal to some form of partiality or personal prerogative seems much more appropriate to me than denying the value of the beneficiaries. (I develop a view along these lines in my paper, 'Rethinking the Asymmetry'.)  Just like the permissibility of keeping your organs inside your own body is no reason to deny the value of potential beneficiaries of organ donation.

    That last point also speaks to the putative desirability of offering a "stronger principled reason".  Protecting bodily autonomy by denying the in-principle value of people in need of organ transplants would be horrifying, not satisfying.  So I don't think that question can be adjudicated independently of the first-order question of which view is simply right on the merits.
  3. How to deal with induced or changing preferences is a real problem for preferentist theories of well-being, and IMO is a good reason to reject all such views in favour of more objective alternatives.  Neutrality about future desires helps in some cases, as you note, but is utterly disastrous in others (e.g. potentially implying that a temporarily depressed child or teenager, who momentarily loses all his desires/preferences, might as well just die, even if he'd have a happy, flourishing future).

appeal to some form of partiality or personal prerogative seems much more appropriate to me than denying the value of the beneficiaries

I don't think this solves the problem, at least if one has the intuition (as I do) that it's not the current existence of the people who are extremely harmed to produce happy lives that makes this tradeoff "very repugnant." It doesn't seem any more palatable to allow arbitrarily many people in the long-term future (rather than the present) to suffer for the sake of sufficiently many more added happy lives. Even if those lives aren't just muzak and potatoes, but very blissful. (One might think that is "horribly evil" or "utterly disastrous," and isn't just a theoretical concern either, because in practice increasing the extent of space settlement would in expectation both enable many miserable lives and many more blissful lives.)

ETA: Ideally I'd prefer these discussions not involve labels like "evil" at all. Though I sympathize with wanting to treat this with moral seriousness!

Interesting!  Yeah, a committed anti-natalist who regrets all of existence -- even in an "approximate utopia" -- on the grounds that even a small proportion of very unhappy lives automatically trumps the positive value of a world mostly containing overwhelmingly wonderful, flourishing lives  is, IMO, in the grips of... um (trying to word this delicately)... values I strongly disagree with.  We will just have very persistent disagreements, in that case!

FWIW, I think those extreme anti-natalist values are unusual, and certainly don't reflect the kinds of concerns expressed by Setiya that I was responding to in the OP (or other common views in the vicinity, e.g. Melinda Roberts' "deeper intuition that the existing Ann must in some way come before need-not-ever-exist-at-all Ben").

certainly don't reflect the kinds of concerns expressed by Setiya that I was responding to in the OP

I agree. I happen to agree with you that the attempts to accommodate the procreation asymmetry without lexically disvaluing suffering don't hold up to scrutiny. Setiya's critique missed the mark pretty hard, e.g. this part just completely ignores that this view violates transitivity:

But the argument is flawed. Neutrality says that having a child with a good enough life is on a par with staying childless, not that the outcome in which you have a child is equally good regardless of their well-being. Consider a frivolous analogy: being a philosopher is on a par with being a poet—neither is strictly better or worse—but it doesn’t follow that being a philosopher is equally good, regardless of the pay.

...Having said that, I do think the "deeper intuition that the existing Ann must in some way come before need-not-ever-exist-at-all Ben" plausibly boils down to some kind of antifrustrationist or tranquilist intuition. Ann comes first because she has actual preferences (/experiences of desire) that get violated when she's deprived of happiness. Not creating Ben doesn't violate any preferences of Ben's.

I don't think so.  I'm sure that Roberts would, for example, think we had more reason to give Ann a lollipop than to bring Ben into existence and give him one, even if Ann would not in any way be frustrated by the lack of a lollipop.

The far more natural explanation is just that we have person-directed reasons to want what is good for Ann, in addition to the impersonal reasons we have to want a better world (realizable by either benefiting Ann or creating & benefiting Ben).

In fairness to Setiya, the whole point of parity relations (as developed, with some sophistication, by Ruth Chang) is that they -- unlike traditional value relations -- are not meant to be transitive.  If you're not familiar with the idea, I sketch a rough intro here.

I think it would be horribly evil for the present generation to extinguish all future life, merely to moderately benefit ourselves (even in not purely frivolous ways).

"Extinguish" evokes the wrong connotations since neutrality is just about not creating new lives. You make it seem like there's going to be all this life in the future and the proponents of neutrality want to change the trajectory. This introduces misleading connotations because some views with neutrality say that it's good to create new people if this is what existing people want, but not good to create new people for its own sake.

I think using the word "extinguish" is borderline disingenuous. [edit: I didn't mean to imply dishonesty – I was being hyperbolic in a way that isn't conducive to good discussion norms.]

Likewise, the Cleopatra example in the OP is misleading – at the very least it begs the question. It isn't obvious that existing people not wanting to die is a reason to bring them into existence. Existing people not wanting to die is more obviously a reason to not kill them once they exist. 

"Disingenuous"?  I really don't think it's OK for you to accuse me of dishonesty just because you disagree with my framing of the issue.  Perhaps you meant to write something like "misleading".

But fwiw, I strongly disagree that it's misleading.  Human extinction is obviously a "trajectory change". Quite apart from what anyone wants -- maybe the party is sufficient incentive to change their preferences, for example -- I think it's perfectly reasonable to expect the continuation of the species by default.  But I'm also not sure that default expectations are what matters here. Even if you come to expect extinction, it remains accurate to view extinction as extinguishing the potential for future life. 

Your response to the Cleopatra example is similarly misguided. I'm not appealing to "existing people not wanting to die", but rather existing people being glad that they got to come into existence, which is rather more obviously relevant. (But I won't accuse you of dishonesty over this.  I just think you're confused.)

Sorry, I didn’t mean to accuse you of dishonesty (I'm adding an edit to the OP to make that completely clear). I still think the framing isn’t defensible (but philosophy is contested and people can disagree over what's defensible).

Even if you come to expect extinction, it remains accurate to view extinction as extinguishing the potential for future life.

Yes, but that’s different from extinguishing future people. If the last remaining members of a family name tradition voluntarily decide against having children, are they “extinguishing their lineage”? To me, “extinguishing a lineage” evokes central examples like killing the last person in the lineage or carrying out an evil plot to make the remaining members infertile. It doesn’t evoke examples like “a couple decides not to have children.”

To be clear, I didn’t mean to say that I expect extinction. I agree that what we expect in reality doesn’t matter for figuring out philosophical views (caveat). I mentioned the point about trajectories to highlight that we can conceive of worlds where no one wants humanity to stick around for non-moral reasons (see this example by Singer). (By “non-moral reasons,” I’m not just thinking of some people wanting to have children. When people plant trees in their neighbourhoods or contribute to science, art, business or institutions, political philosophy, perhaps even youtube and tik tok, they often do so because it provides personal meaning in a context where we expect civilization to stay around. A lot of these activities would lose their meaning if civilization was coming to an end in the foreseeable future.) To evaluate whether neutrality about new lives  is repugnant, we should note that it only straightforwardly implies “there should be no future people” in that sort of world.

Your response to the Cleopatra example is similarly misguided. I'm not appealing to "existing people not wanting to die", but rather existing people being glad that they got to come into existence, which is rather more obviously relevant.

I think I was aware that this is what you meant. I should have explained my objection more clearly. My point is that there's clearly an element of deprivation when we as existing people imagine Cleopatra doing something that prevents us from coming to exist. It's kind of hard – arguably even impossible – for existing people to imagine non-existence as something different from no-longer-existence. By contrast, the deprivation element is absent when we imagine not creating future people (assuming they never come to exist and therefore aren’t looking back to us from the vantage point of existence).

To be clear, it's perfectly legitimate to paint a picture of a rich future where many people exist and flourish to get present people to care about creating such a future. However, I felt that your point about Cleopatra had a kind of "gotcha" quality that made it seem like people don't have coherent beliefs if they (1) really enjoy their lives but (2) wouldn't mind if people at some point in history decide to be the last generation. I wanted to point out that (1) and (2) can go together. 

For instance, I could be "grateful"  in a sense that's more limited than the axiologically relevant sense – "grateful" in a personal sense but not in the sense of "this means it's important to create other people like me." (I'm thinking out loud here, but perhaps this personal sense could be similar to how one can be grateful for person-specific attributes like introversion or a strong sense of justice. If I was grateful about these attributes in myself, that doesn't necessarily mean I'm committed to it being morally important to create people with those same attributes. In this way, people with the neutrality intuition may see existence as a person-specific attribute that only people who have that attribute can meaningfully feel grateful about. [I haven't put a lot of thought into this specific account. Another reply could be that it's simply unclear how to go about comparing one's existence to never having been born.])

Neutrality about future desires helps in some cases, as you note, but is utterly disastrous in others (e.g. potentially implying that a temporarily depressed child or teenager, who momentarily loses all his desires/preferences, might as well just die, even if he'd have a happy, flourishing future).

I think if we're already counting implicit preferences, so that, for example, people still have desires/preferences while in deep dreamless sleep and those still count, it's very hard to imagine someone losing all of their desires/preferences without dying or otherwise having their brains severely damaged, in which case their moral status seems pretty questionable. There's also a question of whether this has broken (psychological) continuity enough that we shouldn't consider this the same person at all: this case could be more like someone dying and either being replaced by a new person with a happy, flourishing future or just dying and not being replaced at all. Either way, the child has already died.

If they'd have the same desires/preferences in the future as they had before temporarily losing them, then we can ask if this is due to a causal connection from the child before the temporary loss. If not, then this again undermines the persistence of their identity and the child may have already died either way, since causal connection seems necessary. If there is such a causal connection, then our answer here should probably match how we think about destructive mind uploading and destructive teleportation.

Of course, it may be the case that a temporarily depressed child just prefers overall to die (or otherwise that that's best according to the preference-affecting view), even if they'd have a happy, flourishing future. The above responses wouldn't work for this case. But keeping them alive involuntarily also seems problematic. Furthermore, if we think it's better for them to stay alive even if they're indifferent overall, then this still seems paternalistic in a sense, but less objectionably so, and if we're making continuous tradeoffs, too, then there would be cases where we would keep them alive involuntarily for their sake and against their wishes.

There's also the possibility that a preference still continues to count terminally even after it's no longer held, so even after a person dies or their preferences change, but I lean towards rejecting that view.

The connection to personal identity is interesting, thanks for flagging that!  I'd emphasize two main points in reply:

(i) While preference continuity is a component of personal identity, it isn't clear that it's essential. Memory continuity is classically a major component, and I think it makes sense to include other personality characteristics too.  We might even be able to include values in the sense of moral beliefs that could persist even while the agent goes through a period of being unable to care in the usual way about their values; they might still acknowledge, at least in an intellectual sense, that this is what they think they ought to care about.  If someone maintained all of those other connections, and just temporarily stopped caring about anything, I think they would still qualify as the same person. Their past self has not thereby "already died".

(ii) re: "paternalism", it's worth distinguishing between acting against another's considered preferences vs merely believing that their considered preferences don't in fact coincide with their best interests.  I don't think the latter is "paternalistic" in any objectionable sense.  I think it's just obviously true that someone who is depressed or otherwise mentally ill may have considered preferences that fail to correspond to their best interests.  (People aren't infallible in normative matters, even concerning themselves.  To claim otherwise would be an extremely strong and implausible view!)

fwiw, I also think that paternalistic actions are sometimes justifiable, most obviously in the case of literal children, or others (like the temporarily depressed!) for whom we have a strong basis to judge that the standard Millian reasons for deference do not apply.

But that isn't really the issue here. We're just assessing the axiological question of whether it would, as a matter of principle, be bad for the temporary depressive to die--whether we should, as mere bystanders, hope that they endure through this rough period, or that they instead find and take the means to end it all, despite the bright future that would otherwise be ahead of them.

  1. Agreed.
  2. That's a good point. I'd say the organ transplant case is disanalogous for basically person-affecting reasons (in the case where these contingent people don't come to exist, they have no need or interest to further satisfy), but to evaluate this claim of disanalogy, we need to consider "the first-order question of which view is simply right on the merits", as you say. (I'm not sympathetic to denying impartiality, though, and I don't think it solves the problem for tradeoffs between other people.)
  3. I find the alternatives to desire theories worse overall, based on the objections to them you raise in your article and similar ones.

I'm curating this post. I think it helps fight a real confusion — the idea that utilitarianism (or consequentialism) is the only moral theory that needs to grapple with extremely counterintuitive (or "repugnant") conclusions. 

As the author writes, "gaps in a theory shouldn’t be mistaken for solutions." (I'm not, however, nearly as confident that consequentialism is ultimately the best answer.)

I also want to highlight:

Amen, Richard. Amen. 

Richard - Thank you for a super post. A great statement of the view that problem cases held against utilitarianism would also be problems for any other theory that aimed to be systematic.

I don't entirely agree that to stop systematic theorising is to stop thinking.  Thinking can still be applied to making good decisions in particular cases and the balance between the particular and general principles can be debated.   

I totally support your arguments in your post and your replies against neutrality on creating positive lives.  I think this blog post by Joseph Carlsmith also makes the case against neutrality very well.

Thank you also for the  recent series of fine articles on your blog, Good Thoughts.  I would strongly recommend this to anyone interested in moral philosophy, utilitarianism and EA. 

Maybe a more descriptive title next time e.g. "These ethics puzzles aren't just challenges for utilitarianism" or something?

Despite this post being curated and me reading nearly all your posts, I personally ignored this one because it made me think it was something like "I've turned some sophisticated philosophical concepts/arguments into fun, simplistic games that you all can understand!", whereas it actually makes an extremely good point that I wish people recognised much more often. (I finally gave in and read it when I saw it on the EA UK newsletter just now.)

Mostly too late now as it's no longer curated, plus I'm only one data point and not really your target audience, but maybe still worth changing and/or a general thing to bear in mind for the future.

Ah, yeah that makes sense.  Thanks for the feedback!

Just to briefly indicate the horns of the paradox: in order to avoid the “recklessness” of orthodox (risk-neutral) expected utility in the face of tiny chances of enormous payoffs, you must either endorse timidity or reject transitivity.

(...)

And rejecting transitivity strikes me as basically just giving up on the project of coherently systematizing how we should respond to uncertain prospects; I don’t view that as an acceptable option at all.

 

On orthodox expected utility theory (EUT), boundedness (and hence timidity, if we can conceptualize "enormous payoffs"*) follows from standard decision-theoretic assumptions. Unbounded EU maximization violates the sure-thing principle, is vulnerable to Dutch books and is vulnerable to money pumps, all plausibly irrational. See, e.g. Paul Christiano's comment with St. Petersburg lotteries and my response. So, it's pretty plausible that unbounded EU maximization (and perhaps recklessness generally) is just inevitably formally irrational and similarly gives up on the same project of coherent systematization. Timidity seems like the only rational option. Even if it has unintuitive implications, it at least doesn't conflict with principles of rationality.

However, I'm not totally sure, and I just wrote this post to discuss one of my major doubts. I do think all of this counts against normative realism about decision theory, though, and so Harsanyi's utilitarian theorem and probably moral realism generally.

* One might instead just respond that there are no enormous payoffs. We can only talk about enormous payoffs with timid/bounded EUT because we have two kinds of value: (in the kinds of cases we're interested in) impartial additive value, and decision-theoretic utility, as a function of this impartial additive value.
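To make the St. Petersburg point above concrete, here is a minimal sketch (my own toy illustration, not the construction in the linked comments): with an unbounded linear utility function, the partial sums of the lottery's expectation grow without limit, whereas a bounded utility function keeps the expectation finite.

```python
import math

def st_petersburg_partial_ev(n_terms, utility):
    """Sum the first n_terms of the St. Petersburg expectation:
    with probability 2**-k the prize is 2**k units, for k = 1, 2, ..."""
    return sum((2 ** -k) * utility(2 ** k) for k in range(1, n_terms + 1))

def linear(payoff):
    return payoff                        # unbounded utility

def bounded(payoff):
    return 1 - math.exp(-payoff / 100)   # bounded above by 1

# Each extra term adds a full unit to the linear expectation (it diverges),
# while the bounded expectation converges to a small finite limit below 1.
print(st_petersburg_partial_ev(50, linear))    # 50.0
print(st_petersburg_partial_ev(50, bounded))   # a small finite value, well under 1
```

With the bounded utility, no prize, however huge, can contribute more than its probability times the utility ceiling; that is the formal face of the "timidity" discussed above.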

Also, I saw this cited somewhere, I think showing that there's a Dutch book that results in a sure loss for any unbounded utility function (I haven't read it myself yet to verify this, though):

https://www.jstor.org/stable/3328594

https://onlinelibrary.wiley.com/doi/abs/10.1111/1467-8284.00178

https://academic.oup.com/analysis/article-abstract/59/4/257/173397

(All links for the same paper.)

EDIT: It's an infinite sequence of bets, each of which has positive EV, so you should take each if offered in order, one at a time, but all of them together lead to a sure loss, because each bet's win condition is the lose condition for the next bet, and the loss is equal to or greater in magnitude than the win value. However, to guarantee a loss, there's no bound on the number of bets you'll need to make, although never infinitely many (with probability 0, if the conjunction of the conditions has probability 0), like repeated double or nothing.

Though note that infinite sequences of choices are a well known paradox-ridden corner of decision theory, so proving that a theory falls down there is not conclusive.

I feel that exotic cases like this are interesting and help build up a picture of difficult cases for theories to cover, but don't count strongly against particular theories which are shown to fail them. This is because it isn't clear whether (1) any rival theories can deal with the exotic case, or (2) whether usual conditions (or theories) need to be slightly modified in the exotic setting. In other words, it may be another area where the central idea of Richard's post ('Puzzles for Everyone') applies.

There are also other cases, involving St. Petersburg-like lotteries as I mentioned in my top-level comment, and possibly others that only require a bounded number of decisions. There's a treatment of decision theory here that derives "boundedness" (EDIT: lexicographically ordered ordinal sequences of bounded real utilities) from rationality axioms extended to lotteries with infinitely many possible outcomes:

https://onlinelibrary.wiley.com/doi/pdf/10.1111/phpr.12704

I haven't come across any exotic cases that undermine the rationality of EU maximization with bounded utility functions relative to unbounded EU maximization, and I doubt there are, because the former is consistent with or implied by extensions of standard rationality axioms. Are you aware of any? Or are you thinking of conflicts with other moral intuitions (e.g. impartiality or against timidity or against local dependence on the welfare of unaffected individuals or your own past welfare)? Or problems that are difficult for both bounded and unbounded, e.g. those related to the debate over causal vs evidential decision theory?

We could believe we need to balance rationality axioms with other normative intuitions, including moral ones, so we can favour the violation of rationality axioms in some cases to preserve those moral intuitions.

In fact, I don't even see the immorality of human extinction for someone interested in making people happy, unless the thought of our species' extinction, or of a future without our progeny, or the reality of not birthing progeny, somehow makes us unhappy for genetic or social or cultural reasons. And that might be true. I think there was a movie about that, a dystopian near-future where humanity suddenly loses the ability to reproduce, and society collapses. In principle, we could simply all use birth control, never have more children, and our species would die out.

Do you believe that we associate value with:

  •  our species continuance, 
  • or with procreation, 
  • or with having children in society, 
  • or some other feature of continuing to make people, 

whether or not we make happy people?

I ask because I believe that the intellectual support for challenging indifference to future happy people (as opposed to nonexistent people) is weak.

Hi Noah, just to be clear on the dialectic: my post isn't trying to argue a committed anti-natalist out of their view.  Instead, the population ethics section is just trying to make clear that Setiya's appeal to the "intuition of neutrality" is not a cost-free solution for someone with ordinary views who is worried about the repugnant conclusion, and in fact there are far better alternative solutions available that don't require resorting to neutrality.  (Here I take for granted that my target audience shares the initial intuition that "utopia is better than a barren rock".  Again, this isn't targeted at committed nihilists or anti-natalists who reject that claim.)

But you raise an interesting further question, of how one might best try to challenge "indifference to future happy people".  I think it's pretty difficult to challenge indifference in general. If someone is insistently indifferent to non-human animal welfare (or the global poor, or...), for example, you're not realistically going to be able to argue them out of that.

That said, I think some rational pressure can be put on indifference to future good lives through a combination of:

(i) showing that the putative advantages of the view (e.g. apparent helpfulness for avoiding the repugnant conclusion) are largely illusory, as I argue in the OP, and

(ii) showing that the view introduces further costs, e.g. violating independently plausible principles or axioms.

I'm not going to pursue the latter task in any depth here, but just to give a rough sketch of how it might go, consider the following dilemma for anti-natalists. Either:

(a) They deny that future good lives can have impersonal value, which is implausibly nihilistic, or
(b) They grant this axiological claim, but then violate the bridging principle that we always have some moral reason to prefer an impersonally better world to a worse one (such that, all else equal, we should bring about the better world).

Of course, they can always just bite the bullet and insist that they don't find (a) bothersome. I think argument runs out at that point.  You can't argue people out of nihilism.  All you can do, I think, is to express sadness and discourage others from following that bleak path.

Thank you for the reply!

I find the comparison you draw a bit pro-natalist. I can see comparing a utopia to a dystopia, where the utopia is obviously better. However, to reasonably compare utopia to a barren rock, you should believe that people living happy lives are somehow a container of moral value, and that by there being happy people existing, there's moral value or goodness in the fact of their existence.

I am comfortable with taking  a person with moral status who currently exists as a given, and running calculations based on their moral status from there, on the presumption that I might make their life better or worse, and a moral thing to do is to contribute to their well-being, given that I affect them somehow. However, I cannot do the same calculations in their absence from existence.

Funny that you use the phrase "a bit pro-natalist" as though that's a bad thing!   I am indeed unabashedly in favour of good things existing rather than nothing at all. And I'm also quite unembarrassed to share that I regard good lives to be a good thing :-)

So yes, I think good lives contain value.  You might be concerned to avoid the view that people are merely containers of value.  But you shouldn't deny that our lives (when good for us) are, in fact, valuable.

I think the sensible view here is to distinguish personal and impersonal value.  Creating value-filled lives is impersonally good: it makes the universe a better place.  But of course we shouldn't just care about the universe.  We should also care about particular persons.

Indeed, whenever possible (i.e. when dealing with existing persons, to whom we can directly refer), our concern should be primarily person-directed.  Visit your friend in the hospital for her sake, not just to boost aggregate happiness.  But not all moral action can be so personally motivated. To donate to charities or otherwise save "statistical" lives, we need to fall back on our general desire to promote the good.  And likewise for solving the non-identity problem: preferring better futures over worse ones (when different people would exist in either case). And, yes, this fall-back desire to promote better outcomes should also, straightforwardly, lead us to prefer good futures over lifeless futures (and lifeless futures over miserable futures).

I cannot do the same calculations in their absence from existence.

Then you would give me a lollipop even at the cost of bringing many miserable new lives into existence.  That would clearly be wrong.  But once you bring yourself to acknowledge reasons to not bring bad lives into existence (perhaps because the eventual person would regret your violating them), there's no deep metaphysical difference between that and the positive reasons to bring good lives into existence (which the eventual person would praise your following).

As far as people being containers of value, I don't find moral goodness in the mere fact of their existence at some level of happiness. This is easiest to tolerate with the view that a suffering person is not good by the fact of their existence or that a person that inflicts suffering on others is not good by the fact of their existence. However, I apply that view to all people, suffering or not, or that do or do not cause others to suffer. 

A person's existence at some level of happiness or altruistic behavior is not sufficient to establish that their existence is good. Instead, they might have good experiences or do good actions. They might be successfully selfish or successfully altruistic or both or neither. In general, I just don't see any person's existence as good, per se. Experiences can be good, and experiences of existent things are usually better than experiences of illusory things. Likewise, actions of people can be good, and actions with good consequences are usually better than actions with merely good intentions. So, decided in terms of experiences or actions, a person can "be" good, but that form of "being" is really a linguistic shorthand for the person's experiences or actions.

I wrote:

I cannot do the same calculations in their absence from existence.

and you replied:

Then you would give me a lollipop even at the cost of bringing many miserable new lives into existence.  That would clearly be wrong.

I was hoping to get some clarification of the thinking behind some of these thought experiments, and I guess I got some. I don't quite get the lollipop reference, maybe that you would give me the lollipop at that cost to me, and I wouldn't care? Or are you writing that, since I can't do any calculations, I'd take any exchange of value?

I think you mistook my meaning though. I'm capable of knowing that happy or unhappy future lives could exist and can understand when exchanges of value serve my interests and when they don't. 

However, unless I believe that people will exist, I don't consider them in my moral calculations. Some I do believe will exist, some I don't believe will exist. For example, I believe that many people will continue to be conceived over the next few decades. 

I should have written, "I cannot do the same calculations in the eternal absence of their existence." which is a bit more clear than "I cannot do the same calculations in the absence of their existence." In fact, I take the fact of people existing as I find it, directly or indirectly, for example, through statistics about their existence or by face-to-face meetings.

I don't agree with the presumptions that:

  • some set of people that you assert will exist therefore will exist. I readily agree that they could, but not that they will. I believe that many people will continue to be conceived over the next few decades, for sure. 
  • I might make people at some cost, or with some benefit, to myself.  I'm not fertile and I don't sponsor new conceptions somehow, except through my food choices, unintentionally.

I think Richard was trying to make the point that

  • You believe that actions that bring about or prevent the existence of future people have no moral valence
  • Therefore, you believe that an action that brings about suffering lives is also morally neutral
  • Therefore, you would take any small positive moral trade (like getting a lollipop) in exchange for bringing about arbitrarily large amounts of suffering lives

If I'm not misinterpreting what you've said, it sounds like you'd be willing to bite this bullet?

Maybe it's true that you won't actually be able to make these choices, but we're talking about thought experiments, where implausible things happen all the time.

I think that actions that avoid the conception of future people (for example, possible parents deciding to use birth control) have no moral significance as far as the future moral status of the avoided future being goes since that being never exists.  

Why would my thinking that actions like using birth control are morally neutral imply that I should also think that having children is morally neutral?

Perhaps I will understand this better if you explain this to me carefully like I'm not that smart.

Sounds like there are four distinct kinds of actions we're talking about here:

  1. Bringing about positive lives
  2. Bringing about negative lives
  3. Preventing positive lives
  4. Preventing negative lives

I think I was previously only considering the "positive/negative" aspect, and ignoring the "bringing about/preventing" aspect.

So now I believe you'd consider 3 and 4 to be neutral, and 2 to be negative, which seems fair enough to me.

Why would my thinking that actions like using birth control are morally neutral imply that I should also think that having children is morally neutral?

Aren't you implying here that you think having children is not morally neutral, and so you would consider 1 to be positive? Wouldn't 1 best represent existential risk reduction - increasing the chances that happy people get to exist? It sounds like your argument would support x-risk reduction if anything.

You are correct about my assessments of 2-4. I would add 5 and 6:

  1. Bringing about conception of positive lives (morally neutral)
  2. Bringing about conception of negative lives (morally negative)
  3. Preventing conception of positive lives (morally neutral)
  4. Preventing conception of negative lives (morally neutral) 
  5. making existing lives more negative (morally negative)
  6. making existing lives more positive (morally positive)

I see having children as either morally neutral or negative toward the child, not morally positive or negative toward the child. I see having children as morally negative toward other people, in our current circumstances. Overall, any decision to have children is hard to justify as morally neutral.

I guess I would feel more inclined to add:

7. Bringing about conception of positive lives that are also positive for other people (morally positive)

for the sake of the thought experiment.

Is there some perspective or implication that I'm still missing here?

I would like to know.

What's the basis for claiming that (1) is neutral, rather than positive?

Looking ahead, if I believe that I will be a necessary cause of the conception of someone else's happy life, that still leaves out the consequences for other people of the creation of that happy life. If I include those consequences, the balance of contributions to the self-interests of others (their welfare) tends towards neutral or negative. I should have written:

 1. Bringing about conception of positive lives (morally neutral or negative)

but qualified with "all other things equal", I think the conception is just morally neutral. Why not morally positive? I find it hard to convince myself that happy experience or satisfaction of self-interest is ever morally neutral, but that is what we're talking about. I actually think that it's impossible. However, for thought experiment's sake, I added in 7.

7. Bringing about conception of positive lives that are also positive for other people (morally positive)

If someone could prove to me that, on balance, a positive life also contributes to others' lives overall, and that control of that life were possible to allow both experiences and behaviors aligned with a positive life that is also positive for others, then choice of conception of such a life, with that control available and utilized, would be morally positive. However, I don't believe that that control is available, much less utilized.

I'm also not comfortable with the problem of individual harm for collective help. So, for example, a situation that I take from:

+.-.*.*.*.*.*.   (1 positive, 1 negative, 5 neutrals, 7 total)

to  

+.+.-.+.+.+.-.* (5 positives, 2 negatives, 1 neutral, 8 total)

that is, turning most neutrals into positives but some neutrals into negatives (* is neutral), does not necessarily appeal to me. Adding a person to a population could contribute positively to most lives but harm some as well. In that case, I tend to see the consequences (and so the choice) of the additional person as morally negative, depending on the details. 

Aaron Wolff, in their red team, mentions that eating other beings has less positive value for the consumer than negative value for the consumed. Those sorts of asymmetries are common in modern life as we live it now (for example, in goods production vs use).

From Aaron:

There is arguably also an asymmetry between how good a universe filled with pleasure would be compared to how bad a universe filled with pain would be because it is possible for pain to be much worse than pleasure is good. As Schopenhauer put it “A quick test of the assertion that enjoyment outweighs pain in this world, or that they are at any rate balanced, would be to compare the feelings of an animal engaged in eating another with those of the animal being eaten.” If you buy this argument, then even say a 25% chance of the future being dominated by astronomical suffering could offset a 75% chance of utopia or, similarly, if the future will likely contain relatively small pockets of astronomical suffering, that could fully offset any value outside those pockets. 

I'm just talking about intrinsic value here, i.e. all else equal.

You write: "Why not morally positive? I find it hard to convince myself that happy experience or satisfaction of self-interest is ever morally neutral, but that is what we're talking about. I actually think that it's impossible."

I have no idea what this means, so I still don't know why you deny that positive lives have positive value.  You grant that negative lives have negative (intrinsic) value.  It would seem most consistent to also grant that positive lives have positive (intrinsic) value. To deny this obvious-seeming principle, some argument is needed!

Hi, Richard. I will try again to think this through.

I think I understand your idea of intrinsic value better now. If I understand you properly:

  • when I consider improvement to a person's life quality/happiness/etc to be morally positive all other things equal, then the person has intrinsic value to me. If I consider this true regardless of the person's identity, then people have intrinsic value to me.

You might be right. For me a troublesome part of the thought experiment is the "all other things equal" part.

If I take a life of neutral happiness .*. and change it to one with much greater happiness .+., then I seem to have improved the situation. However, I am used to transformations like this:

.*.*. => .-.+.

.-.-. => .*.+.

.+.*. => .*.+.-.

.*.*.-. => .-.+.

.+.*.-. => .*.+.+.-.

.*.*.-.-.-. => .+.*.+.-.-.-.-.-.-.

I do not see things like:

.*. => .+.+.

.-. => .+.

The making people happy vs making happy people thought experiment presumes that improvement in a person's quality of life has no impact on others, or that making a person happy or making a happy person is about just one person's life. It is not.

When you write:

 You grant that negative lives have negative (intrinsic) value.  It would seem most consistent to also grant that positive lives have positive (intrinsic) value.

Let me expand my list of moral positive/negative distinctions a little:

  1. Bringing about conception of positive lives (morally neutral)
  2. Bringing about conception of negative lives (morally negative)
  3. Preventing conception of positive lives (morally neutral)
  4. Preventing conception of negative lives (morally neutral) 
  5. Making existing lives more negative (morally negative)
  6. Making existing lives more positive (morally positive)
  7. Bringing about conception of positive lives that are also positive for other people (morally positive)
  8. Bringing about conception of positive lives that are negative for other people (morally negative)
  9. Bringing about conception of negative lives that are negative for other people (morally negative)
  10. Bringing about conception of negative lives that are positive for other people (morally negative)

and now  generalize it:

  1. preventing conception (morally neutral toward the eternally nonconceived)
  2. making all lives more positive (morally positive toward all)
  3. making any lives more negative (morally negative toward some)
  4. making some lives more positive without affecting any others negatively (morally positive toward some)
  5. conceiving negative lives (morally negative toward the conceived)
  6. conceiving positive lives that are positive for all (morally positive toward all)
  7. conceiving positive lives (morally positive toward the conceived)

All that I mean there by morally positive or morally negative actions is actions that serve or work against the interests of a (sub)set of who the action affects. A person with a positive life for themselves is one with positive experience. A person with a positive life for others is one who takes actions with positive consequences for others.

I do use "intrinsic value" to mean something, but it's just one side of a partition of value by "instrumental" and "intrinsic." Whether a person has intrinsic value only comes up for me in thought experiments about control of others and the implications for their moral status. By "intrinsic value" I do not mean a value that is a property of a person's identity (e.g., a friend) or type (e.g., human). 

Rather, intrinsic value is value that I assign to someone that is value not contingent on whether they serve my interests by the manifestation of that value. For example, some person X might go have a romance with someone else even though I'm also interested in X romantically. That might upset me, but that person X has intrinsic value, so they get to go be romantic with whomever they want instead of me and I still factor them into my moral calculations. 

EDIT: as far as what it means to factor someone into my moral calculations, I mean that I consider the consequences of my actions for them not just in terms of selfish criteria, but also in terms of  altruistic or moral criteria. I run the altruism numbers, so to speak, or at least I should, for the consequences of my actions toward them.

A different partitioning scheme of value is between contingent value and absolute value, but that scheme starts to test the validity of the concept of value, so I will put that aside for the moment. 

I want to head off a semantics debate about moral status in case that comes up. For me, moral status of a person only means that they figure in moral calculations of the consequences of actions. For me, a person having moral status does not mean that the person:

  • is a container of some amount of goodness 
  • is intrinsically good
  • has an existence that is good, per se
  • is someone for whom betterment of experience is good 

... according to some concept of "good" that I do not believe applies (for example, approval by god).

OK, so hopefully I explained my thinking a bit more fully. 

Can someone reveal the paradox in my thinking to me, if there is one (or more)?

EDIT: As far as I know, I have not claimed that altruistic action or an altruistic consequence of an action is good in some way distinct from the fact of serving someone else's self-interests. That is, I am treating "morally good action" as another way of saying "serving someone else's self-interests." I have not identified any form of "moral goodness" that is distinct from  serving the self-interests of entities affected by actions or events in the world. 

I recognize that it seems naive to treat moral goodness as simply serving other's self-interests. I have to answer questions like:

  • who defines those interests or their importance to a person? (me, ultimately, using whatever evidence or causal models I have)
  • what epistemological assumptions support continuing my moral calculations for entities without instrumental value to me? (I assume that real-world events and other people are unpredictable or uncontrollable. Therefore, denying moral status to people I shouldn't can unexpectedly harm me in various ways.)
  • is it valid to call a moral calculus "moral" if it contrasts with how morality is typically decided? (If I am clear about it, then people understand my choice of terms and that my approach to altruism or morality is my personal one, not a description of some wider standard)
  • is it moral to serve my own self-interest? (No. It's selfish. I think selfishness is really interesting.)
  • why do I perform moral calculations?  (For selfish reasons.)
  • why do I ever behave  morally instead of selfishly? (Good question.)

I'm still stuck thinking that:

  • eternally nonexistent people have no moral status.
  • there is nothing morally preferable about a world of happy people as opposed to a barren rock, but there is something personally preferable about a world of happy people.

I mean that our lives are not consequence-free for others, so not morally neutral to live. Our lives are something along the lines of a negative-sum game, approaching a zero-sum game, but hard to equate to a positive sum game ever for all affected.

I haven't been discussing intrinsic value intentionally, more just the value to the self-interest of oneself or others. 

Is there no difference to you?
