The Repugnant Conclusion has always seemed straightforwardly and unobjectionably true to me. I've always been confused by its alleged repugnance, or why such an anodyne-seeming conclusion merits such a dramatic name.

This isn't like the other standard objections to utilitarianism. I'm not persuaded by concerns about utility monsters or trolley problems, but I feel the sting of those objections – they feel like bullets I need to bite. Whereas the Repugnant Conclusion just seems like a non-problem to me.

I say all this not to argue against concerns about the Repugnant Conclusion, but to motivate my question here. I'd like to have a better understanding of the intuitions that lead people to see this as such a serious problem, and whether I'm missing something that might cause me to put more weight on these sorts of concerns. I'm less interested in technical philosophical arguments here than in intuition pumps – simple thought experiments, or real-world scenarios, or related problems that might help me feel the sting of the objections a bit more.

I have asymmetric person-affecting intuitions, and I think the Repugnant Conclusion is a clear example of treating individuals as mere vessels/receptacles for value. Sacrificing the welfare of just one person so that another could be born — even if they would be far better off than the first person — seems wrong to me, ignoring other effects. That I could have an obligation to bring people into existence just for their own sake and at an overall personal cost seems wrong to me. The RC just seems like a worse and more extreme version of this.

In a hypothetical world where I'm the only one around, I feel I should basically be allowed to do whatever I want, as long as no one else will come into existence, and that I should have no reason to bring anyone into existence. If no one is born, I'm not harming anyone else or failing in my obligations to others, because they don't and won't exist to be able to experience harm (or to experience an absence of benefit, or worse benefits).

That I should make sacrifices to prevent people with bad lives from being born, or to help future people who would exist anyway (including ensuring better-off people are born instead of worse-off people), does seem right to me. If and because these people will exist, I can harm them or fail to prevent harm to them, and that would be bad.

I have some more writing on the asymmetry here.

I'm confused by your answer.

  • You say that "sacrificing the welfare of just one person so that another could be born... seems wrong". But the Repugnant Conclusion is a claim about the relative value of two possible populations, neither of which is assumed to be actual. So I don't understand how you reach the conclusion that, in judging that one of these populations is more valuable, by bringing it about you'd be "sacrificing" the welfare of the possible people in the other population. The situation seems perfectly symmetrical, so either you are "sacrificing"
...

This comment seems to me to be requesting clarification in good faith. Might someone who downvoted it explain why, if it wouldn't take too much time or effort? I'm fairly new to the forum and would like a more complete view of the customs.

Edited to add: Perhaps because it was perceived as lower effort than the parent comment, and required another high-effort post in response, which might have been avoided by a closer reading?

MichaelStJules · 8mo
I never downvoted his comments, and have (just now) instead upvoted them. However, I would interpret all of Pablo's points in his response not just as requesting clarification but also as objections to my answer, in a post that's only asking for people's reasons to object to the RC and is explicitly not about technical philosophical arguments (although it's not clear this should extend to replies to answers), just basic intuitions. I don't personally mind, and these are interesting points to engage with. However, I can imagine others finding it too intimidating/adversarial/argumentative.
Lumpyproletariat · 8mo
Thank you for the explanation!
MichaelStJules · 8mo
(I've made a bunch of edits to the following comment within 2 hours of posting it.)

If you're a consequentialist whose views are transitive and complete, and satisfy the independence of irrelevant alternatives, then the RC implies what I wrote (ignoring other effects and opportunity costs). The situation is not necessarily symmetrical in practice if you hold person-affecting views, which typically require the rejection of the independence of irrelevant alternatives.

I'd recommend the "wide, hard view" in The Asymmetry, Uncertainty, and the Long Term by Teruji Thomas [https://globalprioritiesinstitute.org/teruji-thomas-the-asymmetry-uncertainty-and-the-long-term/] as the view closest to common sense that satisfies the intuitions of my answer above (that I'm aware of), and the talk is somewhat accessible, although the paper can get pretty technical. This view allows future contingent good lives to make up for (but not outweigh) future contingent bad lives, but, as a "hard" view, not to make up for losses to "necessary" people, who would exist regardless. Because it's "wide", it "solves" the Nonidentity problem. The wide version would still reject the RC even if we're choosing between two disjoint contingent populations, I think because "excess" (in number) contingent people with good lives wouldn't count in this particular pairwise comparison. Another way to think about it would be like matching counterparts [https://forum.effectivealtruism.org/posts/AWGwNWnMiTxPDJY39/critical-summary-of-meacham-s-person-affecting-views-and#Saturating_Counterpart_Relations] across worlds, and then we can talk about sacrifices as the differences in welfare between an individual and their counterpart, although I'm not sure the view entails something equivalent to this.

My own views are much more asymmetric than the views in Thomas's work, and I lean towards negative utilitarianism, since I don't think future contingent good lives can make up for future contingent bad lives at all.
Pablo · 8mo
[ETA: You say you've made edits to your post, so it's possible some of my replies are addressed by your revisions. I am always responding to the text I'm quoting, which may differ from the final version of your comment.]

I don't have time to look into this right now, but I also feel that this probably won't provide an answer to the question I meant to ask. (Apologies if my wording was unclear.) Call the world with few, very happy people, A, and the world with lots of mildly happy people, Z. The question is, then, simply: "If bringing about Z sacrifices people in A, why doesn't bringing about A sacrifice people in Z?" You say that you'd be sacrificing someone "even if they would be far better off than the first person", which seems to commit you to the claim that you would indeed be sacrificing people in Z by bringing about A.

I don't understand how this answer explains why you are not treating the person as a value receptacle, given that you believe this is what the total utilitarian does in the Repugnant Conclusion. I can see why a negative utilitarian and/or a person-affecting theorist would treat these two cases differently. What I don't understand is why the difference is supposed to consist in people being treated as value receptacles in one case, but not in the other. This just seems to misdiagnose what's going on here. The comment you shared helps me understand the Asymmetry, but not your claim about value receptacles.

I agree that you can have people with lifetime wellbeing just above neutrality either because they live their entire lives at that level or because they have lots of ups and downs that almost perfectly cancel each other out (and anything in between). I think discussions of the Repugnant Conclusion sometimes make the stronger assumption that people's lives are continuously just above neutrality ("muzak and potatoes"), and that people may respond to the thought experiment differently depending on whether or not this assumption is made.
MichaelStJules · 8mo
(FWIW, I never downvoted your comments and have upvoted them instead, and I appreciate the engagement and thoughtful questions/pushback, since it helps me make my own views clearer. Since I spent several hours on this thread, I might not respond quickly or at all to further comments.)

Sorry, I tried to respond to that in an edit you must have missed, since I realized I didn't after posting my reply. In short, a wide person-affecting view means that Z would involve "sacrifice" and A would not, if both populations are completely disjoint and contingent, roughly because the people in A have worse-off "counterparts" in Z, and the excess positive-welfare people in Z without counterparts don't compensate for this. No one in Z is better off than anyone in A, so none are better off than their counterparts in A, so there can't be any sacrifice in a "wide" way in this direction. The Nonidentity problem would involve "sacrifice" in one way only, too, under a wide view.

(If all the people in Z already exist, and none of the people in A exist, then going from Z to A by killing everyone in Z could indeed mean "sacrificing" the people in Z for those in A, under some person-affecting views, and be bad under some such views. Under a narrow view (instead of a wide one), with disjoint contingent populations, we'd be indifferent between A and Z, or they'd be incomparable, and both or neither would involve "sacrifice".)

On value receptacles, here's a quote by Frick [https://onlinelibrary.wiley.com/doi/full/10.1111/phpe.12139] (on his website [https://scholar.princeton.edu/sites/default/files/jfrick/files/conditional_reasons_and_the_procreation_asymmetry_for_philosophical_perspectives.pdf]), from a paper in which he defends the procreation asymmetry:

I haven't thought much about this particular way of framing the receptacle objection, and what I have in mind is basically what Frick wrote later:

This is a bit vague: what do we mean by "conditional"? But there are plausible inte
Pablo · 8mo
Thanks for the detailed reply. For now, I will only address your comments at the end, since I haven't read the sources you cite and haven't thought about this much beyond what I wrote previously. (As a note of color, Johann and I did the BPhil together and used to meet every week for several hours to discuss philosophy, although he kept developing his views about population ethics after he moved to Harvard; you have rekindled my interest in reading his dissertation.) I mean that the intuitions triggered by the interpersonal and the intrapersonal cases feel very similar from the inside. For example, if I try to describe why the interpersonal case feels repugnant, I'm inclined to say stuff like "it feels like something would be missing" or "there's more to life than that"; and this is exactly what I would also say to describe why the intrapersonal case feels repugnant. How these two intuitions feel also makes me reasonably confident that fMRI scans of people presented with both cases would show very similar patterns of brain activity. I think that supposed difference is ruled out by the way the intrapersonal case is constructed. In any case, what I regard as the most interesting intrapersonal version is one where it is analogous to the interpersonal version in this respect. Of course, we can discuss a scenario of the sort you describe, but then I would no longer say that my intuitions about the two cases feel very similar, or that we can learn much by comparing the two cases. Makes sense. Thanks for the clarification.

Thanks, I appreciated reading this. I think you and I think about morality very differently, which means this doesn't update me very much, but it's still good to get a more emotional grasp of what people feel about these questions.

I'll try to help you understand why (I think) some people feel the sting of the repugnant conclusion (RC), but why I think they are ultimately wrong to do so. I should say that I personally don't find the repugnant conclusion repugnant so what I'm about to say might be completely missing the point. I am slightly stung by the "very repugnant conclusion", but that might be for another time.

In short, I think some people find RC repugnant based on a misunderstanding of what a life "barely worth living" would mean in practice. I think most people imagine such a life to be quite "bad" on the whole, but I think this is a mistake.

Note that the vast majority of people on earth want to continue living. This would include the vast majority of people who live in extreme poverty or who are undergoing horrific abuse. It would also include people who constantly consider suicide to end their pain but never go through with it. In normal parlance we would say these people live "bad" lives. However, we might conclude that these people are living lives worth living if they don't want their life to end / don't choose to end their life. So my guess is people imagine "a life barely worth living" to be a pretty "bad" one. The actual wording of "a life barely worth living" is inherently negative in how it is framed anyway. So RC would amount to a load of people with pretty "bad" lives by intuitive standards being better than a smaller number of people with absolutely amazing lives. Accepting RC would be like creating another Africa with all its poverty and hardship instead of creating another Norway with all its happiness. Or creating loads of people attending daily suicide support groups rather than a smaller number of people living the best lives we can imagine. Most people would find these repugnant things to do, and I personally would feel the sting here.

The problem with the above reasoning becomes clear when we think more carefully about "a life barely worth living". Firstly, to state what should be obvious, such a life is worth living by definition. So to be put off by the existence of such lives doesn't really make logical sense, unless you deny the theoretical existence of positive lives in the first place. This doesn't negate people's feeling of repugnance, but I think it should cause them to question it.

Where does this leave us with people attending daily suicide support groups? Well, my preferred way forward is to question whether these people do in fact have lives worth living, or at least to question whether we have any idea on the matter. As Dasgupta (2016) points out, the idea that someone who wants to continue living must be living a life of positive welfare ignores the badness of death. It is certainly possible for someone to be living a life of negative welfare, but be reluctant to end it because the subjective badness of death exceeds the badness of continuing to live. Death is indeed a horrible prospect for most when you consider factors such as: religious prohibition; fear of the process of dying; the thought that one would be betraying family and friends; the deep resistance to taking one's own life that has been built into us through selection pressure, which would cause even someone in deep misery to balk; and the revelation of one's misery to others, when one wants it to remain undisclosed even after death.

In light of this Dasgupta puts forward the "creation test" as a way to determine the zero-level of wellbeing. What is the worst life that you would willingly create? Dasgupta says that should be the zero level. Most altruists wouldn't create more people living in extreme poverty, or people with constant thoughts of suicide, implying these people probably live negative lives. I personally would only create a life that most of us would say is very good!

I'm not saying Dasgupta's creation test is perfect – I'm undecided on how useful it is. This paper argues that we have no sufficiently clear sense of what a minimally good life is like. If this is indeed true, as the paper argues, the RC loses its probative force, because we cannot judge lives "barely worth living" as being "bad" – we don't really have a clue.

So to sum up my rather lengthy response: I think that many people who think RC is repugnant assume that "lives barely worth living" are those we would say are "bad" in common parlance, which can lead to an understandable feeling of repugnance. I think they are wrong – either "lives barely worth living" are much better than "bad", in which case RC loses its repugnance, or we don't know how good "lives barely worth living" are, in which case RC doesn't get off the ground at all.

This is exactly my intuition. When I think about "lives barely worth living" I imagine someone who is constantly on the edge of suicide. Then I think, well, that seems really bad to me, but who am I to say that that person's life is not worth living? If I can't look that person in the eye and say, "your life is not worth living" (which I almost certainly can't do), then how can I say that my world of "lives barely worth living" is made up of people with better lives than them?

Your paraphrasing of Dasgupta's insights is helpful, and I think incorporating the negativity of death may alleviate some of my perceived Repugnancy of the aforementioned Conclusion.

Interesting suggestion! It sounds plausible that "barely worth living" might intuitively be mistaken for something more akin to 'so bad, they'd almost want to kill themselves, i.e. might well have even net-negative lives' (which I think would be a poignant way to say what you write).

While I appreciate you sharing your thoughts, I don't think replying to a post asking people to talk about why they dislike the repugnant conclusion with a lengthy argument about why those people are making a basic mistake is really going to help me achieve my goal here.

I don't want to litigate these intuitions here, I want to understand them. We can do the litigation elsewhere.

Jack Malde · 8mo
You say "I'd like to have a better understanding of the intuitions that lead people to see this as such a serious problem, and whether I'm missing something that might cause me to put more weight on these sorts of concerns", in which case I think my whole comment should be of relevance, and I am confused by your pushback – unless of course you are only interested in the opinions of people who find RC repugnant, in which case I apologise.
Will Bradshaw · 8mo
I am also interested in the intuitions of people who find the RC intuitively problematic, even if they ultimately feel it is less bad than the alternatives. I'm not interested (here) in arguments about why people who do take serious issue with the RC are wrong, and I think spending significant time on those here is actively counterproductive to what I'm trying to achieve. There's an intermediate case of "asking people who report being bothered by the RC pointed questions" – this is good insofar as it comes from sincere curiosity and helps uncover more information about those intuitions, and bad insofar as it (deliberately or accidentally) makes those people feel attacked or forced to defend themselves. You've been responding to several other answers here in the latter kind of way, and I wish you'd stop.
Jack Malde · 8mo
OK, it's your thread and I will leave, despite only good intentions. I'm very surprised to have had this pushback. If anyone I have responded to has felt attacked by me, I apologise. Below is the relevant text from my original comment. Feel free to ignore the rest of it.
Will Bradshaw · 8mo
Yep, I appreciated this part! I also agree that intuitions about the set point seem key here.

I find your attitude somewhat surprising. I'm much less sympathetic to trolley problems or utility monsters than to the repugnant conclusion. I can see why some people aren't moved by it, but I have a hard time seeing how someone couldn't get what is moving about it. Since it is a rather basic intuition, it's not super easy to pump. But I wonder, what do you think about this alternative, which seems to draw on similar intuitions for me:

Suppose that you could right now, at this moment, choose between continuing to live your life, with all its ups and downs and complexity, or going into a state of near-total suspended animation. In the state of suspended animation, you will have no thoughts and no feelings, except you will have a sensation of sucking on a rather disappointing but not altogether bad cough drop. You won't be able to meditate on your existence, or focus on the different aspects of the flavor. You won't feel pain or boredom. Just the cough drop. If you continue your life, you'll die in 40 years. If you go into the state of suspended animation, it will last for 40,000 years (or 500,000, or 20 million, whatever number it takes). Is it totally obvious that the right thing to do is to opt for the suspended animation (at least, from a selfish perspective)?

Thanks for trying to come up with a thought experiment that targets your intuitions here! That's exactly what I was hoping people would do.

For me, this thought experiment feels like it raises more "value of complexity" questions than the canonical RC. Though from the comments it seems like complexity vs homogeneity intuitions are contributing to quite a few people's anti-RC feelings, so it's not bad to have a thought experiment that targets that.

In any case, I think there probably is a sufficiently large number of years at which I would take the cough drop...

By the way I apologise for implying you should "remove" something from your comment which I didn't literally mean. What I should have said is I think the words led to an unhelpful characterisation of the life being lived in the thought experiment. The OP doesn't appreciate my contributions so I am going to leave this post.

In the state of suspended animation, you will have no thoughts and no feelings, except you will have a sensation of sucking on a rather disappointing but not altogether bad cough drop.

Firstly, remove the words "rather disappointing". Remember, there is nothing bad in this world, and terms like that don't help people put themselves in the situation.

You won't feel pain or boredom.

I for one find this very difficult to imagine, and perhaps counterproductive to the RC. A Buddhist might say not feeling pain or boredom is akin to living an enlightened life, which is ...

Derek Shiller · 8mo
There is a challenge here in making the thought experiment specific, conceivable, and still compelling for the majority of people. I think a marginally positive experience like sucking on a cough drop is easy to imagine (even if it is hard to really picture doing it for 40,000 years) and intuitively just slightly better than non-existence minute by minute. Someone might disagree. There are some who think that existence is intrinsically valuable, so simply having no negative experiences might be enough to have a life well worth living. But it is hard to paint a clear picture of a life that is definitely barely worth living and involves some mix of ups and downs, because you then have to make sure that the ups and downs balance each other out, and this is more difficult to imagine and harder to gauge.

The term 'repugnant' is unfortunate; I think it's best to focus on whether there's anything morally problematic or deficient about such a world, irrespective of whether it elicits emotions of moral repugnance.

Personally, when I reflect on a universe that only contains experiences of "muzak and potatoes", I feel there's something missing from it, no matter how many such experiences it contains. I'm still willing to bite the bullet and conclude that my feeling is non-veridical, but I do experience the feeling.

One can also consider the parallel situation at the intrapersonal level. Parfit asks us to compare a "Century of Ecstasy" with a "Drab Eternity". I definitely feel the appeal of the former, even if, on reflection, I'd probably opt for the latter. (Though note that Parfit's wording here is also tendentious; a better name for the second option would be a "Mildly Pleasant Eternity".)

I'm not sure I can describe this feeling more clearly or accurately, though, so this isn't really an answer to your question.


Adding another answer, although I think it's basically pretty similar to my first.

I can imagine myself behind a veil of ignorance, comparing the two populations, even on a small scale, e.g. 2 vs 3 people. In the smaller population with higher average welfare, compared to the larger one with lower average welfare, I imagine myself either

  1. as having higher welfare and finding that better, or
  2. never existing at all and not caring about that fact, because I wouldn't be around to ever care.

So, overall, the smaller population seems better.

 

I can make it more concrete, too: optimal family size. A small-scale RC could imply that the optimal family size is larger than the parents and older siblings would prefer (ignoring indirect concerns), and so the parents should have another child even if it means they and their existing children would be worse off and would regret it. That seems wrong to me, because if those extra children are not born, they won't be wronged/worse off, but others will be worse off than otherwise.

In the long run, everyone would become contingent people, too, but then you can apply the same kind of veil of ignorance intuition pump. People can still think a world where family sizes are smaller would have been better, even if they know they wouldn't have personally existed, since they imagine themselves either

  1. as someone else (a "counterpart") in that other world, and being better off, or
  2. not existing at all (as an "extra" person) in their own world, which doesn't bother them, since they wouldn't have ever been around in the other world to be bothered.

Naively, at least, this seems to have illiberal implications for contraceptives, abortion, etc.

 

There's also an average utilitarian veil of ignorance intuition pump: imagine yourself as a random person in each of the possible worlds, and notice that your welfare would be higher in expectation in the world with fewer people, and that seems better. (I personally distrust this intuition pump, since average utilitarianism has other implications that seem very wrong to me.)

Thanks. We of course run here into the standard total-vs-person-affecting dispute, namely that I would prefer to exist with positive welfare than not exist, and all this "not around to care" stuff feels like a very odd way to compare scenarios to me.

It depends on the formulation. I don't find Parfit's version of the RC, where the people with muzak-and-potatoes lives "never suffer," repugnant. But according to total (symmetric) utilitarianism, that RC is morally equivalent to another version, which I find highly repugnant. Imagine (A) as large and blissful a utopia as you like. Now imagine (Z) a world where many more people than in this utopia each have the following life: for a million years, they endure constant, unbearable torture. After that, they eat potatoes and listen to muzak peacefully for a sufficiently large number of years.

I just don't see how the latter experiences, no matter how many of them, could be considered morally significant in a way that outweighs the torture. You can chalk this up to scope neglect if you want, but (1) my intuitions are definitely not scope-neglectful when comparing suffering to suffering, and (2) I have the same intuition about milder cases where the amount of happiness a classical utilitarian would (probably) accept as outweighing is practically imaginable, e.g. each person is born, experiences 1 day of depression, then eats potatoes for a normal human lifespan (~30,000 days).

I have serious doubts about inter-personal trade-offs.
https://www.mattball.org/2021/12/note-and-more-on-ethics-including-case.html
which follows
https://www.mattball.org/2021/12/ethics-is-not-simple-math-problem.html
 

To answer on the level of imagery and associations rather than trying to make a strong philosophical argument: The Repugnant Conclusion makes me think of the dire misery of extremely poor places, like Haiti or Congo. People in extreme poverty are often malnourished, they have to put up with health problems and live in terrible conditions. On top of all those miseries, they have to get through it all with very limited education / access to information, and very limited freedom / agency in life. (But I agree with jackmalde that their lives are nevertheless worth living vs nonexistence -- I would still prefer to live if I was in their situation.)

Compared to an Earth with 10 Billion people living at developed-world standards, it just seems crazy to me that anyone would prefer a world with, say, 1 Trillion people eking out their lives in a trash-strewn Malthusian wasteland. The latter seems like a static world with no variety and no future, without the slack necessary for individuals to appreciate life or for civilization as a whole to grow, explore, learn, and change.

This image leads to various wacky political objections, which are not philosophically relevant since nobody said the Repugnant Conclusion was supposed to apply to the actual situation of Earth in 2021 (as opposed to, say, a hypothetical comparison between 10 Billion rich people vs 3^^^3 lives barely worth living). But emotionally and ideologically, the Repugnant Conclusion brings to mind appropriately aversive images like:

  • That EA should pivot away from interventions like GiveDirectly or curing diseases, and instead become all about boosting birthrates in whatever way possible. (New cause area: "Family Disempowerment Media"?)
  • That things like the invention of the birth control pill and the broader transition away from strict pro-fertility hierarchical gender norms (starting in the industrial revolution) were some of the worst events in history.
  • That almost all human values (art, love, etc) should be sacrificed in favor of supporting a higher total carrying capacity of optimized pure replicators, a la the essay "Meditations on Moloch".

So, in the practical world, the idea that humanity should aim to max out the Earth's carrying capacity without regard to quality of life seems insane, and the Repugnant Conclusion will therefore always seem like a bizarre idea totally opposed to ordinary moral reasoning, even if it's technically correct when you use sufficiently big numbers.

Separately from all the above, I also feel that there would be an extreme "samey-ness" to all of these barely-worth-living lives. It seems farfetched to me that you are still adding moral value linearly when you create the quadrillionth person to complete your low-quality-of-life population -- how could their repetitive overlapping experiences match up to the richness and diversity of qualia experienced by a smaller crew of less-deprived humans?

Thanks, this is one of my favourite responses here. I appreciated your sharing your mental imagery and listing out some consequences of that imagery. I think I am more inclined than you to say that many people alive today have lives not worth living, but you address confusion about that point in another comment. And while I'm more pro-hedonium than you I also wonder about "tiling" issues.

Do your intuitions about this stay consistent if you reverse the ordering? That is, as I think another comment on this post said elsewhere, if you start with a large population of just-barely-happy people, and then replace them with a much smaller population of very happy people, does that seem like a good trade to you?

Jackson Wagner · 8mo
Yes, my intuition stays the same if the ordering is reversed; population A seems better than population Z and that's that. (For instance, if the population of an isolated valley had grown so much, and people had subdivided their farmland, to the point that each plot of land was barely enough for subsistence and the people regularly suffered conflict and famine, in most situations I would think it good if those people voluntarily made a cultural change towards having fewer children, such that over a few generations the population would reduce to say 1/3 the original level, and everyone had enough buffer that they could live in peace with plenty to eat and live much happier lives. Of course I would have trouble "wishing people into nonexistence" depending on how much the metaphysical operation seemed to resemble snuffing out an existing life... I would always be inclined to let people live out their existing lives.)

Furthermore, I could even be tempted into accepting a trade of Population A (whose lives are already quite good, much better than barely-worth-living) for a utility-monster style even-smaller population of extremely good lives.

But at this point I should clarify that although I might be a utilitarian, I am not a "hedonic" utilitarian and I find it weird that people are always talking about positive emotional valence of experience rather than a more complex basket of values. I already mentioned how I value diversity of experience. I also highly value something like intelligence or "developedness of consciousness":

  • It seems silly to me that the ultimate goal becomes Superhappy states of incredible joy and ecstasy. Perhaps this is a failure of my imagination, since I am incapable of really picturing just how good Superhappy states would be. Or perhaps I have cultural blinders that try to ward me off of wireheading (via drug addiction, etc) by indoctrinating me to believe statements like "life isn't all about happiness; being connected to r

"The latter seems like a static world with no variety and no future, without the slack necessary for individuals to appreciate life or for civilization as a whole to grow, explore, learn, and change."

If you're a total utilitarian, you don't care about these things except insofar as they serve as tools for utility. By the structure of the repugnant conclusion, there is no amount of appreciating life that will make the total utility in the smaller world greater than the total utility in the bigger world.
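To make that arithmetic explicit (a minimal sketch; the symbols N, Q, M, and ε are illustrative, not from the thread):

```latex
% Total utilitarianism behind the Repugnant Conclusion:
% N lives at high quality Q, versus M lives at marginal quality \varepsilon > 0.
V(\mathrm{A}) = N Q, \qquad V(\mathrm{Z}) = M \varepsilon .
% For any fixed N and Q, and any \varepsilon > 0 however small,
% the larger world wins once its population is big enough:
M \varepsilon > N Q \quad \Longleftrightarrow \quad M > \frac{N Q}{\varepsilon} .
```

The point is that the comparison is linear in M, so no finite amount of extra richness in the smaller world's lives can put it out of reach.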

Jackson Wagner, 8mo
Certainly. Some of those values I mentioned might be counted as direct forms of utility, and some might be counted as necessary means to the end of greater total utility later. And the repugnant conclusion can always win by turning up the numbers a bit and making Population Z's lives pretty decent compared to the smaller Population A.

Partially I am just trying to describe the imagery that occurs to me when I look at the "population A vs population Z" diagram. I guess I am also using the repugnant conclusion to point out a complaint I have against varieties of utilitarianism that endorse stuff like "tiling the universe with rats on heroin".

To me, once you start talking about very large populations, diversity of experiences is just as crucial as positive valence. That's because without lots of diversity I start doubting that you can add up all the positive valence without double-counting. For example, if you showed me a planet filled with one million supercomputers all running the exact same emulation of a particular human mind thinking a happy thought, I would be inclined to say, "that's more like one happy person than like a million happy people".
Charlie_Guthmann, 8mo
I have the same feeling. I have an aversion to utility tiling as you describe it, but I can't exactly pinpoint why, other than that I guess I am not a utilitarian. As consequentialists, perhaps we should focus more on the ends themselves, i.e. aesthetically how much we like the look of potential future universes, rather than looking at the expected utility of said universes. E.g. Star Wars is prettier to me than an expansive VN probe network, so I should prefer that. Of course this is just rejecting utilitarianism again.

(But I agree with JackM that their lives are nevertheless worth living vs nonexistence -- I would still prefer to live if I was in their situation.)

You have misunderstood my comment. Perhaps I have not been clear enough. Feel free to have another read and I would be happy to answer any questions.

Jackson Wagner, 8mo
Yes, I guess it would've been more accurate to say "I'm one of those confused people jackmalde was referring to, who intellectually thinks that very deprived lives are still worth living but nevertheless feels uncomfortable and conflicted about the obvious logical implications of that." Potential sources of this conflictedness:

* Maybe my mental picture of a deprived but barely worth-it life is cartoonishly exaggerated in its badness. Poor people that I have met IRL in rural India did not have the best lives, but most of them were basically happy, such that it does seem like a moral boon rather than repugnant to imagine creating trillions of others like them.
* Maybe I am still having difficulty extricating myself from practical/political concerns. In the real world, if a new continent magically appeared full of many new barely-worth-living people, we would feel morally obligated to share with them and help improve their lives. This is a good instinct which is at the core of EA itself, but the inevitability of this empathetic response does mean that the appearance of new large barely-worth-it populations seems like a threat to the ongoing wellbeing of Population A. But of course in the thought experiment the populations are totally separate.
* I am definitely (and understandably) uncertain about how to figure what kind of life is barely worth living. I am strongly anti-death to a greater extent than you are in your comment, but even I would not endorse things like "tortured forever" as being necessarily better than nothing, so I do want to set a threshold somewhere. (But again maybe this is just political concerns and my own personal spoiledness?? If I was God deciding whether to create the universe, and it was either going to be torture-hell or no universes whatsoever, maybe I'd create hell rather than there being nothing at all. But if I got to create a normal happy universe first, I'd

For me (currently with minimalist intuitions), the repugnance depends on whether the lives in the larger population are assumed to never suffer (cf. this section). Judging from the different answers here, people seem to indeed have wildly different interpretations about what those lives feel like.

At one extreme, they could contain absolutely no craving for change and be simply lacking in additional bliss; at the other, they could be roller coaster lives in which extreme craving is assumed to be slightly positively counterbalanced by some of their other moments.

As a practical example, I deny that factory farms could be net positive (all else being equal) regardless of how much bliss the victims could be induced to experience.

A world which supports the maximum number of people has no slack. I instinctively shy away from wanting to be in a world with resource limits that tight.

I think the point of the RC is to assume away these kinds of practical contingencies - suppose you know for certain that the muzak-and-potatoes lives would never drop into the territory of more suffering than happiness.

(Another answer...)

In humans, fertility rates have been declining while average quality of life has been increasing. Considering only human life until now, the RC might suggest things would have been better had fertility rates and average quality of life remained constant, since we'd have far more people with lives worth living. It can undermine the story of human progress, and suggest past trajectories would have been better.

We could also ask whether lifting people out of poverty is good, in case it would lead to lower populations. In general, as incomes increase, people have more access to contraceptives and other family planning services, even if we aren't directly funding such things. (Life-saving interventions would likely not lead to lower populations than otherwise, and would likely lead to higher ones at least in some places, according to research by David Roodman for GiveWell (GiveWell blog post).)

(Chart from https://ourworldindata.org/future-population-growth)

https://en.wikipedia.org/wiki/List_of_countries_by_population_growth_rate

https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependencies_by_total_fertility_rate

A slightly tongue-in-cheek response: the thought experiment is often introduced by name, and calling it 'repugnant' is priming people to consider it bad, in a way that 'the trolley problem' does not.

I suggest the following thought experiment. Imagine wild animal suffering can be solved. Then it would be possible to populate a square mile with millions of happy insects instead of a few happy human beings. If the Repugnant Conclusion were true, the best world would be populated with as many insects as possible and only a few human beings who ensure that there is no wild animal suffering.

Even more radical, the best thing to do would be to fill as much of the future light cone as possible with hedonium. Both scenarios do not match the moral intuitions of most people.

If you believe the opposite, namely that a world with fewer individuals possessing higher cognitive functions is more worthy, you may arrive at the conclusion that a world populated with a few planet-sized AIs is best.

As other people have said, all kinds of population ethics lead to some counter-intuitive conclusions. The most conservative solution is to aim for outcomes that are not bad according to many ethical theories. 

In the maximally repugnant world, no one's life is all that good. I feel the sting of that. It's hard for me to get excited about a world in which all of the people I know personally have barely-net-positive lives full of suffering and struggle, even if that world contains more people.

The Wikipedia page you linked gives a pretty not-upsetting version of the paradox: 

From Wikipedia, the four situations, A, A+, B-, and B of the Mere Addition Paradox, illustrated as bars of different widths and heights with "water" between (in the case of A+ and B-), following Parfit's book Reasons and Persons, chapter 19.

whereas the thing that people find repugnant looks more like:
 

From the Stanford Encyclopedia page on the repugnant conclusion. 


I accept the conclusion, but it feels like I am biting a bullet when I say that World Z is worth fighting for.

It's hard for me to get excited about a world in which all of the people I know personally have barely-net-positive lives full of suffering and struggle, even if that world contains more people.

I'd imagine they must have lots of brilliant and amazing experiences to make up for the suffering, in order to leave them at a net-positive life.

Tessa, 8mo
Is this necessary? I feel like many people judge their lives as worth living even though their day-to-day experiences contain mostly pain. I wonder if we're imagining different definitions for "barely-net-positive". Maybe you mean "adding up the magnitude of moment-to-moment negative or positive qualia over someone's entire life" (hedonistic utilitarianism) whereas I am usually imagining something more like "on reflection, the person judges their life as worth living" (kinda preference utilitarian).
Jonathan Mustin, 8mo
My sense is that people choose to weather currently-net-negative lives for at least two reasons that they might endorse on reflection:

1. The negative parts of their life may be solvable, such that the EV of their future is plausibly positive.
2. Ending their life has a few terrible externalities, e.g. the impact it would have on their close loved ones.

Eliminating those considerations, I would expect the bar for World Z to be much better than the worst lives people reflectively consider worth living today.

Might a big portion of status-quo bias and/or omission bias (both with similar effect here) also be at play, helping to explain the typical classification of the conclusion as repugnant?

I suspect this might be the case when I ask myself whether many people who classify the conclusion as repugnant would not also have classified the 'opposite' conclusion as just as repugnant, had they instead been offered the same experiment 'the other way round':

Start with a world counting huge numbers of lives worth-living-even-if-barely-so, and propose to destroy them all for the sake of making a very few really rich and happy (with the nuance that the net happiness of the rich few is slightly larger than the sum of the others'). It is just a gut feeling, but I'd guess this would very often evoke similar feelings of repugnance (maybe even more so than in the original RC experiment)! A sort of Repugnant Conclusion 2.

I think the killing would probably explain the intuitive repugnance of RC2 most of the time, though.

Florian Habermacher, 8mo
Fair point, though my personal feeling is that it would be the same even without the killing (even if the killing alone would indeed suffice too). We can amend the RC2 attempt to avoid the killing: start with the world containing the seeds for huge numbers of lives worth-living-even-if-barely-so, and propose to destroy that world for the sake of creating a world for a very few really rich and happy (with the nuance that the net happiness of the rich few is slightly larger than the sum of the others'). My gut feeling does not change about this RC2 still feeling repugnant to many, though I admit I'm less sure and might also be biased now, as in not wanting to feel different, oops.
Will Bradshaw, 8mo
I moderately agree, but I do think there is commonly an ordering effect here, arising both from the phrasing of the RC and the way people often discuss it.

There was a somewhat unusual short philosophical paper this year, signed by many philosophers, which argued that avoidance of the Repugnant Conclusion should not be seen as a necessary condition for an adequate population ethics. I guess it's driven by a concern similar to yours here: the Repugnant Conclusion is much less obviously repugnant than its name makes it seem.

9 comments

One approach I was expecting someone to try here, but haven't seen, is trying to motivate the intuition at a smaller scale – e.g. comparing a small number of very happy people to a large-but-easily-imaginable number of slightly happy people.

If the intuitions underlying aversion to the Repugnant Conclusion only kick in for extremely large populations, then I'm more confidently inclined to say they are a mistake arising from an inability to imagine at that scale. But given that the original argument for the RC works by iterating many small steps, it seems like the issues that make people averse to it should start to kick in much sooner. But most commenters here have focused entirely on the vast-population case.

I thought my first answer already did what you're asking for, and it has (right now) the most upvotes, which may reflect endorsement. Are you looking for something more concrete or that isn't tied to people who would exist anyway being worse off? I added another answer.

The ways to avoid the RC, AFAIK, should fall under at least one of the following, and so intuitions/thought experiments should match:

  1. Have some kind of threshold (a critical level, a sufficientarian threshold or a lexical threshold), such that marginally good lives fall below it while the very good lives are above it. It could be a "vague" threshold.
  2. Non-additive aggregation (aggregating in some other way, e.g. with decreasing marginal returns to additional people, average utilitarianism, maximin or softer versions like rank-discounted utilitarianism which strongly prioritize the worst off, or views like geometrism which strongly prioritize better lives).
  3. Person-affecting.
  4. Carry in other assumptions/values and appeal to them, e.g. more overall bad in the larger population.

See also:

https://plato.stanford.edu/entries/repugnant-conclusion/#EigWayDeaRepCon
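As a sketch of how a threshold view (option 1 above) blocks the conclusion, assuming a simple critical-level rule with an illustrative critical level c (the symbols here are my own, not from the thread):

```latex
% Critical-level utilitarianism: each life contributes its welfare minus c.
V = \sum_i (u_i - c) .
% Compare N very good lives at quality Q with M marginal lives at \varepsilon,
% where 0 < \varepsilon < c < Q:
V(\mathrm{A}) = N (Q - c) > 0 > M (\varepsilon - c) = V(\mathrm{Z}) ,
% so no number M of marginally good lives can beat the smaller population:
% each marginal life now counts against the total rather than for it.
```

This is only a sketch of the mechanism; such views famously buy this result at the cost of other counterintuitive implications elsewhere.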

This is a fair point. For what it's worth, I do honestly think a world of 10 people with utopian lives (of normal length) is better than a world with 10 billion people with lives like the ones I described in my answer. I guess it depends on the details of "utopian" - seems plausible that for me and many others to endorse this claim, such lives need not be so imaginably awesome that a classical utilitarian would agree the total utility of the 10 billion population world is worse.

Do you also find the Reverse Repugnant Conclusion to be straightforwardly and unobjectionably true? (This would help tailor an intuition pump that gets at the repugnance)

I think any scenario that involves hypothetical vast populations in a very simple abstract universe isn't going to change my views here. I can't actually imagine that scenario (a flaw with many thought experiments), so I'm forced to fall back on small-scale intuitions + intellectual beliefs. The latter say such a thing would be the right thing to do, given a sufficiently large blissful population and all the caveats and restrictions that always apply in these thought experiments. 

I think trying to convince the former might be more tractable, but big abstract thought experiments like this don't do that, because they are so unimaginable and unrealistic. That's (one framing of) why I'm looking for something less abstract. This is what I was trying to get at in the OP, though I accept I wasn't super clear about what exactly I was & wasn't looking for.

I thought the OP was clear. Sorry that most of the answers, including mine, do not actually answer your question.

Given what you say, maybe the reason you don't find the Repugnant Conclusion counterintuitive is that you have already internalized that you can't adequately represent the thought experiment in imagination, so your brain doesn't generate the relevant intuitions in the first place. Whereas I personally agree, on reflection, that my internal representation of the thought experiment is inadequate, but this doesn't prevent me from feeling the intuitive appeal of the less populous world. This might also explain why you do feel the sting of trolley problems, which generally involve small numbers of people. (However, you also say that you find utility monsters counterintuitive, which would be inconsistent with this explanation. Interestingly, in Reasons and Persons Parfit dismisses the force of Nozick's thought experiment on the grounds that it's impossible to properly imagine a utility monster. But he doesn't take this same approach for dealing with the Repugnant Conclusion.)

Yeah, I do think that "I can't actually realistically represent this scenario in my imagination, and if I try I'll just deceive myself, so I won't" has become a pretty deep intuition for me over the years.

I think it's more thoroughly internalised for scenarios that are unimaginably large (many people, very long stretches of time) than scenarios that are small but weird. Possibly because the intuition for size has been trained by a lot of real-world experiences – I don't think a human can really imagine even a million people, so there are many real-world cases where the correct response is to back off from visual imagination and shut up and multiply.

Utility monsters (and the Fat Man trolley problem variant) are small but weird, so it's more difficult for me to accept that my intuitive imagination of the scenario is likely to be misleading. I've seen fictional representations of utility monsters, and in general when I try to imagine a single sentient being it's difficult not to imagine something like a human. So even though I believe that a real utility monster would in fact be a profoundly alien and hard-to-imagine being, when I think about the scenario my brain conjures up a human tyrant and it seems really bad.

Whereas for the RC my brain sees the words "unimaginably vast" and decides not to try and imagine.