
In What We Owe the Future, William MacAskill delves into population ethics in a chapter titled “Is It Good to Make Happy People?” (Chapter 8). As he writes at the outset of the chapter, our views on population ethics matter greatly for our priorities, and hence it is important that we reflect on the key questions of population ethics. Yet it seems to me that the book skips over some of the most fundamental and most action-guiding of these questions. In particular, the book does not broach questions concerning whether any purported goods can outweigh extreme suffering — and, more generally, whether happy lives can outweigh miserable lives — even as these questions are all-important for our priorities.

The Asymmetry in population ethics

A prominent position that gets a very short treatment in the book is the Asymmetry in population ethics (roughly: bringing a miserable life into the world has negative value while bringing a happy life into the world does not have positive value — except potentially through its instrumental effects and positive roles).

The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172):

If we think it’s bad to bring into existence a life of suffering, why should we not think that it’s good to bring into existence a flourishing life? I think any argument for the first claim would also be a good argument for the second.

This claim about “any argument” seems unduly strong and general. Specifically, there are many arguments that support the intrinsic badness of bringing a miserable life into existence that do not support any intrinsic goodness of bringing a flourishing life into existence. Indeed, many arguments support the former while positively denying the latter.

One such argument is that the presence of suffering is bad and morally worth preventing while the absence of pleasure is not bad and not a problem, and hence not morally worth “fixing” in a symmetric way (provided that no existing beings are deprived of that pleasure).[1]

A related class of arguments in favor of an asymmetry in population ethics is based on theories of wellbeing that understand happiness as the absence of cravings, preference frustrations, or other bothersome features. According to such views, states of untroubled contentment are just as good as — and perhaps even better than — states of intense pleasure.[2]

These views of wellbeing likewise support the badness of creating miserable lives, yet they do not support any supposed goodness of creating happy lives. On these views, intrinsically positive lives do not exist, although relationally positive lives do.

Another point that MacAskill raises against the Asymmetry is an example of happy children who already exist, about which he writes (p. 172): 

if I imagine this happiness continuing into their futures—if I imagine they each live a rewarding life, full of love and accomplishment—and ask myself, “Is the world at least a little better because of their existence, even ignoring their effects on others?” it becomes quite intuitive to me that the answer is yes.

However, there is a potential ambiguity in this example. The term “existence” may here be understood to mean either “de novo existence” or “continued existence”, and interpreting it as the latter is made more tempting by the fact that 1) we are talking about already existing beings, and 2) the example mentions their happiness “continuing into their futures”.[3]

This is relevant because many proponents of the Asymmetry argue that there is an important distinction between the potential value of continued existence (or the badness of discontinued existence) and the potential value of bringing a new life into existence.

Thus, many views that support the Asymmetry will agree that the happiness of these children “continuing into their futures” makes the world better, or less bad, than it otherwise would be (compared to a world in which their existing interests and preferences are thwarted). But these views still imply that the de novo creation (and eventual satisfaction) of these interests and preferences does not make the world better than it otherwise would be, had they not been created in the first place. (Some sources that discuss or defend these views include Singer, 1980; Benatar, 1997; 2006; Fehige, 1998; Anonymous, 2015; St. Jules, 2019; Frick, 2020.)

A proponent of the Asymmetry may therefore argue that the example above carries little force against the Asymmetry, as opposed to merely supporting the badness of preference frustrations and other deprivations for already existing beings.[4]

Questions about outweighing

Even if one thinks that it is good to create more happiness and new happy lives, all else equal, this still leaves open the question of whether happiness and happy lives can outweigh suffering and miserable lives, let alone extreme suffering and extremely bad lives. After all, one may think that more happiness is good while still maintaining that happiness cannot outweigh intense suffering or very bad lives — or even that it cannot outweigh the worst elements found in relatively good lives. In other words, one may hold that the value of happiness and the disvalue of suffering are in some sense orthogonal (cf. Wolf, 1996; 1997; 2004).

As mentioned above, these questions regarding tradeoffs and outweighing are not raised in MacAskill’s discussion of population ethics, despite their supreme practical significance.[5] One way to appreciate this practical significance is by considering a future in which a relatively small — yet in absolute terms vast — minority of beings live lives of extreme and unrelenting suffering. This scenario raises what I have elsewhere (sec. 14.3) called the “Astronomical Atrocity Problem”: can the extreme and incessant suffering of, say, trillions of beings be outweighed by any amount of purported goods? (See also this short excerpt from Vinding, 2018.)

After all, an extremely large future civilization would, in expectation, contain vast amounts of extreme suffering in absolute terms, which renders this problem frightfully relevant for our priorities.

MacAskill’s chapter does discuss the Repugnant Conclusion at some length, yet the Repugnant Conclusion does not explicitly involve any tradeoffs between happiness and suffering,[6] and hence it has limited relevance compared to, for example, the Very Repugnant Conclusion (roughly: that arbitrarily many hellish lives can be “compensated for” by a sufficiently vast number of lives that are “barely worth living”).[7]

Indeed, the Very Repugnant Conclusion and similar such “offsetting conclusions” would seem more relevant to discuss both because 1) they do explicitly involve tradeoffs between happiness and suffering, or between happy lives and miserable lives, and because 2) MacAskill himself has stated that he considers the Very Repugnant Conclusion to be the strongest objection against his favored view, and stronger objections generally seem more worth discussing than do weaker ones.[8]

MacAskill briefly summarizes a study that surveyed people’s views on population ethics. Among other things, he writes the following about the findings of the study (p. 173):

these judgments [about the respective value of creating happy lives and unhappy lives] were symmetrical: the experimental subjects were just as positive about the idea of bringing into existence a new happy person as they were negative about the idea of bringing into existence a new unhappy person.

While this summary seems accurate if we only focus on people’s responses to one specific question in the survey (cf. Caviola et al., 2022, p. 9), there are nevertheless many findings in the study that suggest that people generally do endorse significant asymmetries in population ethics.

Specifically, the study found that people on average believed that considerably more happiness than suffering is needed to render a population or an individual life worthwhile, even when the happiness and suffering were said to be equally intense (Caviola et al., 2022, p. 8). The study likewise found that participants on average believed that the ratio of happy to unhappy people in a population must be at least 3-to-1 for its existence to be better than its non-existence (Caviola et al., 2022, p. 5).

Another relevant finding is that people generally have a significantly stronger preference for smaller over larger unhappy populations than they do for larger over smaller happy populations, and the magnitude of this difference becomes greater as the populations under consideration become larger (Caviola et al., 2022, pp. 12-13).

In other words, people’s preference for smaller unhappy populations becomes stronger as population size increases, whereas the preference for larger happy populations becomes less strong as population size increases, in effect creating a strong asymmetry in cases involving large populations (e.g. above one billion individuals). This finding seems particularly relevant when discussing laypeople’s views of population ethics in a context that is primarily concerned with the value of potentially vast future populations.[9]

Moreover, a pilot study conducted by the same researchers suggested that the framing of the question plays a major role in people’s intuitions (Caviola et al., 2022, “Supplementary Materials”). In particular, the pilot study (n=172) asked people the following question:

Suppose you could push a button that created a new world with X people who are generally happy and 10 people who generally suffer. How high would X have to be for you to push the button?

When the question was framed in these terms, i.e. in terms of creating a new world, people’s intuitions were radically more asymmetric, as the median ratio then jumped to 100-to-1 happy to unhappy people, which is a rather pronounced asymmetry.[10]

In sum, it seems that the study that MacAskill cites above, when taken as a whole, mostly finds that people on average do endorse significant asymmetries in population ethics. I think this documented level of support for asymmetries would have been worth mentioning.

(Other surveys that suggest that people on average affirm a considerable asymmetry in the value of happiness vs. suffering and good vs. bad lives include the Future of Life Institute’s Superintelligence survey (n=14,866) and Tomasik, 2015 (n=99).)

The discussion of moral uncertainty excludes asymmetric views

Toward the end of the chapter, MacAskill briefly turns to moral uncertainty, and he ends his discussion of the subject on the following note (p. 187):

My colleagues Toby Ord and Hilary Greaves have found that this approach to reasoning under moral uncertainty can be extended to a range of theories of population ethics, including those that try to capture the intuition of neutrality. When you are uncertain about all of these theories, you still end up with a low but positive critical level [of wellbeing above which it is a net benefit for a new being to be created for their own sake].

Yet the analysis in question appears to wholly ignore asymmetric views in population ethics. If one gives significant weight to asymmetric views — not to mention stronger minimalist views in population ethics — the conclusion of the moral uncertainty framework is likely to change substantially, perhaps so much so that the creation of new lives is generally not a benefit for the created beings themselves (although it could still be a net benefit for others and for the world as a whole, given the positive roles of those new lives).

Similarly, even if the creation of unusually happy lives would be regarded as a benefit from a moral uncertainty perspective that gives considerable weight to asymmetric views, this benefit may still not be sufficient to counterbalance extremely bad lives,[11] which are granted unique weight by many plausible axiological and moral views (cf. Mayerfeld, 1999, pp. 114-116; Vinding, 2020, ch. 6).[12]

References

Ajantaival, T. (2021/2022). Minimalist axiologies. Ungated

Anonymous. (2015). Negative Utilitarianism FAQ. Ungated

Benatar, D. (1997). Why It Is Better Never to Come into Existence. American Philosophical Quarterly, 34(3), pp. 345-355. Ungated

Benatar, D. (2006). Better Never to Have Been: The Harm of Coming into Existence. Oxford University Press.

Caviola, L. et al. (2022). Population ethical intuitions. Cognition, 218, 104941. Ungated; Supplementary Materials

Contestabile, B. (2022). Is There a Prevalence of Suffering? An Empirical Study on the Human Condition. Ungated

DiGiovanni, A. (2021). A longtermist critique of “The expected value of extinction risk reduction is positive”. Ungated

Fehige, C. (1998). A pareto principle for possible people. In Fehige, C. & Wessels U. (eds.), Preferences. Walter de Gruyter. Ungated

Frick, J. (2020). Conditional Reasons and the Procreation Asymmetry. Philosophical Perspectives, 34(1), pp. 53-87. Ungated

Future of Life Institute. (2017). Superintelligence survey. Ungated

Gloor, L. (2016). The Case for Suffering-Focused Ethics. Ungated

Gloor, L. (2017). Tranquilism. Ungated

Hurka, T. (1983). Value and Population Size. Ethics, 93, pp. 496-507.

James, W. (1901). Letter on happiness to Miss Frances R. Morse. In Letters of William James, Vol. 2 (1920). Atlantic Monthly Press.

Knutsson, S. (2019). Epicurean ideas about pleasure, pain, good and bad. Ungated

MacAskill, W. (2022). What We Owe The Future. Basic Books.

Mayerfeld, J. (1999). Suffering and Moral Responsibility. Oxford University Press.

Parfit, D. (1984). Reasons and Persons. Oxford University Press.

Sherman, T. (2017). Epicureanism: An Ancient Guide to Modern Wellbeing. MPhil dissertation, University of Exeter. Ungated

Singer, P. (1980). Right to Life? Ungated

St. Jules, M. (2019). Defending the Procreation Asymmetry with Conditional Interests. Ungated

Tomasik, B. (2015). A Small Mechanical Turk Survey on Ethics and Animal Welfare. Ungated

Tsouna, V. (2020). Hedonism. In Mitsis, P. (ed.), Oxford Handbook of Epicurus and Epicureanism. Oxford University Press.

Vinding, M. (2018). Effective Altruism: How Can We Best Help Others? Ratio Ethica. Ungated

Vinding, M. (2020). Suffering-Focused Ethics: Defense and Implications. Ratio Ethica. Ungated

Wolf, C. (1996). Social Choice and Normative Population Theory: A Person Affecting Solution to Parfit’s Mere Addition Paradox. Philosophical Studies, 81, pp. 263-282.

Wolf, C. (1997). Person-Affecting Utilitarianism and Population Policy. In Heller, J. & Fotion, N. (eds.), Contingent Future Persons. Kluwer Academic Publishers. Ungated

Wolf, C. (2004). O Repugnance, Where Is Thy Sting? In Tännsjö, T. & Ryberg, J. (eds.), The Repugnant Conclusion. Kluwer Academic Publishers. Ungated

  1. ^

    Further arguments against a moral symmetry between happiness and suffering are found in Mayerfeld, 1999, ch. 6; Vinding, 2020, sec. 1.4 & ch. 3.

  2. ^

    On some views of wellbeing, especially those associated with Epicurus, the complete absence of any bothersome or unpleasant features is regarded as the highest pleasure (Sherman, 2017, p. 103; Tsouna, 2020, p. 175). Psychologist William James also expressed this view (James, 1901).

  3. ^

    I am not saying that the “continued existence” interpretation is necessarily the most obvious one to make, but merely that there is significant ambiguity here that is likely to confuse many readers as to what is being claimed.

  4. ^

    Moreover, a proponent of minimalist axiologies may argue that the assumption of “ignoring all effects on others” is so radical that our intuitions are unlikely to fully ignore all such instrumental effects even when we try to, and hence we may be inclined to confuse 1) the relational value of creating a life with 2) the (purported) intrinsic positive value contained within that life in isolation — especially since the example involves a life that is “full of love and accomplishment”, which might intuitively evoke many effects on others, despite the instruction to ignore such effects.

  5. ^

    MacAskill’s colleague Andreas Mogensen has commendably raised such questions about outweighing in his essay “The weight of suffering”, which I have discussed here.

    Chapter 9 in MacAskill’s book does review some psychological studies on intrapersonal tradeoffs and preferences (see e.g. p. 198), but these self-reported intrapersonal tradeoffs do not necessarily say much about which interpersonal tradeoffs we should consider plausible or valid. Nor do these intrapersonal tradeoffs generally appear to include cases of extreme suffering, let alone an entire lifetime of torment (as experienced, for instance, by many of the non-human animals whom MacAskill describes in Chapter 9). Hence, that people are willing to make intrapersonal tradeoffs between everyday experiences that are more or less enjoyable says little about whether some people’s enjoyment can morally outweigh the intense suffering or extremely bad lives endured by others. (In terms of people’s self-reported willingness to experience extreme suffering in order to gain happiness, a small survey (n=99) found that around 45 percent of respondents would not experience even a single minute of extreme suffering for any amount of happiness; and that was just the intrapersonal case — such suffering-for-happiness trades are usually considered less plausible and less permissible in the interpersonal case, cf. Mayerfeld, 1999, pp. 131-133; Vinding, 2020, sec. 3.2.)

    Individual ratings of life satisfaction are similarly limited in what they tell us about intrapersonal tradeoffs. Indeed, even a high rating of momentary life satisfaction does not imply that the evaluator’s life itself has overall been worth living, even by the evaluator’s own standards. After all, one may report a very high quality of life yet still think that the good part of one’s life cannot outweigh one’s past suffering. We can thus conclude rather little about the value of individual lives, much less the world as a whole, from people’s momentary ratings of life satisfaction.

    Finally, MacAskill also mentions various improvements that have occurred in recent centuries as a reason to be optimistic about the future of humanity in moral and evaluative terms. Yet it is unclear whether any of the improvements he mentions involve genuine positive goods, as opposed to representing a reduction of bads, e.g. child mortality, poverty, totalitarian rule, and human slavery (cf. Vinding, 2020, sec. 8.6).

  6. ^

    Some formulations of the Repugnant Conclusion do involve tradeoffs between happiness and suffering, and the conclusion indeed appears much more repugnant in those versions of the thought experiment.

  7. ^

    One might object that the Very Repugnant Conclusion has limited practical significance because it represents an unlikely scenario. But the same could be said about the Repugnant Conclusion (especially in its suffering-free variant). I do not claim that the Very Repugnant Conclusion is the most realistic case to consider. When I claim that it is more practically relevant than the Repugnant Conclusion, it is simply because it does explicitly involve tradeoffs between happiness and (extreme) suffering, which we know will also be true of our decisions pertaining to the future.

  8. ^

    For what it’s worth, I think an even stronger counterexample is “Creating hell to please the blissful”, in which an arbitrarily large number of maximally bad lives are “compensated for” by bringing a sufficiently vast base population from near-maximum welfare to maximum welfare.

  9. ^

    Some philosophers have explored, and to some degree supported, similar views. For example, Derek Parfit wrote (Parfit, 1984, p. 406): “When we consider the badness of suffering, we should claim that this badness has no upper limit. It is always bad if an extra person has to endure extreme agony. And this is always just as bad, however many others have similar lives. The badness of extra suffering never declines.” In contrast, Parfit seemed to consider it more plausible that the addition of happiness adds diminishing marginal value to the world, even though he ultimately rejected that view because he thought it had implausible implications (Parfit, 1984, pp. 406-412). See also Hurka, 1983; Gloor, 2016, sec. IV; Vinding, 2020, sec. 6.2. Such views imply that it is of chief importance to avoid very bad outcomes on a very large scale, whereas it is relatively less important to create a very large utopia.

  10. ^

    This framing effect could be taken to suggest that people often fail to fully respect the radical “other things being equal” assumption when considering the addition of lives in our world. That is, people might not truly have thought about the value of new lives in total isolation when those lives were to be added to the world we inhabit, whereas they might have come closer to that ideal when they considered the question in the context of creating a new, wholly self-contained world. (Other potential explanations of these differences are reviewed in Contestabile, 2022, sec. 4; Caviola et al., 2022, “Supplementary Materials”, pp. 7-8.)

  11. ^

    Or at least not sufficient to counterbalance the substantial number of very bad lives that the future contains in expectation, cf. the Astronomical Atrocity Problem mentioned above.

  12. ^

    Further discussion of moral uncertainty from a perspective that takes asymmetric views into account is found in DiGiovanni, 2021.

Comments (115)

Thanks Magnus for your more comprehensive summary of our population ethics study.

You mention this already, but I want to emphasize how much different framings actually matter. This surprised me the most when working on this paper. I’d thus caution anyone against making strong inferences from just one such study.

For example, we conducted the following pilot study (n = 101) where participants were randomly assigned to two different conditions: i) create a new happy person, and ii) create a new unhappy person. See the vignette below:

Imagine there was a magical machine. This machine can create a new adult person. This new person’s life, however, would definitely [not] be worth living. They would be very unhappy [happy] and live a life full of suffering and misery [bliss and joy].

You can push a button that would create this new person.

Morally speaking, how good or bad would it be to push that button?

The response scale ranged from 1 = Extremely bad to 7 = Extremely good. 

Creating a happy person was rated as only marginally better than neutral (mean = 4.4), whereas creating an unhappy person was rated as extremely bad (mean = 1.4). So this would lead one to believe that there is stro…

Toby, Carl and Brian meet the next day, still looking very pale. They shake hands and agree to not do so much descriptive ethics anymore. 


Garbage answers to verbal elicitations on such questions (and real-life decisions that require such explicit reasoning without feedback/experience, like retirement savings) are actually quite central to my views. In particular, they underpin my reliance on situations where it is easier for individuals to experience things multiple times in easy-to-process fashion and then form a behavioral response. I would be much less sanguine about error theories regarding such utterances if we didn't also see people in surveys saying they would rather take $1000 than a 15% chance of $1M, or $100 now rather than $140 a year later, i.e. utterances that are clearly mistakes.

Looking at the literature on antiaggregationist views, and at the complete conflict of those moral intuitions with personal choices and self-concerned practice (e.g. driving cars or walking outside), is also important to my thinking. No-tradeoffs views are much more appealing in talk, outside our own domains of rich experience.

Good points!

situations where it is easier for individuals to experience things multiple times in easy-to-process fashion and then form a behavioral response

It's not obvious to me that our ethical evaluation should match with the way our brains add up good and bad past experiences at the moment of deciding whether to do more of something. For example, imagine that someone loves to do extreme sports. One day, he has a severe accident and feels so much pain that he, in the moment, wishes he had never done extreme sports or maybe even wishes he had never been born. After a few months in recovery, the severity of those agonizing memories fades, and the temptation to do the sports returns, so he starts doing extreme sports again. At that future point in time, his brain has implicitly made a decision that the enjoyment outweighs the risk of severe suffering. But our ethical evaluation doesn't have to match how the evolved emotional brain adds things up at that moment in time. We might think that, ethically, the version of the person who was in extreme pain isn't compensated by other moments of the same person having fun.

Even if we think enjoyment can outweigh severe suffering within a…

Hi Brian,

I agree that preferences at different times and different subsystems can conflict. In particular, high discounting of the future can lead to forgoing a ton of positive reward or accepting lots of negative reward in the future in exchange for some short-term change. This is one reason to pay extra attention to cases of near-simultaneous comparisons, or at least to look at different arrangements of temporal ordering. But still the tradeoffs people make for themselves with a lot of experience under good conditions look better than what they tend to impose on others casually. [Also we can better trust people's self-benevolence than their benevolence towards others,  e.g. factory farming as you mention.]

And the brain machinery for processing stimuli into decisions and preferences does seem very relevant to me at least, since that's a primary source of intuitive assessments of these psychological states as having value, and for comparisons where we can make them. Strong rejection of interpersonal comparisons is also used to argue that relieving one or more pains can't compensate for losses to another individual.

I agree the hardest cases for making any kind of interpersonal …

e.g. 2 minds with equally passionate complete enthusiasm  (with no contrary psychological processes or internal currencies to provide reference points) respectively for and against their own experience, or gratitude and anger for their birth (past or future).  They  can respectively consider a world with and without their existences completely unbearable and beyond compensation. But if we're in the business of helping others for their own sakes rather than ours, I don't see the case for excluding either one's concern from our moral circle.

... 

But when I'm in a mindset of trying to do impartial good I don't see the appeal of ignoring those who would desperately, passionately want to exist, and their gratitude in worlds where they do.

I don't really see the motivation for this perspective. In what sense, or to whom, is a world without the existence of the very happy/fulfilled/whatever person "completely unbearable"? Who is "desperate" to exist? (Concern for reducing the suffering of beings who actually feel desperation is, clearly, consistent with pure NU, but by hypothesis this is set aside.) Obviously not themselves. They wouldn't exist in that counterfactual.

To …

Brian_Tomasik:
What if the individual says that after thinking very deeply about it, they believe their existence genuinely is much better than not having existed? If we're trying to be altruistic toward their own values, presumably we should also value their existence as better than nothingness (unless we think they're mistaken)? One could say that if they don't currently exist, then their nonexistence isn't a problem. It's true that their nonexistence doesn't cause suffering, but it does make impartial-altruistic total value lower than otherwise if we would consider their existence to be positive.

Your reply is an eloquent case for your view. :)

This is one reason to pay extra attention to cases of near-simultaneous comparisons

In cases of extreme suffering (and maybe also extreme pleasure), it seems to me there's an empathy gap: when things are going well, you don't truly understand how bad extreme suffering is, and when you're in severe pain, you can't properly care about large volumes of future pleasure. When the suffering is bad enough, it's as if a different brain takes over that can't see things from the other perspective, and vice versa for the pleasure-seeking brain. This seems closer to the case of "univocal viewpoints" that you mention.

I can see how for moderate pains and pleasures, a person could experience them in succession and make tradeoffs while still being in roughly the same kind of mental state without too much of an empathy gap. But the fact of those experiences being moderate and exchangeable is the reason I don't think the suffering in such cases is that morally noteworthy.

we can better trust people's self-benevolence than their benevolence towards others

Good point. :) OTOH, we might think it's morally right to have a more cautious approach to imp…

MichaelStJules:
"I would be much less sanguine about error theories regarding such utterances if we didn't also see people in surveys saying they would rather take $1000 than a 15% chance of $1M, or $100 now rather than $140 a year later, i.e. utterances that are clearly mistakes." These could be reasonable due to asymmetric information and a potentially adversarial situation, so respondents don't really trust that the chance of $1M is that high, or that they'll actually get the $140 a year from now. I would actually expect most people to pick the $100 now over $140 in a year with real money, and I wouldn't be too surprised if many would pick $1000 over a 15% chance of a million with real money. People are often ambiguity-averse. Of course, they may not really accept the premises of the hypotheticals. With respect to antiaggregationist views, people could just be ignoring small enough probabilities regardless of the severity of the risk. There are also utility functions where any definite amount of A outweighs any definite amount of B, but probabilistic tradeoffs between them are still possible: https://forum.effectivealtruism.org/posts/GK7Qq4kww5D8ndckR/michaelstjules-s-shortform?commentId=4Bvbtkq83CPWZPNLB

In the surveys they know it's all hypothetical.

You do see a bunch of crazy financial behavior in the world, but it decreases as people get more experience individually and especially socially (and with better cognitive understanding).

People do engage in rounding to zero in a lot of cases, but with lots of experience will also take on pain and injury with high cumulative or instantaneous probability (e.g. electric shocks to get rewards, labor pains, war, jobs that involve daily frequencies of choking fumes or injury).

Re lexical views that still make probabilistic tradeoffs, I don't really see the appeal of contorting lexical views that will still be crazy with respect to real-world cases so that one can say they assign infinitesimal value to good things in impossible hypotheticals (but effectively 0 in real life). Real-world cases like labor pain and risking severe injury doing stuff aren't about infinitesimal value too small for us to even perceive, but macroscopic value that we are motivated by. Is there a parameterization you would suggest that is plausible and that addresses this?

MichaelStJules:
Yes, but they might not really be able to entertain the assumptions of the hypotheticals because they're too abstract and removed from the real world cases they would plausibly face.

Very plausibly none of these possibilities would meet the lexical threshold, except with very very low probability. These people almost never beg to be killed, so the probability of unbearable suffering seems very low for any individual. The lexical threshold could be set based on bearableness or consent or something similar (e.g. Tomasik, Vinding).

Coming up with a particular parameterization seems like a bit of work, though, and I'd need more time to think about that, but it's worth noting that the same practical problem applies to very large aggregates of finite goods/bads, e.g. Heaven or Hell, very long lives, or huge numbers of mind uploads.

There's also a question of whether a life of unrelenting but less intense suffering can be lexically negative even if no particular experience meets some intensity threshold that would be lexically negative in all lives. Some might think of Omelas this way, and Mogensen's "The weight of suffering" is inclusive of this view (and also allows experiential lexical thresholds), although I don't think he discusses any particular parameterization.

Very plausibly none of these possibilities would meet the lexical threshold, except with very very low probability.

I'm confused. :) War has a rather high probability of extreme suffering. Perhaps ~10% of Russian soldiers in Ukraine have been killed as of July 2022. Some fraction of fighters in tanks die by burning to death:

The kinetic energy and friction from modern rounds causes molten metal to splash everywhere in the crew compartment and ignites the air into a fireball. You would die by melting.

You’ll hear of a tank cooking off as its ammunition explodes. That doesn’t happen right away. There’s lots to burn inside a tank other than the tank rounds. Often, the tank will burn for quite a while before the tank rounds explode.

It is sometimes a slow horrific death if one can’t get out in time or a very quick one. We had side arms and all agreed that if our tank was burning and we were caught inside and couldn’t get out, we would use a round on ourselves. That’s how bad it was.

Some workplace accidents also produce extremely painful injuries.

I don't know what fraction of people in labor wish they were dead, but probably it's not negligible: "I remember repeatedly saying I …

MichaelStJules:
Good points. I don't expect most war deaths to be nearly as painful as burning to death, but I was too quick to dismiss the frequency of very very bad deaths. I had capture and torture in mind as whatever passes the lexical threshold, and so very rare.

Also fair about labor. I don't think it really gives us an estimate of the frequency of unbearable suffering, although it seems like trauma is common and women aren't getting as much pain relief as they'd like in the UK.

On workplace injuries, in the US in 2020, the highest rate by occupation seems to be around 200 nonfatal injuries and illnesses per 100,000 workers, and 20 deaths per 100,000 workers, but they could be even higher in more specific roles: https://injuryfacts.nsc.org/work/industry-incidence-rates/most-dangerous-industries/

I assume these are estimates of the number of injuries in 2020 only, too, so the lifetime risk is several times higher in such occupations. Maybe the death rate is similar to the rate of unbearable pain, around 1 out of 5,000 per year, which seems non-tiny when added up over a lifetime (around 0.4% over 20 years assuming a geometric distribution: https://www.wolframalpha.com/input?i=1-(1-1%2F5000)^20), but also similar in probability to the kinds of risks we do mitigate without eliminating (https://forum.effectivealtruism.org/posts/5y3vzEAXhGskBhtAD/most-small-probabilities-aren-t-pascalian?commentId=jY9o6XviumXfaxNQw).
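Spelling out the arithmetic behind that 0.4% figure (the same formula as in the WolframAlpha link, assuming a constant and independent 1-in-5,000 annual risk over 20 years):

$$ \Pr(\text{at least once in 20 years}) = 1 - \left(1 - \tfrac{1}{5000}\right)^{20} \approx \tfrac{20}{5000} = 0.004 = 0.4\%. $$

The approximation holds because $(1-q)^n \approx 1 - nq$ when $nq$ is small.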
JackM:
I agree there are some objectively stupid answers that have been given to surveys, but I'm surprised these were the best examples you could come up with. Taking $1000 over a 15% chance of $1M can follow from risk aversion which can follow from diminishing marginal utility of money. And let's face it - money does have diminishing marginal utility. Wanting $100 now rather than $140 a year later can follow from the time value of money. You could invest the money, either financially or otherwise. Also, even though it's a hypothetical, people may imagine in the real scenario that they are less likely to get something promised in a year's time and therefore that they should accept what is really a similar-ish pot of money now.

They're wildly quantitatively off. Straight 40% returns are way beyond equities, let alone the risk-free rate. And it's inconsistent with all sorts of normal planning: such a discount rate would count against any savings in available investments, much concern for long-term health, building a house, or not borrowing everything you could on credit cards.

Similarly, the risk aversion involved in rejecting a 15% chance of $1M for $1000 would require a bizarre situation (like needing just $500 more to avoid short-term death), and would prevent dealing with the normal uncertainty integral to life, like going on dates with new people, trying to sell products to multiple customers with occasional big hits, etc.
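For reference, the arithmetic behind calling those survey answers mistakes (spelled out here; not part of the original comment):

$$ \text{preferring } \$100 \text{ now to } \$140 \text{ in a year} \implies \text{implied discount rate} \ge \tfrac{140-100}{100} = 40\% \text{ per year}; $$

$$ \mathbb{E}[\text{15\% chance of } \$1\text{M}] = 0.15 \times \$1{,}000{,}000 = \$150{,}000 \gg \$1{,}000. $$

So the sure $1,000 forgoes an expected $149,000, and only extreme risk aversion over modest stakes (or disbelief in the offer) could rationalize that choice.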

Brian_Tomasik:
This page says: "The APRs for unsecured credit cards designed for consumers with bad credit are typically in the range of about 25% to 36%." That's not too far from 40%. If you have almost no money and would otherwise need such a loan, taking $100 now may be reasonable.

There are claims that "Some 56% of Americans are unable to cover an unexpected $1,000 bill with savings", which suggests that a lot of people are indeed pretty close to financial emergency, though I don't know how true that is. Most people don't have many non-401k investments, and they roughly live paycheck to paycheck.

I also think people aren't pure money maximizers. They respond differently in different situations based on social norms and how things are perceived. If you get $100 that seems like a random bonus, it's socially acceptable to just take it now rather than waiting for $140 next year. But it doesn't look good to take out big credit-card loans that you'll have trouble repaying. It's normal to contribute to a retirement account. And so on. People may value being normal and not just how much money they actually have.

That said, most people probably don't think through these issues at all and do what's normal on autopilot. So I agree that the most likely explanation is lack of reflectiveness, which was your original point.

I've seen the asymmetry discussed multiple times on the forum - I think it is still the best objection to the astronomical waste argument for longtermism.

 

I don't think this has been addressed enough by longtermists (I would count "longtermism rejects the asymmetry, and if you think the asymmetry is true then you probably reject longtermism" as addressing it).

The idea that "the future might not be good" comes up on the forum every so often, but this doesn't really harm the core longtermist claims. The counter-argument is roughly:
- You still want to engage in trajectory changes (e.g. ensuring that we don't fall to the control of a stable totalitarian state)
- Since the effort bars are ginormous and we're pretty uncertain about the value of the future, you still want to avoid extinction so that we can figure this out, rather than getting locked in by a vague sense we have today

I think the asymmetry argument is quite different to the “bad futures” argument?

(Although I think the bad futures argument is one of the other good objections to the astronomical waste argument).

I think we might disagree on whether “astronomical waste” is a core longtermist claim - I think it is.

I don’t think either objection means that we shouldn’t care about extinction or about future people, but both drastically reduce the expected value of longtermist interventions.

And given that the counterfactual use of EA resources always has high expected value, the reduction in EV of longtermist interventions is action-relevant.

People who agree with asymmetry and people who are less confident in the probability of / quality of a good future would allocate fewer resources to longtermist causes than Will MacAskill would.

Someone bought into the asymmetry should still want to improve the lives of future people who will necessarily exist.

In other words the asymmetry doesn’t go against longtermist approaches that have the goal to improve average future well-being, conditional on humanity not going prematurely extinct.

Such approaches might include mitigating climate change, institutional design, and ensuring aligned AI. For example, an asymmetrist should find it very bad if AI ends up enslaving us for the rest of time…

I don't get why this is being downvoted so much. Can anyone explain?

I think that even in the EA community, there are people who vote based on whether or not they like the point being made, as opposed to whether or not the logic underlying a point is valid or not. I think this happens to explain the downvotes on my comment - some asymmetrists just don’t like longtermism and want their asymmetry to be a valid way out of it.

I don’t necessarily think this phenomenon applies to downvotes on other comments I might make though - I’m not arrogant enough to think I’m always right!

I have a feeling this phenomenon is increasing. As the movement grows we will attract people with a wider range of views and so we may see more (unjustifiable) downvoting as people downvote things that don’t align to their views (regardless of the strength of argument). I’m not sure if this will happen, but it might, and to some degree I have already started to lose some confidence in the relationship between comment/post quality and karma.

freedomandutility:
Yes, this is basically my view!
JackM:
I think the upshot of this is that an asymmetrist who accepts the other key arguments underlying longtermism (the future is vast in expectation, we can tractably influence the far future) should want to allocate all of their altruistic resources to longtermist causes. They would just be more selective about which specific causes. For an asymmetrist, the stakes are still incredibly high, and it's not as if the marginal value of contributing to longtermist approaches such as AI alignment, climate change, etc. has been driven down to a very low level. So I'm basically disagreeing with you when you say:
Alex Mallen:
This post by Rohin attempts to address it. If you hold the asymmetry view, then you would allocate more resources to [1] causing a new neutral life to come into existence (-1 cent) and then, once they exist, improving that neutral life (many dollars), than you would to [2] causing a new happy life to come into existence (-1 cent). They both result in the same world. In general, you can make a Dutch-booking argument like this whenever your resource allocation doesn't correspond to the gradient of a value function (i.e. the resources should be aimed at improving the state of the world).
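Schematically, with $u > 0$ denoting the value the asymmetric view assigns to improving an existing neutral life to a happy one (my own rendering of the argument, with placeholder costs):

$$ V(\text{create neutral for } 1\text{¢, then improve for } \$D) = 0 + u, \qquad V(\text{create happy directly for } 1\text{¢}) = 0. $$

Both routes end in exactly the same world, one extra happy person, yet the first is valued at $u > 0$ and the second at $0$, so the asymmetric view will pay up to $D$ extra for the roundabout route, and such an allocation cannot correspond to the gradient of any value function over world-states.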
Anthony DiGiovanni:
This only applies to flavors of the Asymmetry that treat happiness as intrinsically valuable, such that you would pay to add happiness to a "neutral" life (without relieving any suffering by doing so). If the reason you don't consider it good to create new lives with more happiness than suffering is that you don't think happiness is intrinsically valuable, at least not at the price of increasing suffering, then you can't get Dutch booked this way. See this comment.

You object to the MacAskill quote

If we think it’s bad to bring into existence a life of suffering, why should we not think that it’s good to bring into existence a flourishing life? I think any argument for the first claim would also be a good argument for the second.

And then say 

Indeed, many arguments support the former while positively denying the latter. One such argument is that the presence of suffering is bad and morally worth preventing while the absence of pleasure is not bad and not a problem,

But I don't see how this challenges MacAskill's point, so much as restates the claim he was arguing against. I think he could simply reply to what you said by asking, "okay, so why do we have reason to prevent what is bad but no reason to bring about what is good?" 

Thanks for your question, Michael :)

I should note that the main thing I take issue with in that quote of MacAskill's is the general (and AFAICT unargued) statement that "any argument for the first claim would also be a good argument for the second". I think there are many arguments about which that statement is not true (some of which are reviewed in Gloor, 2016; Vinding, 2020, ch. 3; Animal Ethics, 2021).

As for the particular argument of mine that you quote, I admit that a lot of work was deferred to the associated links and references. I think there are various ways to unpack and support that line of argument.

One of them rests on the intuition that ethics is about solving problems (an intuition that one may or may not share, of course).[1] If one shares that moral intuition, or premise, then it seems plausible to say that the presence of suffering or miserable lives amounts to a problem, or a problematic state, whereas the absence of pleasure or pleasurable lives does not (other things equal) amount to a problem for anyone, or to a problematic state. That line of argument (whose premises may be challenged, to be sure) does not appear "flippable" such that it becomes a simila…

I'm not sure how I feel about relying on intuitions in thought experiments such as those. I don't necessarily trust my intuitions.

If you'd asked me 5-10 years ago whose life is more valuable: an average pig's life or a severely mentally-challenged human's life I would have said the latter without a thought. Now I happen to think it is likely to be the former. Before I was going off pure intuition. Now I am going off developed philosophical arguments such as the one Singer outlines in his book Animal Liberation, as well as some empirical facts.

My point is when I'm deciding if the absence of pleasure is problematic or not I would prefer for there to be some philosophical argument why or why not, rather than examples that show that my intuition goes against this. You could argue that such arguments don't really exist, and that all ethical judgement relies on intuition to some extent, but I'm a bit more hopeful. For example Michael St Jules' comment is along these lines and is interesting.

On a really basic level my philosophical argument would be that suffering is bad, and pleasure is good (the most basic of ethical axioms that we have to accept to get consequentialist ethics off the g…

MichaelStJules:
With respect to your last paragraph, someone who holds a person-affecting view might respond that you have things backwards (indeed, this is what Frick claims): welfare matters because moral patients matter, rather than the other way around, so you need to put the person first, and something something therefore person-affecting view! Then we could discuss what welfare means, and that could be more pleasure and less suffering, or something else.

That being said, this seems kind of confusing to me, too. Welfare matters because moral patients matter, but moral patients are, in my view, just those beings capable of welfare. So, welfare had to come first anyway, and we just added extra steps. I suspect this can be fixed by dealing directly with interests themselves as the atoms that matter, rather than entire moral patients. E.g. preference satisfaction matters because preferences matter, and something something therefore preference-affecting view! I think such an account would deny that giving even existing people more pleasure is good in itself: they'd need to have an interest in more pleasure for it to make them better off. Maybe we always do have such an interest by our nature, though, and that's something someone could claim, although I find that unintuitive.

Another response may just be that value is complex, and we shouldn't give too much extra weight to simpler views just because they're simpler. That can definitely go even further, e.g. welfare is not cardinally measurable or nothing matters. Also, I think only suffering (or only pleasure) mattering is actually in some sense a simpler view than both suffering and pleasure mattering, since with both, you need to explain why each matters and tradeoffs between them. Some claim that symmetric hedonism is not value monistic at all.
Anthony DiGiovanni:
It seems like you're just relying on your intuition that pleasure is intrinsically good, and calling that an axiom we have to accept. I don't think we have to accept that at all — rejecting it does have some counterintuitive consequences, I won't deny that, but so does accepting it. It's not at all obvious (and Magnus's post points to some reasons we might favor rejecting this "axiom").
JackM:
Would you say that saying suffering is bad is a similar intuition?
Anthony DiGiovanni:
No, I know of no thought experiments or any arguments generally that make me doubt that suffering is bad. Do you?
JackM:
Well if you think suffering is bad and pleasure is not good then the counterintuitive (to the vast majority of people) conclusion is that we should (painlessly if possible, but probably painfully if necessary) ensure everyone gets killed off so that we never have any suffering again. It may well be true that we should ensure everyone gets killed off, but this is certainly an argument that many find compelling against the dual claim that suffering is bad and pleasure is not good.
Anthony DiGiovanni:
* That case does run counter to "suffering is intrinsically bad but happiness isn't," but it doesn't run counter to "suffering is bad," which is what your last comment asked about. I don't see any compelling reasons to doubt that suffering is bad, but I do see some compelling reasons to doubt that happiness is good.
* That's just an intuition, no? (i.e. that everyone painlessly dying would be bad.) I don't really understand why you want to call it an "axiom" that happiness is intrinsically good, as if this is stronger than an intuition, which seemed to be the point of your original comment.
* See this post for why I don't think the case you presented is decisive against the view I'm defending.
JackM:
What is your compelling reason to doubt happiness is good? Is it thought experiments such as the ones Magnus has put forward? I think these argue that alleviating suffering is more pressing than creating happiness, but I don't think they argue that creating happiness isn't good.

I do happen to think suffering is bad, but here is a potentially reasonable counterargument - some people think that suffering is what makes life meaningful. For example, some think the idea of drugs being widespread, alleviating everyone of all pain all the time, is monstrous. People's children would get killed and the parents just wouldn't feel any negative emotion - this seems a bit wrong...

You could try to use your pareto improvement argument here, i.e. that it's better if parents still have a preference for their child not to have been killed, but also not to feel any sort of pain related to it. Firstly, I do think many people would want there to be some pain in this situation and that they would think of a lack of pain as disrespectful and grotesque. Otherwise, I'm slightly confused about one having a preference that the child wasn't killed, but also not feeling any sort of hedonic pain about it... is this contradictory?

As I said, I do think suffering is bad, but I'm yet to be convinced this is less of a leap of faith than saying happiness is good.

Say there is a perfectly content monk who isn't suffering at all. Do you have a moral obligation to make them feel pleasure?

JackM:
It would certainly be a good thing to do. And if I could do it costlessly I think I would see it as an obligation, although I’m slightly fuzzy on the concept of moral obligations in the first place. In reality however there would be an opportunity cost. We’re generally more effective at alleviating suffering than creating pleasure, so we should generally focus on doing the former.

To modify the monk case, what if we could (costlessly; all else equal) make the solitary monk feel a notional 11 units of pleasure followed by 10 units of suffering?

Or, extreme pleasure of "+1001" followed by extreme suffering of "-1000"?

Cases like these make me doubt the assumption of happiness as an independent good. I know meditators who claim to have learned to generate pleasure at will in jhana states, who don't buy the hedonic arithmetic, and who prefer the states of unexcited contentment over states of intense pleasure.
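To make the disputed hedonic arithmetic explicit (a simple additive rendering of the two cases above, using the stated unit values):

$$ (+11) + (-10) = +1 > 0, \qquad (+1001) + (-1000) = +1 > 0. $$

A symmetric hedonist must therefore count both packages as net improvements for the monk, whereas the view sketched here denies that the pleasure terms belong in the sum at all, leaving only an uncompensated $-10$ or $-1000$.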

So I don't want to impose, from the outside, assumptions about the hedonic arithmetic onto mind-moments who may not buy them from the inside.

Additionally, I feel no personal need for the concept of intrinsic positive value anymore, because all my perceptions of positive value seem just fine explicable in terms of their indirect connections to subjective problems. (I used to use the concept, and it took me many years to translate it into relational terms in all the contexts where it pops up, but I seem to have now uprooted it so that it no longer pops to mind, or at least it stopped doing so over the past four years. In programming terms, one could say that up…

I’m not sure if “pleasure” is the right word. I certainly think that improving one’s mental state is always good, even if this starts at a point in which there is no negative experience at all.

This might not involve increasing “pleasure”. Instead it could be increasing the amount of “meaning” felt or “love” felt. If monks say they prefer contentment over intense pleasure then fine - I would say the contentment state is hedonically better in some way.

This is probably me defining “hedonically better” differently to you but it doesn’t really matter. The point is I think you can improve the wellbeing of someone who is experiencing no suffering and that this is objectively a desirable thing to do.

Teo Ajantaival:
Relevant recent posts:

https://www.simonknutsson.com/undisturbedness-as-the-hedonic-ceiling/

https://centerforreducingsuffering.org/phenomenological-argument/

(I think these unpack a view I share, better than I have.)

Edit: For tranquilist and Epicurean takes, I also like Gloor (2017, sec. 2.1) and Sherman (2017, pp. 103–107), respectively.
Anthony DiGiovanni:
I think one crux here is that Teo and I would say, calling an increase in the intensity of a happy experience "improving one's mental state" is a substantive philosophical claim. The kind of view we're defending does not say something like, "Improvements of one's mental state are only good if they relieve suffering." I would agree that that sounds kind of arbitrary. The more defensible alternative is that replacing contentment (or absence of any experience) with increasingly intense happiness / meaning / love is not itself an improvement in mental state. And this follows from intuitions like "If a mind doesn't experience a need for change (and won't do so in the future), what is there to improve?"
Dan Hageman:
Can you elaborate a bit on why the seemingly arbitrary view you quoted in your first paragraph wouldn't follow from the view that you and Teo are defending? Are you saying that from your and Teo's POVs, there's a way to 'improve a mental state' that doesn't amount to decreasing suffering (/preventing it)? The statement itself seems a bit odd, since 'improvements' seems to imply 'goodness', and the statement hypothetically considers situations where improvements may not be good... so I thought I would see if you could clarify.

In regards to the 'defensible alternative', it seems that one could defend a plausible view that a state of contentment, moved to a state of increased bliss, is indeed an improvement, even though there wasn't a need for change. Such an understanding seems plausible in a self-intimating way when one valence state transitions to the next, insofar as we concede that there are states of more or less pleasure, outside any negatively valenced states. It seems that one could do this all the while maintaining that such improvements are never capable of outweighing the mitigation of problematic, suffering states.

Note: using the term 'improvement' can easily lead to accidental equivocation between scenarios of mitigating suffering versus increasing pleasure, but the ethical discernment between each seems manageable.
1
Anthony DiGiovanni
2y
No, that's precisely what I'm denying. So, the reason I mentioned that "arbitrary" view was that I thought Jack might be conflating my/Teo's view with one that (1) agrees that happiness intrinsically improves a mental state, but (2) denies that improving a mental state in this particular way is good (while improving a mental state via suffering-reduction is good). It's prima facie plausible that there's an improvement, sure, but upon reflection I don't think my experience that happiness has varying intensities implies that moving from contentment to more intense happiness is an improvement. Analogously, you can increase the complexity and artistic sophistication of some painting, say, but if no one ever observes it (which I'm comparing to no one suffering from the lack of more intense happiness), there's no "improvement" to the painting. You could, yeah, but I think "improvement" has such a strong connotation to most people that something of intrinsic value has been added. So I'd worry that using that language would be confusing, especially to welfarist consequentialists who think (as seems really plausible to me) that you should do an act to the extent that it improves the state of the world.
1
Dan Hageman
2y
Okay, thanks for clarifying for me! I think I was confused by that opening line: you clarified that your view does not say that only a relief of suffering improves a mental state, but in reality you do think such is the case, just not in conjunction with the claim that happiness also intrinsically improves a mental state, correct?

> Analogously, you can increase the complexity and artistic sophistication of some painting, say, but if no one ever observes it (which I'm comparing to no one suffering from the lack of more intense happiness), there's no "improvement" to the painting.

With respect to this, I should have clarified that the state of contentment that becomes a more intense positive state was one of an existing and experiencing being, not a content state of non-existence where pleasure is then brought into existence. Given this, would the painting analogy hold, since in this thought experiment there is an experiencer who has some sort of improvement in their mental state, albeit not a categorical sort of improvement that is on par with the sort that relieves suffering? I.e. it wasn't a problem per se (no suffering) that they were being deprived of the more intense pleasure, but the move from lower pleasure to higher pleasure is still an improvement in some way (albeit perhaps a better word would be needed to distinguish the lexical importance between these sorts of *improvements*).
5
Anthony DiGiovanni
2y
I think they do argue that creating happiness isn't intrinsically good, because you can always construct a version of the Very Repugnant Conclusion that applies to a view that says suffering is weighed some finite X times more than happiness, and I find those versions almost as repugnant. E.g. suppose that on classical utilitarianism we prefer to create 100 purely miserable lives plus some large N micro-pleasure lives over creating 10 purely blissful lives. On this new view, we'd prefer to create 100 purely miserable lives plus X*N micro-pleasure lives over the 10 purely blissful lives. Another variant you could try is a symmetric lexical view where only sufficiently blissful experiences are allowed to outweigh misery. But while some people find that dissolves the repugnance of the VRC, I can't say the same.

Increasing the X, or introducing lexicalities, to try to escape the VRC just misses the point, I think. The problem is that (even super-awesome/profound) happiness is treated as intrinsically commensurable with miserable experiences, as if giving someone else happiness in itself solves the miserable person's urgent problem. That's just fundamentally opposed to what I find morally compelling. (I like the monk example given in the other response to your question, anywho. I've written about why I find strong SFE compelling elsewhere, like here and here.)

Yeah, that is indeed my response; I have basically no sympathy to the perspective that considers the pain intrinsically necessary in this scenario, or any scenario. This view seems to clearly conflate intrinsic with instrumental value. "Disrespect" and "grotesqueness" are just not things that seem intrinsically important to me, at all.

Depends how you define a preference, I guess, but the point of the thought experiment is to suspend your disbelief about the flow-through effects here. Just imagine that literally nothing changes about the world other than that the suffering is relieved. This seems so obviously better.
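To put rough numbers on the earlier X-times-weighting example (a purely hypothetical illustration; the specific welfare values are assumptions, not anything from the book or this thread): suppose each purely miserable life has welfare $-100$, each purely blissful life $+100$, and each micro-pleasure life $+\varepsilon$. Classical utilitarianism prefers the miserable-plus-micro-pleasure world whenever

$$N\varepsilon - 100 \cdot 100 > 10 \cdot 100, \quad\text{i.e.}\quad N > \frac{11{,}000}{\varepsilon},$$

and the view on which suffering is weighed $X$ times more still prefers it whenever

$$N\varepsilon - X \cdot 100 \cdot 100 > 10 \cdot 100, \quad\text{i.e.}\quad N > \frac{10{,}000\,X + 1{,}000}{\varepsilon}.$$

Any finite $X$ only raises the required $N$; it never blocks the conclusion.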
5
JackM
2y
“I have basically no sympathy to the perspective that considers the pain intrinsically necessary in this scenario, or any scenario.” I wasn’t expecting you to. I don’t have any sympathy for it either! I was just giving you an argument that I suspect many others would find compelling. Certainly if my sister died and I didn’t feel anything, my parents wouldn’t like that! Maybe it’s not particularly relevant to you if an argument is considered compelling by others, but I wanted to raise it just in case. I certainly don’t expect to change your mind on this - nor do I want to as I also think suffering is bad! I’m just not sure suffering being bad is a smaller leap than saying happiness is good.
5
Anthony DiGiovanni
2y
Here's another way of saying my objection to your original comment: What makes "happiness is intrinsically good" more of an axiom than "sufficiently intense suffering is morally serious in a sense that happiness (of the sort that doesn't relieve any suffering) isn't, so the latter can't compensate for the former"? I don't see what answer you can give that doesn't appeal to intuitions about cases.
2
MichaelStJules
2y
https://forum.effectivealtruism.org/posts/GK7Qq4kww5D8ndckR/michaelstjules-s-shortform?commentId=LZNATg5BoBT3w5AYz
2
Anthony DiGiovanni
2y
For all practical purposes suffering is dispreferred by beings who experience it, as you know, so I don't find this to be a counterexample. When you say you don't want someone to make you less sad about the problems in the world, it seems like a Pareto improvement would be to relieve your sadness without changing your motivation to solve those problems—if you agree, it seems you should agree the sadness itself is intrinsically bad.
3
Hank_B
2y
This response is a bit weird to me because the linked post has two counter-examples and you only answered one, but I feel like the other still applies. The other thought experiment mentioned in the piece is that of a cow separated from her calf and the two bovines being distressed by this. Michael says (and I'm sympathetic) that the moral action here is to fulfill the bovines' preference to be together, not to remove their pain at separation without fulfilling that preference (e.g. through drugging the cows into bliss).

Your response about Pareto improvements doesn't seem to work here, or seems less intuitive to me at least. Removing their sadness at separation while leaving their desire to be together intact isn't a clear Pareto improvement unless one already accepts that pain is what is bad. And it is precisely the imagining of a separated cow/calf duo drugged into happiness but wanting one another that makes me think maybe it isn't the pain that matters.
3
Anthony DiGiovanni
2y
I didn't directly respond to the other one because the principle is exactly the same. I'm puzzled that you think otherwise. I mean, in thought experiments like this all one can hope for is to probe intuitions that you either do or don't have. It's not question-begging on my part because my point is: Imagine that you can remove the cow's suffering but leave everything else practically the same. (This, by definition, assesses the intrinsic value of relieving suffering.) How could that not be better? It's a Pareto improvement because, contra the "drugged into happiness" image, the idea is not that you've relieved the suffering but thwarted the cow's goal to be reunited with its child; the goals are exactly the same, but the suffering is gone, and it just seems pretty obvious to me that that's a much better state of the world.
3
Hank_B
2y
I think my above reply missed the mark here. Sticking with the cow example, I agree with you that if we removed their pain at being separated while leaving the desire to be together intact, this seems like a Pareto improvement over not removing their pain.

A preferentist would insist here that the removal of pain is not what makes that situation better, but rather that pain is (probably) dispreferred by the cows, so removing it gives them something they want.

But the negative hedonist (pain is bad, pleasure is neutral) is stuck with saying that the "drugged into happiness" image is as good as the "cows happily reunited" image. A preferentist, by contrast, can (I think intuitively) assert that reuniting the cows is better than just removing their pain, because reunification fulfills (1) the cows' desire to be free of pain and (2) their desire to be together.
2
MichaelStJules
2y
I don't have settled views on whether or not suffering is necessarily bad in itself. That someone (or almost everyone) disprefers suffering doesn’t mean suffering is bad in itself. Even if people always disprefer less pleasure, it wouldn't follow that the absence of pleasure is bad in itself. Even those with symmetric views wouldn't say so; they'd say its absence is neutral and its presence is good and better. We wouldn't say dispreferring suffering makes the absence of suffering an intrinsic good.

I'm sympathetic to a more general "relative-only" view according to which suffering is an evaluative impression against the state someone is in relative to an "empty" state or nonexistence, so a kind of self-undermining evaluation. Maybe this is close enough to intrinsic badness and can be treated like intrinsic badness, but it doesn't seem to actually be intrinsic badness.

I think Frick's approach, Bader’s approach and Actualism, each applied to preferences that are "relative only" rather than whole lives, could still imply that worlds with less suffering are better and some lives with suffering are better not started, all else equal, while no lives are better started, all else equal. This is compatible with the reason we sometimes suffer being mere relative evaluations between states of the world, without those evaluations being "against" the current state or implying that things are worse than nothing.

It seems that a hedonist would need to say that removing my motivation is no harm to me personally, either (except for instrumental reasons), but that violates an interest of mine, so it seems wrong to me. This doesn't necessarily count against suffering being bad in itself or respond to your proposed Pareto improvement; it could just count against only suffering mattering.

For what it's worth, Magnus cites me, 2019 and Frick, 2020 further down.

My post and some other Actualist views support the procreation asymmetry without directly depending on any kind of asymmetry between goods and bads, harms and benefits, victims and beneficiaries, problems and opportunities or any kind of claimed psychological/consciousness asymmetries, relying instead only on an asymmetry in treating actual-world people/interests vs non-actual-world people/interests. I didn't really know what Actualism was at the time I wrote my post, and more standard accounts like Weak Actualism (see perhaps Hare, 2007, Roberts, 2011 or Spencer, 2021, and the latter responds to objections in the first two) or Spencer, 2021's recent Stable Actualism may be better. Another relatively recent paper is Cohen, 2019. There are probably other Actualist accounts out there, too.

I think Frick, 2020 also supports the procreation asymmetry without depending directly on an asymmetry, although Bykvist and Campbell, 2021 dispute this. Frick claims we have conditional reasons of the following kind:

I have reason to (if I do p, do q)

(In evaluative terms, which I prefer, we might instead write "it's better that (if p, then q)... (read more)

I think it does challenge the point but could have done so more clearly.

The post isn't broadly discussing "preventing bad things and causing good things", but more narrowly discussing preventing a person from existing or bringing someone into existence, who could have a good life or a bad life.

 

"Why should we not think that it’s good to bring into existence a flourishing life?"

Assuming flourishing means "net positive" and not "devoid of suffering", for the individual with a flourishing life who we are considering bringing into existence:

The potential "presence of suffering" in their life, if we did bring them into existence, would be "bad and morally worth preventing"

while

The potential "absence of pleasure", if we don't bring them into existence, "is not bad and not a problem".

2
JackM
2y
This seems to be begging the question. Someone could flat out disagree, holding the position that it is a problem not to create wellbeing/pleasure when one can do so, just as it is a problem not to avoid suffering / pain when one can do so. It still doesn't seem to me that you have given any independent justification for the claim I've quoted.
7
freedomandutility
2y
In Magnus’s post, Will MacAskill makes the claim that: “If we think it’s bad to bring into existence a life of suffering, why should we not think that it’s good to bring into existence a flourishing life? I think any argument for the first claim would also be a good argument for the second.” Magnus presents the asymmetry as an example of a view that offers an argument for the first claim but not for the second claim. I agree that someone can just say they disagree with the asymmetry, and many people do - I think of it as a terminal belief that doesn’t have “underlying” justification, similar to views like “suffering is bad”. (Is there a proper philosophy term for what I’m calling a “terminal belief”?)
2
JackM
2y
What is the reasoning that the asymmetry uses to argue for the first claim? This isn't currently clear to me. I suspect that, whatever the reasoning is, it can also be used to argue for the second claim.
2
MichaelStJules
2y
See my comment here.

The fundamental disagreement here is about whether something can meaningfully be good without solving any preexisting problem. At least, it must be good in a much weaker sense than something that does solve a problem.

1
Dan Hageman
2y
Right, though couldn't one instead differ on whether they accept the evaluation (which I do) that one of the situations lacks a preexisting problem? If one takes the absence of pleasure to be a preexisting problem, and perhaps even on the same moral plane as the preexisting problem of existing suffering, then the fundamental disagreement may not be sufficiently identified in this manner, right?

Hi - thanks for writing this! A few things regarding your references to WWOTF:

The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172)

I’m confused by this sentence. The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”,  “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.

this still leaves open the question as to whether happiness and happy lives can outweigh suffering and miserable lives, let alone extreme suffering and extremely bad lives.

It’s true that I don’t discuss views on which some goods/bads are lexically more important than others; I think such views have major problems, but I don’t talk about those problems in the book. (Briefly: If you think that any X outweighs any Y, then you seem forced to believe that any probability of X, no matter how tiny, outweighs any Y. So: you can either prevent a one in a trillion trillion trillion chance of someone with a suffering life coming into existence, or guarantee a trillion l... (read more)

I think such views have major problems, but I don’t talk about those problems in the book. (Briefly: If you think that any X outweighs any Y, then you seem forced to believe that any probability of X, no matter how tiny, outweighs any Y. So: you can either prevent a one in a trillion trillion trillion chance of someone with a suffering life coming into existence, or guarantee a trillion lives of bliss. The lexical view says you should do the former. This seems wrong, and I think doesn’t hold up under moral uncertainty, either. There are ways of avoiding the problem, but they run into other issues.)

It really isn't clear to me that the problem you sketched is so much worse than the problems with total symmetric, average, or critical-level axiology, or the "intuition of neutrality." In fact this conclusion seems much less bad than the Sadistic Conclusion or variants of that, which affect the latter three. So I find it puzzling how much attention you (and many other EAs writing about population ethics and axiology generally; I don't mean to pick on you in particular!) devoted to those three views. And I'm not sure why you think this problem is so much worse than the Very Repugnant Conclusion (among other problems with outweighing views), either.

I sympathize with the difficulty of addressing so much content in a popular book. But this is a pretty crucial axiological debate that's been going on in EA for some time, and it can determine which longtermist interventions someone prioritizes.

The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172)

I’m confused by this sentence. The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”,  “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.

The arguments presented against the Asymmetry in the section “The Intuition of Neutrality” are the ones I criticize in the post. The core claims defended in “Clumsy Gods” and “Why the Intuition of Neutrality Is Wrong” are, as far as I can tell, relative claims: it is better to bring Bob/Migraine-Free into existence than Alice/Migraine, because Bob/Migraine-Free would be better off. Someone who endorses the Asymmetry may agree with those relative claims (which are fairly easy to agree with) without giving up on the Asymmetry.

Specifically, one can agree that it’s better to bring Bob into existence than to bring Alice into existence while also maintaining that it would be better if Bob (or Migraine-Free) were not brought into existenc... (read more)

The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”,  “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.

You seem to be using a different definition of the Asymmetry than Magnus is, and I'm not sure it's a much more common one. On Magnus's definition (which is also used by e.g. Chappell; Holtug (2004), "Person-affecting Moralities"; and McMahan (1981), "Problems of Population Theory"), bringing into existence lives that have "positive wellbeing" is at best neutral. It could well be negative.

The kind of Asymmetry Magnus is defending here doesn't imply the intuition of neutrality, and so isn't vulnerable to your critiques like violating transitivity, or relying on a confused concept of necessarily existing people.

3
Evžen
1y
If bringing into existence lives that have positive wellbeing is at best neutral (and presumably strongly negative for lives with negative wellbeing) — why have children at all? Is it the instrumental value their lives bring that we're after under this philosophy? (Sorry, I'm almost surely missing something very basic here — not a philosopher.)
8
david_reinstein
2y
I'm struggling to interpret this statement. What is the underlying sense in which pain and pleasure are measured in the same units and are thus ‘equal even though the pain is morally weighted more highly’? Knutsson states the problem well IMO.[1] Maybe you have some ideas and intuition into how to think about this?

1. Thanks MSJ for this reference ↩︎
2
JackM
2y
One way of thinking about this would be in relation to self-reported life satisfaction. Consider someone who rates their life satisfaction at 1/10, citing extreme hunger. Now suppose you give them a certain amount of food to bring them up to 2/10. You have essentially reduced suffering by 1 unit.

Now consider someone who rates their satisfaction at 10/10, believing that their life could not be any better. Then suppose you do something for them (e.g. you give them a wonderful present) and they realise that their life is even better than before, retrospectively concluding that they have actually increased from 9/10 to 10/10. We might say that happiness has been increased by one unit. (I take this ‘retrospection’ approach to avoid the worry that I might also be ‘reducing suffering’ here, since there was no suffering at all to begin with - not sure if it really works, or if it’s actually necessary.)

If someone finds it more important to bring the one person from 1/10 to 2/10 than to bring the other person from 9/10 to 10/10, one might be weighting the removal of a unit of suffering as more important than the creation of a unit of happiness.
2
david_reinstein
2y
But how would I know that we were comparing the same 'amount of change' in these cases? What makes going from 1/10 to 2/10 constitute "one unit" and going from 9/10 to 10/10 also "one unit"? And if these are not the same 'unit', then how do I know that the person who finds the first movement more valuable 'cares about suffering more'? Instead it might be that a 1-2 movement is just "a larger quantity" than a 9-10 movement.
2
JackM
2y
In practice you would have to assume that people generally report on the same scale. There is some evidence from happiness research that this is the case (I think) but I’m not sure where this has got to.

From your original question I thought you were essentially trying to understand, in theory, what weighting one unit of pain as greater than one unit of pleasure might mean. As per my example above, one could prioritise a one-unit change on a self-reported scale if the change occurs at a lower position on the scale (assuming different respondents are using the same scale).

Another perspective is that one could consider two changes that are the same in “intensity”, but one involves alleviating suffering (giving some food to a starving person) and one involves making someone happier (giving someone a gift) - and then prioritise giving the food. For these two actions to be the same in intensity, you can’t be giving all that much food to the starving person, because it will generally be easy to alleviate a large amount of suffering with a ‘small’ amount of food, but relatively difficult to increase the happiness of someone who isn’t suffering much, even with an expensive gift.

Not sure if I’m answering your questions at all, but still interesting to think through!
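One crude way to make this weighting idea concrete (a sketch with made-up numbers, not a proposal from the happiness literature): let $w(s)$ be the moral weight of a one-point gain starting from self-reported level $s$ on the 0–10 scale, with levels below some neutral point (say 5) treated as suffering:

$$w(s) = \begin{cases} 2 & \text{if } s < 5 \\ 1 & \text{if } s \geq 5 \end{cases}$$

Then the move from 1/10 to 2/10 counts for twice as much as the move from 9/10 to 10/10, even though both are "one unit" on the reported scale.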
3
MichaelStJules
2y
Thank you for clarifying! This is true for utility/social welfare functions that are additive even over uncertainty (and maybe some other classes), but not in general. See this thread of mine. Is this related to lexical amplifications of nonlexical theories like CU under MEC? Or another approach to moral uncertainty? My impression from your co-authored book on moral uncertainty is that you endorse MEC with intertheoretic comparisons (I get the impression Ord endorses a parliamentary approach from his other work, but I don't know about Bykvist). 

Good post! I mostly agree with sections (2) and (4), and would echo other comments that various points made are under-discussed.

My "disagreement" - if you can call it that - is that I think the general case here can be made more compelling by using assumptions and arguments that are weaker/more widely shared/more likely to be true.  Some points:

  • The uncertainty Will fails to discuss (in short, the Very Repugnant Conclusion) can be framed as fundamental moral uncertainty, but I think it's better understood as the more prosaic, sorta-almost-empirical question "Would a self-interested rational agent with full knowledge and wisdom choose to experience every moment of sentience in a given world over a given span of time?" 
    • I personally find this framing more compelling because it puts one in the position of answering something more along the lines of "would I live the life of a fish that dies by asphyxiation?" than "does some (spooky-seeming) force called 'moral outweighing' exist in the universe?"
    • Even a fully-committed total utilitarian who would maintain that all amounts of suffering are in principle outweighable can have this kind of quasi-empirical uncertainty of w
... (read more)

Thanks for writing. You're right that MacAskill doesn't address these non-obvious points, though I want to push back a bit. Several of your arguments are arguments for the view that "intrinsically positive lives do not exist," and more generally that intrinsically positive moments do not exist. Since we're talking about repugnant conclusions, readers should note that this view has some repugnant conclusions of its own.

[Edit: I stated the following criticism too generally; it only applies when one makes an additional assumption: that experiences matter, whi... (read more)

It implies that there wouldn't be anything wrong with immediately killing everyone reading this, their families, and everyone else, since this supposedly wouldn't be destroying anything positive.

That's not how many people with the views Magnus described would interpret their views.

For instance, let's take my article on tranquilism, which Magnus cites. It says this in the introduction:

Tranquilism is not meant as a standalone moral theory, but as a way to think about well-being and the value of different experiences. Tranquilism can then serve as a building block for more complex moral views where things other than experiences also matter morally.

Further in the text, it contains the following passage:

As a theory limited to the evaluation of experienced well-being, tranquilism is compatible with pluralistic moral views where things other than experiences – for instance the accomplishment of preferences and life goals – can be (dis)valuable too.

And at the end in the summary:

Tranquilism is not committed to the view that cravings are all that matter. Our motivation is multifaceted, and next to impulsive motivation through cravings, we are also motivated by desires to achiev

... (read more)
6
Mau
2y
Thanks for the thoughtful reply. You're right, you can avoid the implications I mentioned by adopting a preference/goal-focused framework. (I've edited my original comment to flag this; thanks for helping me recognize it.) That does resolve some problems, but I think it also breaks most of the original post's arguments, since they weren't made in (and don't easily fit into) a preference-focused framework. For example:

* The post argues that making happy people isn't good and making miserable people is bad, because creating happiness isn't good and creating suffering is bad. But it's unclear how this argument can be translated into a preference-focused framework.
  * Could it be that "satisfying preferences isn't good, and frustrating preferences is bad"? That doesn't make sense to me; it's not clear to me there's a meaningful distinction between satisfying a preference and keeping it from being frustrated.
  * Could it be that "satisfying positive preferences isn't good, and satisfying negative preferences is good"? But that seems pretty arbitrary, since whether we call some preference positive or negative seems pretty arbitrary (e.g. do I have a positive preference to eat or a negative preference to not be hungry? Is there a meaningful difference?).
* The second section of the original post emphasizes extreme suffering and how it might not be outweighable. But what does this mean in a preference-focused context? Extreme preference frustration? I suspect, for many, that doesn't have the intuitive horribleness that extreme suffering does.
* The third section of the post focuses on surveys that ask questions about happiness and suffering, so we can't easily generalize these results to a preference-focused framework.

(I also agree--as I tried to note in my original comment's first bullet point--that pluralistic or "all-things-considered" views avoid the implications I mentioned. But I think ethical views should be partly judged based on the implications they have.)
8
Lukas_Gloor
2y
My impression of the OP's primary point was that asymmetric views are under-discussed. Many asymmetric views are preference-based and this is mentioned in the OP (e.g., the link to Anti-frustrationism or mention of Benatar). Of the experience-based asymmetric views discussed in the OP, my posts on tranquilism and suffering-focused ethics mention value pluralism and the idea that things other than experiences (i.e., preferences mostly) could also be valuable. Given these explicit mentions, it seems false to claim that "these views don't easily fit into a preference-focused framework."

Probably similarly, the OP links to posts by Teo Ajantaival which I've only skimmed, but there's a lengthy and nuanced-seeming discussion on why minimalist axiologies, properly construed, don't have the implications you ascribed to them. The NU FAQ is a bit more single-minded in its style/approach, but on the question "Does negative utilitarianism solve ethics" it says "ethics is nothing that can be 'solved.'" This at least tones down the fanaticism a bit and opens up options to incorporate other principles or other perspectives. (Also, it contains an entire section on NIPU – negative idealized preference utilitarianism. So, that may count as another preference-based view alluded to in the OP, since the NU FAQ doesn't say whether it finds N(H)U or NIPU "more convincing.")

I'm not sure why you think the argument would have to be translated into a preference-focused framework. In my previous comment I wanted to say the following: (1) The OP mentions that asymmetric positions are underappreciated and cites some examples, including Anti-Frustrationism, which is (already) a preference-based view. (2) While the OP does discuss experience-focused views that say nothing is of intrinsic value, those views are compatible with a pluralistic conception of "ethics/morality" where preferences could matter too. Therefore, killing people against their will to reduce suffering isn't a clear implication o
5
Mau
2y
I think this misunderstands the point I was making. I meant to highlight how, if you're adopting a pluralistic view, then to defend a strong population asymmetry (the view emphasized in the post's title), you need reasons why none of the components of your pluralistic view value making happy people.* This gets harder the more pluralistic you are, especially if you can't easily generalize hedonic arguments to other values. As you suggest, you can get the needed reasons by introducing additional assumptions/frameworks, like rejecting the principle that it's better for there to be more good things. But I wouldn't call that an "easy fit"; that's substantial additional argument, sometimes involving arguing against views that many readers of this forum find axiomatically appealing (like that it's better for there to be more good things).

(* Technically you don't need reasons why none of the views consider the making of happy people valuable, just reasons why overall they don't. Still, I'd guess those two claims are roughly equivalent, since I'm not aware of any prominent views which hold the creation of purely happy people to be actively bad.)

Besides that, I think at this point we're largely in agreement on the main points we've been discussing?

* I've mainly meant to argue that some of the ethical frameworks that the original post draws on and emphasizes, in arguing for a population asymmetry, have implications that many find very counterintuitive. You seem to agree.
* If I've understood, you've mainly been arguing that there are many other views (including some that the original post draws on) which support a population asymmetry while avoiding certain counterintuitive implications. I agree.
* Your most recent comment seems to frame several arguments for this point as arguments against the first bullet point above, but I don't think they're actually arguments against the above, since the views you're defending aren't the ones my most-discussed criticism applies to.
4
Lukas_Gloor
2y
Thanks for elaborating! I agree I misunderstood your point here. (I think preference-based views fit neatly into the asymmetry. For instance, Peter Singer initially weakly defended an asymmetric view in Practical Ethics, as arguably the most popular exponent of preference utilitarianism at the time. He only changed his view on population ethics once he became a hedonist. I don't think I'm even aware of a text that explicitly defends preference-based totalism. By contrast, there are several texts defending asymmetric preference-based views: Benatar, Fehige, Frick, a younger version of Singer.)

Or that “(intrinsically) good things” don’t have to be a fixed component in our “ontology” (in how we conceptualize the philosophical option space). Or, relatedly, that the formula “maximize goods minus bads” isn’t the only way to approach (population) ethics. Not because it's conceptually obvious that specific states of the world aren't worthy of taking serious effort (and even risks, if necessary) to bring about. Instead, because it's questionable to assume that "good states" are intrinsically good, that we should bring them about regardless of circumstances, independently of people’s interests/goals.

I agree that we’re mainly in agreement. To summarize the thread, I think we’ve kept discussing because we both felt like the other party was presenting a slightly unfair summary of how many views a specific criticism applies or doesn’t apply to (or applies “easily” vs. applies only with some additional, non-obvious assumptions). I still feel a bit like that now, so I want to flag that out of all the citations from the OP, the NU FAQ is really the only one where it’s straightforward to say that one of the two views within the text – NHU but not NIPU – implies that it would (on some level, before other caveats) be good to kill people against their will (as you claimed in your original comment). From further discussion, I then gathered that you probably meant that specific arg
2
Mau
2y
Fair points! Here I'm moving on from the original topic, but if you're interested in following this tangent--I'm not quite getting how preference-based views (specifically, person-affecting preference utilitarianism) maintain the asymmetry while avoiding (a slightly/somewhat weaker version of) "killing happy people is good."

Under "pure" person-affecting preference utilitarianism (ignoring broader pluralistic views of which this view is just one component, and also ignoring instrumental justifications), clearly one reason why it's bad to kill people is that this would frustrate some of their preferences. Under this view, is another (pro tanto) reason why it's bad to kill (not-entirely-satisfied) people that their satisfaction/fulfillment is worth preserving (i.e. is good in a way that outweighs associated frustration)?

My intuition is that one answer to the above question breaks the asymmetry, while the other revives some very counterintuitive implications.

* If we answer "Yes," then, through that answer, we've accepted a concept of "actively good things" into our ethics, rejecting the view that ethics is just about fixing states of affairs that are actively problematic. Now we're back in (or much closer to?) a framework of "maximize goods minus bads" / "there are intrinsically good things," which seems to (severely) undermine the asymmetry.
* If we answer "No," on the grounds that fulfillment can't outweigh frustration, this would seem to imply that one should kill people whenever their being killed would frustrate them less than their continued living. Problematically, that seems like it would probably apply to many people, including many pretty happy people.
  * After all, suppose someone is fairly happy (though not entirely, constantly fulfilled), is quite myopic, and only has a moderate intrinsic preference against being killed. Then, the preference utilitarianism we're considering seems to endorse killing them (since killing them would "only" frustrate
2
Lukas_Gloor
2y
I would answer "No." The preference against being killed is as strong as the happy person wants it to be. If they have a strong preference against being killed, then the preference frustration from being killed would be a lot worse than the preference frustration from an unhappy decade or two – it depends how the person herself would want to make these choices. I haven't worked this out as a formal theory but here are some thoughts on how I'd think about "preferences." (The post I linked to primarily focuses on cases where people have well-specified preferences/goals. Many people will have under-defined preferences and preference utilitarians would also want to have a way to deal with these cases. One way to deal with under-defined preferences could be "fill in the gaps with what's good on our experience-focused account of what matters.")

[the view that intrinsically positive lives do not exist] implies that there wouldn't be anything wrong with immediately killing everyone reading this, their families, and everyone else, since this supposedly wouldn't be destroying anything positive.

This is not true. The view that killing is bad and morally wrong can be, and has been, grounded in many ways besides reference to positive value.[1]

First, there are preference-based views according to which it would be bad and wrong to thwart preferences against being killed, even as the creation and satisfaction of preferences does not create positive value (cf. Singer, 1980; Fehige, 1998). Such views could imply that killing and extinction would overall be bad.

Second, there are views according to which death itself is bad and a harm, independent of — or in addition to — preferences against it (cf. Benatar, 2006, pp. 211-221).

Third, there are views (e.g. ideal utilitarianism) that hold that certain acts such as violence and killing, or even intentions to kill and harm (cf. Hurka, 2001; Knutsson, 2022), are themselves disvaluable and make the world worse.

Fourth, there are nonconsequentialist views according to which we have moral duties... (read more)

9
Mau
2y
Thanks for the thoughtful reply; I've replied to many of these points here. On a few other ends:

* I agree that strong negative utilitarian views can be highly purposeful and compassionate. By "semi-nihilistic" I was referring to how some of these views also devalue much (by some counts, half) of what others value. [Edit: Admittedly, many pluralists could say the same to pure classical utilitarians.]
* I agree classical utilitarianism also has bullets to bite (though many of these look like they're appealing to our intuitions in scenarios where we should expect to have bad intuitions, due to scope insensitivity).

edit: I wrote this comment before I refreshed the page and I now see that these points have been raised!

Thanks for flagging that all ethical views have bullets to bite and for pointing at previous discussion of asymmetrical views!

However, I'm not really following your argument.

Several of your arguments are arguments for the view that "intrinsically positive lives do not exist,"  [...] It implies that there wouldn't be anything wrong with immediately killing everyone reading this, their families, and everyone else, since this supposedly wouldn't be destroying anything positive.

  • This doesn't necessarily follow, as Magnus explicitly notes that "many proponents of the Asymmetry argue that there is an important distinction between the potential value of continued existence (or the badness of discontinued existence) versus the potential value of bringing a new life into existence." So given that everyone reading this already exists, there is in fact potential positive value in continuing our existences.
  • However, I may have missed some stronger views that Magnus mentions that would lead to this implication. The closest I can find is when Magnus writes, some "views of wellbeing likewise
... (read more)
7
Mau
2y
Thanks for the thoughtful reply; I've replied to many of these points here. In short, I think you're right that Magnus doesn't explicitly assume consequentialism or hedonism. I understood him to be implicitly assuming these things because of the post's focus on creating happiness and suffering, as well as the apparent prevalence of these assumptions in the suffering-focused ethics community (e.g. the fact that it's called "suffering-focused ethics" rather than "frustration-focused ethics"). But I should have more explicitly recognized those assumptions and how my arguments are limited to them.

I understand that you feel that the asymmetry is true & important, but despite your arguments to the contrary, it still feels like it is a pretty niche position, and as such it feels ok not to have addressed it in a popular book.

Edit: Nope, a quick poll reveals this isn't the case, see this comment.

The Procreative Asymmetry is very widely held, and much discussed, by philosophers who work on population ethics (and seemingly very common in the general population). If anything, it's the default view, rather than a niche position (except among EA philosophers). If you do a quick search for it on philpapers.org there's quite a lot there. 

You might think the Asymmetry is deeply mistaken, but describing it as a 'niche position' is much like calling non-consequentialism a 'niche position'. 

The Asymmetry is certainly widely discussed by academic philosophers, as shown by e.g. the philpapers search you link to. I also agree that it seems off to characterize it as a "niche view".

I'm not sure, however, whether it is widely endorsed or even widely defended. Are you aware of any surveys or other kinds of evidence that would speak to that more directly than the fact that there are a lot of papers on the subject (which I think primarily shows that it's an attractive topic to write about by the standards of academic philosophy)?

I'd be pretty interested in understanding the actual distribution of views among professional philosophers, with the caveat that I don't think this is necessarily that much evidence for what view on population ethics should ultimately guide our actions. The caveat is roughly because I think the incentives of academic philosophy don't strongly favor beliefs it'd be overall good to act on, as opposed to views one can publish well about (of course there are things pushing in the other direction as well, e.g. these are people who've thought about it a lot and use criteria for criticizing and refining views that are more widely endorsed, so i... (read more)

I agree with the 'spawned an industry' point and how that makes it difficult to assess how widespread various views really are.

As usual (cf. the founding impetus of 'experimental philosophy'), philosophers haven't really checked whether the intuition is in fact widely held, and recent empirical work casts some doubt on that.

Magnus in the OP discusses the paper you link to in the quoted passage and points out that it also contains findings we can interpret in support of a (weak) asymmetry of some kind. Also, David (the David who's a co-author of the paper) told me recently that he thinks these types of surveys are not worth updating on by much [edit: but "casts some doubt on" is still accurate if we previously believed people would have clear answers that favor the asymmetry] because the subjects often interpret things in all kinds of ways or don't seem to have consistent views across multiple answers. (The publication itself mentions in the "Supplementary Materials" that framing effects play a huge role.)

4
Max_Daniel
2y
Thank you, that's interesting and I hadn't seen this.
6
David_Althaus
2y
(I've now written a comment elaborating on some of these inconsistencies here.)
8
MichaelPlant
2y
This impression strikes me as basically spot on. It would have been more accurate for me to say that the asymmetry is widely held to be an intuitive desideratum for theories of population ethics. It does have its defenders, though, e.g. Frick, Roberts, Bader. I agree that there does not seem to be any theory that rationalises this intuition without having other problems (but this is merely a specific instance of the general case that there seems to be no theory of population ethics that retains all our intuitions - hence Arrhenius' famous impossibility result). I'm not aware of any surveys of philosophers on their views on population ethics. AFAICT, the number of professional philosophers who are experts in population ethics - depending on how one wants to define those terms - could probably fit into one lecture room.

and seemingly very common in the general population

 

So consider the wording in the post:

bringing a miserable life into the world has negative value while bringing a happy life into the world does not have positive value — except potentially through its instrumental effects and positive roles

If we do a survey of 100 Americans on Positly, with that exact wording, what percentage of randomly chosen people do you think would agree? I happen to respect Positly, but I am open to other survey methodologies.

I was intuitively thinking 5% tops, but the fact that you disagree strongly takes me aback a little bit.

Note that I think you were mostly thinking about philosophers, whereas I was mostly thinking about the general population.

I was intuitively thinking 5% tops

I'm surprised you'd have such a low threshold - I would have thought noise, misreading the question, trolling, misclicks etc. alone would push above that level.

2
NunoSempere
2y
You can imagine survey designs which would filter trolls &c, but you're right, I should have been slightly higher based on that.

It might also be worth distinguishing stronger and weaker asymmetries in population ethics. Caviola et al.'s main study indicates that laypeople on average endorse at least a weak axiological asymmetry (which becomes increasingly strong as the populations under consideration become larger), and the pilot study suggests that people in certain situations (e.g. when considering foreign worlds) tend to endorse a rather strong one, cf. the 100-to-1 ratio.

6
NunoSempere
2y
Makes sense.

Wow, I'd have said 30-65% for my 50% confidence interval, and <5% is only about 5-10% of my probability mass. But maybe we're envisioning this survey very differently.

Did a test run with 58 participants (I got two attempted repeats):

So you were right, and I'm super surprised here.

There is a paper by Lucius Caviola et al of relevance:

We found that people do not endorse the so-called intuition of neutrality according to which creating new people with lives worth living is morally neutral. In Studies 2a-b, participants considered a world containing an additional happy person better and a world containing an additional unhappy person worse. 

Moreover, we also found that people's judgments about the positive value of adding a new happy person and the negative value of adding a new unhappy person were symmetrical. That is, their judgments did not reflect the so-called asymmetry—according to which adding a new unhappy person is bad but adding a new happy person is neutral. 

The study design is quite different from Nuno's, though. No doubt the study design matters.

0
MichaelStJules
2y
In 2a, it looks like they didn't explicitly get subjects to try to control for impacts on other people in their question like Nuno did, and (I'm not sure if this matters) they assumed the extra person would be added to a world of a million neutral-life people. They just asked, for each of adding a neutral life, adding a bad life and adding a good life, how good or bad doing so would be. 2b was pretty similar, but used either an empty world or a world of a billion neutral-life people.
4
Stefan_Schubert
2y
2b involves an empty world - where there can't be an effect on other people - and replicates 2a afaict.
7
MichaelStJules
2y
Fair, my mistake. I wonder if the reason for adding the happy person to the empty world is not welfarist, though, e.g. maybe people really dislike empty worlds, value life in itself or think empty worlds lack beauty or something.

EDIT: Indeed, it seems some people preferred adding an unhappy life over not doing so, basically no one preferred not to add a happy life, and people tended to prefer adding a neutral life over not doing so, based on Figure 5 (an answer of 4 means "equally good", above means better and below means worse). Maybe another explanation compatible with welfarist symmetry is that if there's at least one life, good or bad, they expect good lives eventually, and for them to outweigh the bad.

Also, does the question actually answer whether anyone in particular holds the asymmetry, or are they just averaging responses across people? You could have some people who actually give greater weight to adding a happy life to an empty world than to adding a miserable life to an empty world (which seems to be the case, based on Figure 5), along with people holding the standard asymmetry or weaker versions, and they could roughly cancel out in aggregate to support symmetry.
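To illustrate that aggregation worry with made-up numbers (purely hypothetical, not taken from the paper): suppose half the respondents weigh adding a happy life more heavily than adding a miserable one, and the other half hold the standard asymmetry:

$$\begin{aligned} \text{Group A:}\quad & u(\text{happy}) = +2, \quad u(\text{miserable}) = -1 \\ \text{Group B:}\quad & u(\text{happy}) = 0, \quad u(\text{miserable}) = -1 \\ \text{Average:}\quad & u(\text{happy}) = +1, \quad u(\text{miserable}) = -1 \end{aligned}$$

The averages look perfectly symmetric even though not a single respondent holds the symmetric view.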

Words cannot express how much I appreciate your presence Nuno.

Sorry for being off-topic but I just can't help myself. This comment is such a perfect example of the attitude that made me fall in with this community.

9
Pablo
2y
It is "very widely held" by philosophers only in the sense that it is a pre-theoretic intuition that many people, including philosophers, share. It is not "very widely held" by philosophers on reflection.

The intuition seems to be almost universally held. I agree many philosophers (and others) think that this intuition must, on reflection, be mistaken. But many philosophers, even after reflection, still think the procreative asymmetry is correct. I'm not sure how interesting it would be to argue about the appropriate meaning of the phrase "very widely held". Based on my (perhaps atypical) experience, I'd guess that if you polled those who had taken a class on population ethics, only about 10% would agree with the statement "the procreative asymmetry is a niche position".

3
David Mathers
2y
Which version of the intuition? If you just mean 'there is greater value in preventing the creation of a life with X net utils of suffering than in creating a life with X net utils of pleasure', then maybe. But people often claim that 'adding net-happy people is neutral, whilst adding net-suffering people is bad' is intuitive, and there was a fairly recent paper claiming to find that this wasn't what ordinary people thought when surveyed: https://www.iza.org/publications/dp/12537/the-asymmetry-of-population-ethics-experimental-social-choice-and-dual-process-moral-reasoning    I haven't actually read the paper to check if it's any good though...

I upvoted this comment because I think there's something to it.

That said, see the comment I made elsewhere in this thread about the existence of selection effects. The asymmetry is hard to justify for believers in an objective axiology, but philosophers who don't believe in an objective axiology likely won't write paper after paper on population ethics.

Another selection effect is that consequentialists are morally motivated to spread their views, which could amplify consensus effects (even if it applies to consequentialists on both sides of the split, one group being larger and better positioned to start with can amplify the proportions after a growth phase). For instance, before the EA-driven wave of population ethics papers, presumably the field would have been split more evenly?

Of course, if EA were to come out largely against any sort of population-ethical asymmetry, that's itself evidence for (a lack of) convincingness of the position. (At the same time, a lot of EAs take moral realism seriously* and I don't think they're right – I'd be curious what a poll of anti-realist EAs would tell us about population-ethical asymmetries of various kinds and various strengths.)

*I should mention that this includes Magnus, author of the OP. I probably don't agree with his specific arguments for there being an asymmetry, but I do agree with the claim that the topic is underexplored/underappreciated.

3
David Mathers
2y
What exactly do you mean by "have an objective axiology" and why do you think it makes it (distinctively) hard to defend asymmetry? (I have an eccentric philosophical view that the word "objective" nearly always causes more trouble than it's worth and should be tabooed.) 

The short answer:

Thinking in terms of "something has intrinsic value" privileges particular answers. For instance, in this comment today, MichaelPlant asked Magnus the following:

[...] why do we have reason to prevent what is bad but no reason to bring about what is good?"

The comment presupposes that there's "something that is bad" and "something that is good" (in a sense independent of particular people's judgments – this is what I meant by "objective"). If we grant this framing, any arguments for why "create what's good" is less important than "don't create what's bad" will seem ad hoc!

Instead, for people interested in exploring person-affecting intuitions (and possibly defending them), I recommend taking a step back to investigate what we mean when we say things like "what's good" or "something has intrinsic value." I think things are good when they're connected to the interests/goals of people/beings, but not in some absolute sense that goes beyond it. In other words, I only understand the notion of (something like) "conditional value," but I don't understand "intrinsic value."

The longer answer:

Here's a related intuition:

  • There’s a tension between the beliefs “there’s an ob
... (read more)
3
David Mathers
2y
I'm not sure I really follow (though I admit I've only read the comment, not the post you've linked to.) Is the argument something like we should only care about fulfilling preferences that already exist, and adding people to the world doesn't automatically do that, so there's no general reason to add happy people if it doesn't satisfy a preference of someone who is here already? Couldn't you show that adding suffering people isn't automatically bad by the same reasoning, since it doesn't necessarily violate an existing preference? (Also, on the word "objective": you can definitely have a view of morality on which satisfying existing preference or doing what people value is all that matters, but it is mind-independently true that this is the correct morality, which makes it a realist view as academic philosophers classify things, and hence a view on which morality is objective in one sense of "objective". Hence why  I think "objective" should be tabooed.) 
6
Lukas_Gloor
2y
Pretty much, but my point is only that this is a perfectly defensible way to think about population ethics, not that I expect everyone to find it compelling over alternatives. As I say in the longer post: I agree with what you write about "objective" – I'm guilty of violating your advice. (That said, I think there's a sense in which preference utilitarianism would be unsatisfying as a "moral realist" answer to all of ethics because it doesn't say anything about what preferences to adopt. Or, if it did say what preferences to adopt, then it would again be subject to my criticism – what if objective preference utilitarianism says I should think of my preferences in one particular way but that doesn't resonate with me?) I tried to address this in the last paragraph of my previous comment. It gets a bit complicated because I'm relying on a distinction between "ambitious morality" and "minimal morality" ( = "don't be a jerk") which also only makes sense if there's no objective axiology. I don't expect the following to be easily intelligible to people used to thinking within the moral realist framework, but for more context, I recommend the section "minimal morality vs. ambitious morality" here. This link explains why I think it makes sense to have a distinction between minimal morality and ambitious morality, instead of treating all of morality as the same thing. ("Care morality" vs. "cooperation morality" is a similar framing, which probably tells you more about what I mean here.) And my earlier comment (in particular, the last paragraph in my previous comment) already explained why I think minimal morality contains a population-ethical asymmetry.
6
MichaelStJules
2y
I'd guess contractualists and rights-based theorists (less sure about deontologists generally) would normally take the asymmetry to be true, because if someone is never born, there are no claims or rights of theirs to be concerned with. I don't know how popular it is among consequentialists, virtue ethicists or those with mixed views. I wouldn't expect it to be extremely uncommon or for the vast majority to accept it.

I understand that you feel that the asymmetry is true

Just to clarify, I wouldn't say that. :)

and as such it feels ok not to have addressed it in a popular book.

But the book does briefly take up the Asymmetry, and makes a couple of arguments against it. The point I was trying to make in the first section is that these arguments don't seem convincing.

The questions that aren't addressed are those regarding interpersonal outweighing — e.g. can purported goods morally outweigh extreme suffering? Can happy lives morally outweigh very bad lives? (As I hint in the post, one can reject the Asymmetry while also rejecting interpersonal moral outweighing of certain kinds, such as those that would allow some to experience extreme suffering for the pleasure of others, or those that would allow extremely miserable lives to be morally outweighed by a large number of happy lives, cf. Vinding, 2020, ch. 3.) 

These questions do seem of critical importance to our future priorities. Even if one doesn't think that they need to be raised in a popular book that promises a deep dive on population ethics, they at least deserve to be discussed in depth by aspiring effective altruists.

That doesn't seem true to me (see MichaelPlant's comment).

Also, there's a selection effect in academic moral philosophy where people who don't find the concept of "intrinsic value" / "the ethical value of a life" compelling won't go on to write paper after paper about it. For instance, David Heyd wrote one of the earliest books on "population ethics" (the book was called "Genethics" but the term didn't catch on) and argued that it's maybe "outside the scope of ethics." Once you've said that, there isn't a lot else to say. Similarly, according to this comment by peterhartree, Bernard Williams also has issues with the way other philosophers approach population ethics. He argues for his position of reasons anti-realism, which says that there's no perspective external to people's subjective reasons for action that has the authority to tell us how to live.

If you want an accurate count on philosophers' views on population ethics, you have to throw the net wide to include people who looked at the field, considered that it's a bit confused because of reasons anti-realism, and then moved on rather than repeating arguments for reasons anti-realism. (The latter would be a bit boring because you... (read more)

Could a focus on reducing suffering flatten the interpretation of life into a simplistic pleasure/pain dichotomy that does not reflect the complexity of nature? I find it counterintuitive to assume that wild nature is plausibly net negative because of widespread wild animal suffering (WWOTF, p. 213).
