antimonyanthony


Comments

Exploring a Logarithmic Tolerance of Suffering

Personally I still wouldn't consider it ethically acceptable to, say, create a being experiencing a -100-intensity torturous life provided that a life with exp(100)-intensity happiness is also created, even after trying hard to account for possible scope neglect. Going from linear to log here doesn't seem to address the fundamental asymmetry. But I appreciate this post, and I suspect quite a few longtermists who don't find stronger suffering-focused views compelling would be sympathetic to a view like this one, and the implications for prioritizing s-risks versus extinction risks seem significant.
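
To spell out the trade-off I have in mind (my reading of the proposal, not a quote from the post):

```latex
% Under a logarithmic tolerance, suffering of intensity s is offset only by
% happiness of intensity h satisfying
\log h \ge s \quad\Longleftrightarrow\quad h \ge e^{s}
% so s = 100 requires h \ge e^{100}, versus h \ge 100 on the usual linear trade-off.
```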

Spears & Budolfson, 'Repugnant conclusions'

But of course the A and Z populations are already impossible, because we already have present and past lives that aren't perfectly equal and aren't all worth living. So--even setting aside possible boundedness on the number of lives--the RC has always fundamentally been about comparing undeniably impossible populations.

I don't find this a compelling response to Guillaume's objection. There seems to be a philosophically relevant difference between the physical impossibility of the populations and the metaphysical impossibility of the axiological objects. We study population ethics because we expect our decisions about the trajectory of the long-term future to approximate the decisions involved in these thought experiments. So the point is that NU would not prescribe actions with the general structure of "choose a future with arbitrarily many torturous lives and a sufficiently large number of lives with slightly more happiness than suffering [regardless of whether we call these positive-utility lives], over a future with arbitrarily many perfectly happy lives," but these other axiologies would. (ETA: As Michael noted, there are other intuitively unpalatable actions that NU would prescribe too. But the whole message of this paper is that we need to distinguish between degrees of repugnance to make progress, and for some, the VRC is more repugnant than the conclusions of NU.)

How to PhD

You will find yourself justifying the stupidest shit on impact grounds, and/or pursuing projects which directly make the world worse.

Could you be a bit more specific about this point? This sounds very field-dependent.

Proposed Longtermist Flag

I downvoted for reasons similar to Stefan's comment: longtermism is not synonymous with a focus on x-risk and space colonization, and the black bar symbolism creates that association. In EA discourse, I have observed consistent conflation of longtermism with this particular subset of longtermist priorities, and I'd like to strongly push back against that. (I believe I would feel the same even if my priorities aligned with that subset.)

On future people, looking back at 21st century longtermism

But we should care about individual orangutans, & it seems plausible to me that they care whether they go extinct. Large parts of their lives are after all centered around finding mates & producing offspring. So to the extent that anything is important to them (& I would argue that things can be just as important to them as they can be to us), surely the continuation of their species/bloodline is.

I'm pretty skeptical of this claim. It's not evolutionarily surprising that orangutans (or humans!) would do stuff that decreases their probability of extinction, but this doesn't mean the individuals "care" about the continuation of their species per se. Seems we only have sufficient evidence to say they care about doing the sorts of things that tend to promote their own (and relatives', proportional to strength of relatedness) survival and reproductive success, no?

Against neutrality about creating happy lives

For NU (including lexical threshold NU), this can mean adding an arbitrarily huge number of new people to hell to barely reduce the suffering for each person in a sufficiently large population already in hell. (And also not getting the very positive lives, but NU treats them as 0 welfare anyway.)

This may be counterintuitive to an extent, but to me it doesn't reach "very repugnant" territory. Misery is still reduced here; an epsilon change of the "reducing extreme suffering" sort, even if barely so, doesn't seem morally frivolous like the creation of an epsilon-happy life or, worse, the creation of an epsilon "roller coaster" life. But I'll have to think about this more. It's a good point, thanks for bringing it to my attention.

Against neutrality about creating happy lives

I am also interested by the claim in this paper that the repugnant conclusion afflicts all population axiologies, including person-affecting views

Not negative utilitarian axiology. The proof relies on the assumption that the utility variable u can be positive.

What if "utility" is meant to refer to the objective aspects of the beings' experience etc. that axiologies would judge as good or bad—rather than to moral goodness or badness themselves? Then I think there are two problems:

  • 1) Supposing it's a fair move to aggregate all these aspects into one scalar, the theorem assumes the function f must be strictly increasing. Under this interpretation the NU function would be f(u) = min(u, 0), which is only weakly increasing, so the theorem's premise fails for NU (see the sketch after this list).
  • 2) I deny that such aggregation is even a reasonable move. Restricting to hedonic welfare for simplicity, it would be more appropriate for f to be a function of two variables, happiness and suffering. Collapsing this into a scalar input, I think, obscures some massive moral differences between different formulations of the Repugnant Conclusion, for example. Interestingly, though, if we formulate the VRC as in that paper by treating all positive values of u as "only happiness, no suffering" and all negative values as "only suffering, no happiness" (thereby making my objection on this point irrelevant), the theorem still goes through for all those axiologies. But not for NU.
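
To make the two readings concrete, here's a minimal sketch in my own notation (W, h, and s are my labels, not the paper's):

```latex
% Point 1: scalar reading. NU's value function is weakly but not strictly increasing,
% since it is constant on u >= 0, so the theorem's strict-monotonicity premise fails:
f(u) = \min(u, 0)

% Point 2: two-variable reading, restricted to hedonic welfare, with happiness h >= 0
% and suffering s >= 0 kept separate; for NU the value of a life depends on s alone:
W(h, s) = -s
```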

Edit: The paper seems to acknowledge point #2, though not the implications for NU:

One way to see that a ε increase could be very repugnant is to recall Portmore’s (1999) suggestion that ε lives in the restricted RC could be “roller coaster” lives, in which there is much that is wonderful, but also much terrible suffering, such that the good ever-so-slightly outweighs the bad. Here, one admitted possibility is that an ε-change could substantially increase the terrible suffering in a life, and also increase good components; such a ε-change is not the only possible ε-change, but it would have the consequence of increasing the total amount of suffering. ... Moreover, if ε-changes are of the “roller coaster” form, they could increase deep suffering considerably beyond even the arbitrarily many [u < 0] lives, and in fact could require everyone in the chosen population to experience terrible suffering.

Against neutrality about creating happy lives

I guess it was unclear that here I was assuming that the creator knows with certainty all the evaluative contents of the life they're creating. (As in the Wilbur and Michael thought experiments.) I would be surprised if anyone disagreed that creating a life you know won't be worth living, assuming no other effects, is wrong. But I'd agree that the claim about lives not worth living in expectation isn't uncontroversial, though I endorse it.

[edit: Denise beat me to the punch :)]

Against neutrality about creating happy lives

[Apologies for length, but I think these points are worth sharing in full.]

As someone who is highly sympathetic to the procreation asymmetry, I have to say, I still found this post quite moving. I’ve had, and continue to have, joys profound enough to know the sense of awe you’re gesturing at. If there were no costs, I’d want those joys to be shared by new beings too.

Unfortunately, assuming that we’re talking about practically relevant cases where creating a "happy" life also entails suffering of the created person and other beings, there are costs in expectation. (I assume no one has moral objections to creating utterly flawless lives, so creating such mixed lives is the sense in which I read "neutrality." See also this comment. Please let me know if I've misunderstood.) And I find those costs qualitatively more serious than the benefits. Let me see if I can convey where I’m coming from.

I found it surprising that you wrote:

I have refrained, overall, from framing the preceding discussion in specifically moral terms — implying, for example, that I am obligated to create Michael, instead of going on my walk. I think I have reasons to create Michael that have to do with the significance of living for Michael; but that’s not yet to say, for example, that I owe it to Michael to create him, or that I am wronging Michael if I don’t.

Because to me this is exactly the heart of the asymmetry. It’s uncontroversial that creating a person with a bad life inflicts on them a serious moral wrong. Those of us who endorse the asymmetry don’t see such a moral wrong involved in not creating a happy life. (If one is a welfarist consequentialist, a fortiori this calls into question the idea that the uncreated happy person is "wronged" in any prudential sense.)

To flesh that out a bit: You acknowledged, in sketching out Michael’s hypothetical life, these pains:

 I see a fight with that same woman, a sense of betrayal, months of regret. … I see him on his deathbed … cancer blooming in his stomach

When I imagine the prospect of creating Michael, these moments weigh pretty gravely. I feel the pang of knowing just how utterly crushing a conflict with the most important person in one’s life can be; the pit in the gut, the fear, shock, and desperation. I haven’t had cancer, but I at least know the fear of death, and can only imagine it gets more haunting when one actually expects to die soon. By all reports, cancer is clearly a fate I couldn’t possibly wish on anyone, and suffering it slowly in a hospital sounds nothing short of harrowing.

I simply can't comprehend creating those moments in good conscience, short of preventing greater pain broadly construed. It seems cruel to do so. By contrast, although Michael-while-happy would feel grateful to exist, it doesn’t seem cruel to me at all to not invite his nonexistent self to the "party," in your words. As you acknowledge, the objection is that "if [he] hadn’t been created, [he] wouldn’t exist, and there would be no one that [my] choice was ‘worse for.’" I don’t see a strong enough reason to think the Michael-while-happy experiences override the Michael-while-miserable experiences, given the difference in moral gravity. It seems cold comfort to tell the moments of Michael that beg for relief, "I’m sorry for the pain I gave you, but it's worth it for the party to come."

I feel inclined, not to "disagree" with them, but rather to inform them that they are wrong

Likewise I feel inclined to inform the Michael-creators that they are wrong in implicitly claiming that the majority vote of Michael-while-happy can override the pleas of Michael-while-miserable. Make no mistake, I abhor scope neglect. But this is no more a question of ignoring numbers than it is for someone who refuses to torture a person for any number of beautiful lifeless planets created in a corner of the universe where no one could ever observe them. It's about prioritizing needs over wants, the tragic over the precious.

Lastly, you mention the golden rule as part of your case. I personally would not want to be forced by anyone - including my past self, who often acts in a state of myopia and doesn't remember how awful the worst moments are - to suffer terribly because they judged it was worth it for the goods in life.

I do of course have some moral uncertainty on this. There are some counterintuitive implications to the view I sketched here. But I wouldn't say this is an unnecessary addition to the hardness of population ethics.

Layman’s Summary of Resolving Pascallian Decision Problems with Stochastic Dominance

While I think this is a fascinating concept, and probably pretty useful as a heuristic in the real, hugely uncertain world, I don't think it addresses the root of the decision-theoretic puzzles here. I - and I suspect most people? - want decision theory to give an ordering over options even assuming no background uncertainty, which SD can't provide on its own. If option A is a 100% chance of -10 utility, and option B is a 50% chance of -10^20 utility and otherwise 0, it seems obvious to me that B is a very, very terrible choice that should not be rationally permitted. But in a world with no background uncertainty, A would not stochastically dominate B.
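
Here's a minimal sketch of that comparison (my own illustration, not from the post; the lottery encoding and function names are mine):

```python
# First-order stochastic dominance between two discrete lotteries,
# encoded as {outcome: probability} dicts.

def cdf(lottery, x):
    """P(outcome <= x) for a discrete lottery."""
    return sum(p for outcome, p in lottery.items() if outcome <= x)

def first_order_dominates(l1, l2):
    """True iff l1 first-order stochastically dominates l2:
    F_1(x) <= F_2(x) at every support point, strictly somewhere."""
    points = sorted(set(l1) | set(l2))
    weakly_better = all(cdf(l1, x) <= cdf(l2, x) for x in points)
    strictly_somewhere = any(cdf(l1, x) < cdf(l2, x) for x in points)
    return weakly_better and strictly_somewhere

A = {-10: 1.0}               # certain loss of 10 utility
B = {-10**20: 0.5, 0: 0.5}   # 50% chance of an astronomically bad outcome, else nothing

print(first_order_dominates(A, B))  # False: at x = -10, F_A(x) = 1 > F_B(x) = 0.5
print(first_order_dominates(B, A))  # False: at x = -10**20, F_B(x) = 0.5 > F_A(x) = 0
```

Neither option dominates the other, so SD alone stays silent on exactly the comparison where I'd want the theory to condemn B.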
