I'm Anthony DiGiovanni, a suffering-focused AI safety researcher at the Center on Long-Term Risk. I (occasionally) write about altruism-relevant topics on my blog, Ataraxia. All opinions my own.
In "Against neutrality...," he notes that he's not arguing for a moral duty to create happy people, and that it's just good "other things equal." But, given that the moral question under opportunity costs is what practically matters, what are his thoughts on this view: "Even if creating happy lives is good in some (say) aesthetic sense, relieving suffering has moral priority when you have to choose between these." E.g., does he have any sympathy for the intuition that, if you could either press a button that treats someone's migraine for a day or one that creates a virtual world with happy people, you should press the first one?
(I could try to shorten this if necessary, but worry about the message being lost from editorializing.)
I am (clearly) not Tobias, but I'd expect many people familiar with EA and LW would get something new out of Ch 2, 4, 5, and 7-11. Of these, seems like the latter half of 5, 9, and 11 would be especially novel if you're already familiar with the basics of s-risks along the lines of the intro resources that CRS and CLR have published. I think the content of 7 and 10 is sufficiently crucial that it's probably worth reading even if you've checked out those older resources, despite some overlap.
Anecdote: My grad school personal statement mentioned "Concrete Problems in AI Safety" and Superintelligence, though at a fairly vague level about the risks of distributional shift or the like. I got into some pretty respectable programs. I wouldn't take this as strong evidence, of course.
I'm fine with other phrasings, and am also concerned about value lock-in and s-risks, though I think these can be thought of as a class of x-risks.
I'm not keen on classifying s-risks as x-risks because, for better or worse, most people really just seem to mean "extinction or permanent human disempowerment" when they talk about "x-risks." I worry that a motte-and-bailey can happen here, where (1) people include s-risks within x-risks when trying to get people on board with focusing on x-risks, but then (2) their further discussion of x-risks basically equates them with non-s-x-risks. The fact that the "dictionary definition" of x-risks would include s-risks doesn't solve this problem.
e.g. 2 minds with equally passionate complete enthusiasm (with no contrary psychological processes or internal currencies to provide reference points) respectively for and against their own experience, or gratitude and anger for their birth (past or future). They can respectively consider a world with and without their existences completely unbearable and beyond compensation. But if we're in the business of helping others for their own sakes rather than ours, I don't see the case for excluding either one's concern from our moral circle.
But when I'm in a mindset of trying to do impartial good I don't see the appeal of ignoring those who would desperately, passionately want to exist, and their gratitude in worlds where they do.
I don't really see the motivation for this perspective. In what sense, or to whom, is a world without the existence of the very happy/fulfilled/whatever person "completely unbearable"? Who is "desperate" to exist? (Concern for reducing the suffering of beings who actually feel desperation is, clearly, consistent with pure NU, but by hypothesis this is set aside.) Obviously not themselves. They wouldn't exist in that counterfactual.
To me, the clear case for excluding intrinsic concern for those happy moments is:
Another takeaway is that the fear of missing out seems kind of silly. I don’t know how common this is, but I’ve sometimes felt a weird sense that I have to make the most of some opportunity to have a lot of fun (or something similar), otherwise I’m failing in some way. This is probably largely attributable to the effect of wanting to justify the “price of admission” (I highly recommend the talk in this link) after the fact. No one wants to feel like a sucker who makes bad decisions, so we try to make something we’ve already invested in worth it, or at least feel worth it. But even for opportunities I don’t pay for, monetarily or otherwise, the pressure to squeeze as much happiness from them as possible can be exhausting. When you no longer consider it rational to do so, this pressure lightens up a bit. You don’t have a duty to be really happy. It’s not as if there’s a great video game scoreboard in the sky that punishes you for squandering a sacred gift.
...Having said that, I do think the "deeper intuition that the existing Ann must in some way come before need-not-ever-exist-at-all Ben" plausibly boils down to some kind of antifrustrationist or tranquilist intuition. Ann comes first because she has actual preferences (/experiences of desire) that get violated when she's deprived of happiness. Not creating Ben doesn't violate any preferences of Ben's.
certainly don't reflect the kinds of concerns expressed by Setiya that I was responding to in the OP
I agree. In particular, I agree with you that the attempts to accommodate the procreation asymmetry without lexically disvaluing suffering don't hold up to scrutiny. Setiya's critique missed the mark pretty hard; e.g., this part just completely ignores that this view violates transitivity:
But the argument is flawed. Neutrality says that having a child with a good enough life is on a par with staying childless, not that the outcome in which you have a child is equally good regardless of their well-being. Consider a frivolous analogy: being a philosopher is on a par with being a poet—neither is strictly better or worse—but it doesn’t follow that being a philosopher is equally good, regardless of the pay.
appeal to some form of partiality or personal prerogative seems much more appropriate to me than denying the value of the beneficiaries
I don't think this solves the problem, at least if one has the intuition (as I do) that it's not the current existence of the people who are extremely harmed to produce happy lives that makes this tradeoff "very repugnant." It doesn't seem any more palatable to allow arbitrarily many people in the long-term future (rather than the present) to suffer for the sake of sufficiently many more added happy lives. Even if those lives aren't just muzak and potatoes, but very blissful. (One might think that this is "horribly evil" or "utterly disastrous." And it isn't just a theoretical concern, because in practice increasing the extent of space settlement would in expectation both enable many miserable lives and many more blissful lives.)
ETA: Ideally I'd prefer these discussions not involve labels like "evil" at all. Though I sympathize with wanting to treat this with moral seriousness!
I think such views have major problems, but I don’t talk about those problems in the book. (Briefly: If you think that any X outweighs any Y, then you seem forced to believe that any probability of X, no matter how tiny, outweighs any Y. So: you can either prevent a one in a trillion trillion trillion chance of someone with a suffering life coming into existence, or guarantee a trillion lives of bliss. The lexical view says you should do the former. This seems wrong, and I think doesn’t hold up under moral uncertainty, either. There are ways of avoiding the problem, but they run into other issues.)
It really isn't clear to me that the problem you sketched is so much worse than the problems with total symmetric, average, or critical-level axiology, or the "intuition of neutrality." In fact this conclusion seems much less bad than the Sadistic Conclusion or variants of that, which affect the latter three. So I find it puzzling how much attention you (and many other EAs writing about population ethics and axiology generally; I don't mean to pick on you in particular!) devoted to those three views. And I'm not sure why you think this problem is so much worse than the Very Repugnant Conclusion (among other problems with outweighing views), either.
I sympathize with the difficulty of addressing so much content in a popular book. But this is a pretty crucial axiological debate that's been going on in EA for some time, and it can determine which longtermist interventions someone prioritizes.
The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”, “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.
You seem to be using a different definition of the Asymmetry than Magnus is, and I'm not sure it's a much more common one. On Magnus's definition (which is also used by e.g. Chappell; Holtug (2004), "Person-affecting Moralities"; and McMahan (1981), "Problems of Population Theory"), bringing into existence lives that have "positive wellbeing" is at best neutral. It could well be negative.
The kind of Asymmetry Magnus is defending here doesn't imply the intuition of neutrality, and so isn't vulnerable to your critiques like violating transitivity, or relying on a confused concept of necessarily existing people.