I'm Anthony DiGiovanni, a suffering-focused AI safety researcher at the Center on Long-Term Risk. I (occasionally) write about altruism-relevant topics on my Substack. All opinions my own.
Given that you can just keep doing better and better essentially indefinitely, and that GPT is not anywhere near the upper limit, talking about the difficulty of the task isn't super meaningful.
I don't understand this claim. Why would the difficulty of the task not be super meaningful when training to performance that isn't near the upper limit?
In "Against neutrality...," he notes that he's not arguing for a moral duty to create happy people, and it's just good "others things equal." But, given that the moral question under opportunity costs is what practically matters, what are his thoughts on this view?: "Even if creating happy lives is good in some (say) aesthetic sense, relieving suffering has moral priority when you have to choose between these." E.g., does he have any sympathy for the intuition that, if you could either press a button that treats someone's migraine for a day or one that creates a virtual world with happy people, you should press the first one?
(I could try to shorten this if necessary, but worry about the message being lost from editorializing.)
I am (clearly) not Tobias, but I'd expect many people familiar with EA and LW would get something new out of Ch 2, 4, 5, and 7-11. Of these, it seems like the latter half of 5, 9, and 11 would be especially novel if you're already familiar with the basics of s-risks along the lines of the intro resources that CRS and CLR have published. I think the content of 7 and 10 is sufficiently crucial that it's probably worth reading even if you've checked out those older resources, despite some overlap.
Anecdote: My grad school personal statement mentioned "Concrete Problems in AI Safety" and Superintelligence, though at a fairly vague level about the risks of distributional shift or the like. I got into some pretty respectable programs. I wouldn't take this as strong evidence, of course.
I'm fine with other phrasings and am also concerned about value lock-in and s-risks, though I think these can be thought of as a class of x-risks
I'm not keen on classifying s-risks as x-risks because, for better or worse, most people really just seem to mean "extinction or permanent human disempowerment" when they talk about "x-risks." I worry that a motte-and-bailey can happen here, where (1) people include s-risks within x-risks when trying to get people on board with focusing on x-risks, but then (2) their further discussion of x-risks basically equates them with x-risks that aren't s-risks. The fact that the "dictionary definition" of x-risks would include s-risks doesn't solve this problem.
e.g. 2 minds with equally passionate complete enthusiasm (with no contrary psychological processes or internal currencies to provide reference points) respectively for and against their own experience, or gratitude and anger for their birth (past or future). They can respectively consider a world with and without their existences completely unbearable and beyond compensation. But if we're in the business of helping others for their own sakes rather than ours, I don't see the case for excluding either one's concern from our moral circle.
...
But when I'm in a mindset of trying to do impartial good I don't see the appeal of ignoring those who would desperately, passionately want to exist, and their gratitude in worlds where they do.
I don't really see the motivation for this perspective. In what sense, or to whom, is a world without the existence of the very happy/fulfilled/whatever person "completely unbearable"? Who is "desperate" to exist? Obviously not the person themselves; they wouldn't exist in that counterfactual. (Concern for reducing the suffering of beings who actually feel desperation is, clearly, consistent with pure NU, but by hypothesis this is set aside.)
To me, the clear case for excluding intrinsic concern for those happy moments is:
Another takeaway is that the fear of missing out seems kind of silly. I don’t know how common this is, but I’ve sometimes felt a weird sense that I have to make the most of some opportunity to have a lot of fun (or something similar), otherwise I’m failing in some way. This is probably largely attributable to the effect of wanting to justify the “price of admission” (I highly recommend the talk in this link) after the fact. No one wants to feel like a sucker who makes bad decisions, so we try to make something we’ve already invested in worth it, or at least feel worth it. But even for opportunities I don’t pay for, monetarily or otherwise, the pressure to squeeze as much happiness from them as possible can be exhausting. When you no longer consider it rational to do so, this pressure lightens up a bit. You don’t have a duty to be really happy. It’s not as if there’s a great video game scoreboard in the sky that punishes you for squandering a sacred gift.
...Having said that, I do think the "deeper intuition that the existing Ann must in some way come before need-not-ever-exist-at-all Ben" plausibly boils down to some kind of antifrustrationist or tranquilist intuition. Ann comes first because she has actual preferences (/experiences of desire) that get violated when she's deprived of happiness. Not creating Ben doesn't violate any preferences of Ben's.
Thanks for asking — you can read more about these two sources of s-risk in Section 3.2 of our new intro to s-risks article. (We also discuss "near miss" there, but our current best guess is that such scenarios are significantly less likely than other s-risks of comparable scale.)