I think that, given a few generations of expansion to different stars in all directions, it is not implausible (i.e. at least a 25% chance) that X-risk becomes extremely low: say, under 1 in 100,000 per century once there are around 60 colonies with expansion plans of their own, and much lower again once there are 1,000 colonies. After all, we've already survived a million years, most X-risks not from AI seem mostly to apply to single-planet civilizations, and the lightspeed barrier makes it hard for a risk to reach everywhere at once. But I think I agree that thinking through this stuff is very, very hard, and I'm sympathetic to David Thorstad's claim that if we keep finding ways current estimates of the value of X-risk reduction could be wildly wrong, at some point we should just lose trust in current estimates (see here for Thorstad making the claim: https://reflectivealtruism.com/2023/11/03/mistakes-in-the-moral-mathematics-of-existential-risk-part-5-implications/), even though I am a lot less confident than Thorstad is that very low future per-year risk is an "extreme" assumption.
It is disturbing to me how much Thorstad's work on this stuff seems to have been ignored by leading orgs; it is very serious work criticizing key assumptions that they base their decisions on, even if I personally think he tends to push points in his favour a bit far. I assume the same is true of the Rethink report you cite, although, unlike Thorstad's short blog posts, it is long and complicated enough that I haven't read any of it.
"More generally, I am very skeptical of arguments of the form "We must ignore X, because otherwise Y would be bad". Maybe Y is bad! What gives you the confidence that Y is good? If you have some strong argument that Y is good, why can't that argument outweigh X, rather than forcing us to simply close our eyes and pretend X doesn't exist?"
This is very difficult philosophical territory, but I guess my instinct is to draw a distinction between:
a) Ignoring new evidence about what properties something has, because that would overturn your prior moral evaluation of that thing.
b) Deciding that well-known properties of a thing don't contribute towards it being bad enough to overturn the standard evaluation of it, because you are committed to the standard moral evaluation. (Unlike a), this doesn't involve inferring that something has particular non-moral properties from the claim that it is morally good/bad.)
a) always feels dodgy to me, but b) seems like the kind of thing that could be right, depending on how much you should trust judgements about individual cases versus judgements about abstract moral principles. And I think I was only doing b) here, not a).
Having said that, I remember a conversation in grad school in which a faculty member who was probably much better at philosophy than me claimed that even a) is only automatically bad if you assume moral anti-realism.
One reason to be suspicious of taking into account lost potential lives here is that if you always do so, it looks like you might get a general argument for "development is bad". Rich countries have low fertility compared to poor countries, so anything that helps poor countries develop is likely to prevent some people from being born. But it seems pretty strange to think we should wait until we find out how much development reduces fertility before we can decide whether development is good or bad.
A bit of a tangent in the current context, but I have slight issues with your framing here: mechanisms that prevent the federal government from telling the state governments what to do are not necessarily mechanisms that protect individual citizens, although they could be. But equally, if the federal government is more inclined to protect the rights of individual citizens than the state government is, then such mechanisms do the opposite. And sometimes framing it in terms of individual rights is just the wrong way to think about it: for example, if the federal government wants some economic regulation and the state government doesn't, and the regulation has complex costs and benefits that work out well for some citizens and badly for others, then "is it the feds or the state government protecting citizens' rights?" might not be a particularly helpful framing.
This isn't just abstract: historically in the South, it was often the feds who wanted to protect Black citizens and the state governments who wanted to avoid this under the banner of states' rights.
I am biased because Stuart is an old friend, but I found this critique of the idea that social media use causes poor mental health fairly convincing when I read it: https://www.thestudiesshowpod.com/p/episode-25-is-it-the-phones. Though obviously you shouldn't just make your mind up about this based on a single source, and there might be a degree of anti-woke (and therefore anti-anti-tech) bias creeping in.
Interesting: the paper is older than Thorstad's blog posts, but it could still be that people are thinking of this as "the answer".