
AndreaSR

20 karma · Joined May 2021

Comments (4)

Yeah, I've had the same thought. But as far as I can tell, it still doesn't add up, so I figured there must be something else going on. Thanks for your reply, though.

Thanks for your reply. I'm glad my calculation doesn't seem way off. I still feel it's too obvious a mistake not to have been caught, though, if it were indeed a mistake...

Thanks for your answer. I don't think I understand what you're saying, though. As I understand it, it makes a huge difference to the resource distribution that longtermism recommends: if you take e.g. Bostrom's 10^52 happy lives as the baseline utility, avoiding x-risk becomes vastly more important than if you only consider the 10^10 people alive today. Right?
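To make concrete why the baseline seems to matter so much to me, here is the rough expected-value comparison I have in mind; the reduction in extinction risk of Δp = 10^-6 is just an illustrative number of my own, not a figure from Bostrom:

$$
\begin{aligned}
\text{EV of reducing x-risk by } \Delta p &\approx \Delta p \cdot N \\
N = 10^{52} \text{ (Bostrom's baseline):} \quad 10^{-6} \cdot 10^{52} &= 10^{46} \text{ expected lives} \\
N = 10^{10} \text{ (people alive today):} \quad 10^{-6} \cdot 10^{10} &= 10^{4} \text{ expected lives}
\end{aligned}
$$

On the first baseline the intervention dominates anything we could do for presently existing people by dozens of orders of magnitude; on the second it doesn't obviously do so.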

Thanks for your reply. A follow-up question: when I see the 'cancelling out' argument, I always wonder why it doesn't apply to the x-risk case itself. It seems to me that you could just as easily argue that halting biotech research in order to enter the Long Reflection might backfire in some unpredictable way, or that aiming at Bostrom's utopia would ruin the chances of ending up in a vastly better state that we had never even dreamt of, and so on and so forth.

Isn't the whole case for longtermism so empirically uncertain as to be open to the 'cancelling out' argument as well?


I hope what I'm saying makes sense.