What are the theoretical obstacles to abandoning expected utility calculations for extreme, low-probability outcomes like x-risk from a rogue AI system, so as to avoid biting the bullet on Pascal’s Mugging? Does Bayesian epistemology really require that we assign a credence to every proposition, and if so, shouldn’t we reject the framework in order to avoid fanaticism? It does not seem rational to me to assign credences to, say, the success of specific x-risk mitigation interventions when so many unknown unknowns govern the eventual outcome.
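To make the worry concrete, here is the kind of calculation I have in mind (the numbers are invented purely for illustration):

\[
\mathbb{E}[U(\text{pay the mugger})] \;=\; p \cdot V \;=\; 10^{-20} \times 10^{30} \;=\; 10^{10} \text{ utils},
\]

so as long as the promised payoff \(V\) grows faster than my credence \(p\) shrinks, complying dominates any ordinary action, and the expected-utility maximizer seems forced to pay.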
I hope you can help me sort out this confusion.
Thanks for your reply. A follow-up question: whenever I see the 'cancelling out' argument, I wonder why it doesn't apply to the x-risk case itself. It seems to me that you could just as easily argue that halting biotech research in order to enter the Long Reflection might backfire in some unpredictable way, or that aiming at Bostrom's utopia would ruin our chances of ending up in a vastly better state, one we had never even dreamt of, and so on.
Isn't the whole case for longtermism so empirically uncertain as to be open to the 'cancelling out' argument as well?
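Here is a rough sketch of how I understand the argument (the formalization and notation are my own, so I may be misstating it). For an ordinary intervention \(A\), the defender of longtermism says the unforeseeable indirect effects cancel in expectation:

\[
\mathbb{E}[\Delta U(A)] \;=\; \underbrace{\mathbb{E}[\Delta U_{\text{foreseeable}}(A)]}_{\text{drives the verdict}} \;+\; \underbrace{\mathbb{E}[\Delta U_{\text{unforeseeable}}(A)]}_{\approx\,0 \text{ by symmetry}}.
\]

But if symmetry licenses setting the second term to zero there, I don't see what blocks the same move when \(A\) is an x-risk intervention itself, whose foreseeable effects seem equally swamped by unknown unknowns.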
I hope what I'm saying makes sense.