What are the theoretical obstacles to abandoning expected utility calculations for extreme outcomes, such as x-risk from a rogue AI system, in order to avoid biting the bullet on Pascal’s Mugging? Does Bayesian epistemology really require that we assign a credence to every proposition, and if so, shouldn’t we reject the framework in order to avoid fanaticism? It does not seem rational to me to assign credences to, e.g., the success of specific x-risk mitigation interventions when so many unknown unknowns govern the eventual outcome.
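To make my worry concrete, here is a toy calculation (the credences and utilities are entirely made up for illustration, not drawn from any real x-risk estimate): once the stakes are allowed to be astronomical, the expected-utility sum is dominated by the tiny-credence term, which is the fanaticism I am pointing at.

```python
# Toy illustration with made-up numbers: a vanishingly small credence
# attached to an astronomical payoff dominates the expected-utility sum.

scenarios = [
    # (credence, utility of acting on the mugger's / x-risk claim)
    (0.999999, -1.0),   # almost certainly we just pay a small cost
    (0.000001, 1e12),   # tiny chance of an astronomical payoff
]

expected_utility = sum(p * u for p, u in scenarios)
print(expected_utility)  # ~999999: the tail term swamps the likely outcome
```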
I hope you can help me sort out this confusion.
It does and we should. I wrote a post about this you might find useful: https://vmasrani.github.io/blog/2021/the_credence_assumption/