What are the theoretical obstacles to abandoning expected utility calculations for extreme, low-probability outcomes like x-risk from a rogue AI system, in order to avoid biting the bullet on Pascal's Mugging? Does Bayesian epistemology really require that we assign a credence to every proposition, and if so, shouldn't we reject the framework in order to avoid fanaticism? It does not seem rational to me to assign credences to, e.g., the success of specific x-risk mitigation interventions when so many unknown unknowns govern the eventual outcome.
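To make my worry concrete, here is the structure of the mugging in expected-utility terms (the symbols are just my illustration: $p$ is the credence in the mugger's claim, $X$ the promised payoff in utility, $c$ the cost of complying):

$$\mathbb{E}[U(\text{pay})] - \mathbb{E}[U(\text{refuse})] = p \cdot X - c > 0 \quad \text{whenever } X > c/p,$$

so as long as I assign any nonzero credence $p$, the mugger can simply name a payoff $X$ large enough to make paying come out ahead.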
I hope you can help me sort out this confusion.
I think timidity, as described in your first link, e.g. with a bounded social welfare function, is basically okay, but accepting it is a matter of intuition (similarly, discomfort with Pascalian problems is a matter of intuition). However, it does mean giving up separability in probabilistic cases, and depending on the details it may still end up supporting x-risk reduction.
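For intuition, here's a minimal sketch of how a bounded welfare/utility function defuses the mugging. The exponential form, the cap, and all the numbers are my own illustrative assumptions, not anything from the linked discussion:

```python
import math

def bounded_utility(value, cap=1.0, scale=1e9):
    """Bounded utility: approaches `cap` asymptotically as value grows.
    The exponential form is just one illustrative choice."""
    return cap * (1 - math.exp(-value / scale))

def expected_utility(prospects):
    """prospects: list of (probability, value) pairs."""
    return sum(p * bounded_utility(v) for p, v in prospects)

# Pascal's mugging: tiny probability of an astronomically large payoff.
mugging = [(1e-50, 1e100), (1 - 1e-50, 0.0)]
# A modest but likely benefit.
modest = [(0.9, 1e9), (0.1, 0.0)]

print(expected_utility(mugging))  # ~1e-50: the cap tames the huge payoff
print(expected_utility(modest))   # ~0.57: the likely prospect wins
```

With unbounded linear utility, the mugging would dominate (1e-50 × 1e100 = 1e50); the bound caps what any single outcome can contribute, so a tiny probability can no longer be swamped by an arbitrarily large payoff.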
I would also recommend these two papers by Christian Tarsney:

- The Epistemic Challenge to Longtermism: https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/
- Exceeding Expectations: Stochastic Dominance as a General Decision Theory: https://globalprioritiesinstitute.org/christian-tarsney-exceeding-expectations-stochastic-dominance-as-a-general-decision-theory/
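Since the second paper's central tool is first-order stochastic dominance, here is the textbook check for finite lotteries as a runnable sketch (my own illustration of the standard definition, not code from the paper):

```python
def first_order_dominates(lottery_a, lottery_b):
    """Return True if lottery_a first-order stochastically dominates lottery_b.

    A dominates B iff P(A <= x) <= P(B <= x) for every outcome x,
    with strict inequality somewhere. Lotteries are dicts {outcome: probability}.
    """
    outcomes = sorted(set(lottery_a) | set(lottery_b))
    cdf_a = cdf_b = 0.0
    strict = False
    for x in outcomes:
        cdf_a += lottery_a.get(x, 0.0)
        cdf_b += lottery_b.get(x, 0.0)
        if cdf_a > cdf_b + 1e-12:  # A's CDF must never exceed B's
            return False
        if cdf_a < cdf_b - 1e-12:
            strict = True
    return strict

# A gives a strictly better chance of the high outcome.
a = {0: 0.4, 10: 0.6}
b = {0: 0.5, 10: 0.5}
print(first_order_dominates(a, b))  # True
print(first_order_dominates(b, a))  # False
```

As I understand it, the appeal of Tarsney's proposal is that this relation is much weaker than expected-utility maximization, so taking it as the basic criterion avoids building fanaticism in from the start.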
Also, questions of fanaticism may be relevant for these x-risks, since it's not the absolute probability of the risks that matters, but the difference your intervention can make. There's also ambiguity, since it's possible to do more harm than good, by increasing the risk instead or by increasing other risks (e.g. reducing extinction risks may increase s-risks, and you may be morally uncertain about how to weigh these against each other).
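To spell that out (the notation is just mine): what enters the expected-value comparison is the change in risk attributable to the intervention,

$$\Delta \mathbb{E}[V] = \big(p_{\text{risk}} - p_{\text{risk}\mid\text{intervene}}\big) \cdot V,$$

where both the size and the sign of the bracketed difference can be uncertain, even when $p_{\text{risk}}$ itself is not astronomically small.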