In short:
- Bayesianism is largely about how to assign probabilities to things; it is not an ethical/normative doctrine like utilitarianism that tells you how to prioritize your time. And as a (non-naïve) utilitarian will emphasize, when doing so-called “utilitarian calculus” (and related forms of analysis) is less efficient or less effective than relying on intuition, you should rely on intuition.
- Especially when dealing with facially implausible/far-fetched claims about extremely high stakes, I think it’s helpful to fight dubious fire with similarly dubious fire and then trim off the ashes: if someone says “there’s a slight (0.001%) chance that this (weird/dubious) intervention Y could prevent extinction, and that’s extremely important,” you might be able to argue that it is equally or even more likely that doing Y backfires, or that doing Y prevents you from doing intervention Z, which plausibly has a similar (unlikely) chance of preventing extinction. (See the longer illustration below.)
In the end, these two points are not the only things to consider, but I think they tend to be the most neglected/overlooked, whereas the complementary concepts are decently well understood (though I might be forgetting something).
Regarding the second point in more detail: take a classic Pascal's mugging-type situation, like "A strange-looking man in a suit walks up to you and says that he will warp up to his spaceship and detonate a super-mega nuke that will eradicate all life on Earth if and only if you do not give him $50 (which you have in your wallet), but he will give you $3^^^3 tomorrow if and only if you give him the $50." You could technically/formally grant that the chance he is telling the truth is nonzero (e.g., 0.0000000001%) and still abide by expected utility theory, so long as you recognize indistinguishably likely scenarios with the opposite expected value -- for example, the possibility that he will do the exact opposite of what he says if you hand over the money (compare the "philosopher God" response to Pascal's wager), or the possibility that the "true" mega-punisher/rewarder is actually a block down the street, and paying this random lunatic leaves you without the $50 for the true one (compare the "other religions" response to the narrow, Christianity-specific Pascal's wager). More realistically, that $50 might be better donated to an X-risk charity. Add in the fact that stopping to think through the entire situation wastes time you could be using to help avert catastrophes in some other way (e.g., earning money to donate to X-risk charities), and you have a pretty strong case for not entertaining the fantasy for even a few seconds, and thus not getting paralyzed by a naive application of expected value theory.
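To make the symmetry argument concrete, here is a minimal sketch in Python (all numbers hypothetical, chosen only for illustration) of how admitting an equally likely offsetting scenario collapses the naive expected value calculation back to an ordinary $50 loss:

```python
# Hypothetical numbers throughout; the point is the symmetry, not the values.
p = 1e-12        # supposed tiny probability the mugger is telling the truth
payoff = 1e30    # stand-in for the promised reward ($3^^^3 is unimaginably larger)
cost = 50        # the $50 in your wallet

# Naive expected value: only the mugger's stated scenario is considered.
naive_ev = p * payoff - cost
print(f"naive EV of paying: {naive_ev:+.3e}")  # ~ +1e18, so "pay up"

# Symmetric view: admit an equally likely opposite scenario (he does the
# reverse of what he says, or paying him forfeits the "true" rewarder's offer).
symmetric_ev = p * payoff - p * payoff - cost
print(f"EV with offsetting scenario: {symmetric_ev:+.3e}")  # just -$50
```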
I think timidity, as described in your first link (e.g., with a bounded social welfare function), is basically okay, but it's a matter of intuition (similarly, discomfort with Pascalian problems is a matter of intuition). However, it does mean giving up separability in probabilistic cases, and depending on the details it may instead support x-risk reduction.
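For concreteness, here is a minimal sketch of the bounded-welfare idea (my own illustration with an arbitrary atan-based bound, not taken from the linked paper): once utility saturates, an astronomical payoff can no longer swamp a tiny probability.

```python
import math

def bounded_utility(x, scale=1e6):
    # A simple bounded ("timid") welfare function: atan saturates at +/- pi/2,
    # so no payoff, however astronomical, can escape the bound.
    return math.atan(x / scale)

p = 1e-12           # supposed probability the mugger is truthful
huge_payoff = 1e30  # stand-in for the promised reward

# Unbounded (linear) utility: the tiny probability is swamped by the payoff.
print(p * huge_payoff - 50)  # ~ +1e18: the wager "dominates"

# Bounded utility: the payoff saturates near pi/2, so the expected gain from
# paying (~1.6e-12) no longer outweighs the sure utility loss of the $50.
print(p * bounded_utility(huge_payoff) - bounded_utility(50))  # ~ -5e-5
```

Any monotone bounded function would do here; the bound itself is what breaks the mugger's leverage, at the cost of the separability noted above.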
I would also recommend https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/ and https://globalprioritiesinstitute.org/christian-tarsney-exceeding-expectations-sto...