I'm going to struggle to cast a meaningful vote on this, because I find the 'existential risk' terminology as used in the OP more confusing than helpful: it includes non-existential considerations, and in practice it excludes non-extinction catastrophes from a discussion they should very much be part of, in favour of focusing, on heuristic but insufficient grounds, on the events with the highest extinction probability (i.e. AI).
I've argued here that non-extinction catastrophes could be as valuable to work on as immediate extinction events, or more so, even if all we care about is the probability of very long-term survival. For this reason I actually find Scott's linked post extremely misleading: it frames his priorities as 'existential' risk, then pushes people entirely towards working on extinction risk, while giving reasons that would apply just as well to non-extinction GCRs. I gave some alternative terminology here, and while I don't want to insist on my own clunky suggestions, I wish serious discussions would be more precise.
For what it's worth, there used to be an 80k pledge along similar lines. They quietly dropped it several years ago, so you might want to find someone involved in that decision and try to understand why (I suspect, and dimly remember, that it was some combination of non-concreteness and concerns that it might reduce people's other altruistic activity).
I happen to strongly agree that the moral discount rate should be 0, but a) it's still worth acknowledging that as an assumption, and b) I think it's easy for both sides to conflate it with risk-based discounting. You seem to be doing so de facto when you say 'Under that assumption, the work done is indeed very different in what it accomplishes' - this is only true if the risk-based discount rate is also very low. See e.g. Thorstad's Existential Risk Pessimism and the Time of Perils and Mistakes in the Moral Mathematics of Existential Risk for formalisms of why it might not be. I don't agree with his dismissal of a time of perils, but I do agree that the presumption that explicitly longtermist work is actually better for the long term than short-to-medium-term-focused work rests on little more than Pascalian handwaving.
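To make the distinction concrete, here's a toy formalisation (my notation, not Thorstad's): let \(\delta\) be the pure moral discount rate and \(r\) the constant per-period probability of existential catastrophe. The expected value of a stream of welfare \(w_t\) is then roughly

$$\mathbb{E}[V] = \sum_{t=0}^{\infty} \left(\frac{1-r}{1+\delta}\right)^{t} w_t.$$

Even with \(\delta = 0\), a non-negligible \(r\) behaves exactly like a discount rate, so the far future dominates the calculation only if \(r\) is, or can be made, very small - which is why the 'time of perils' assumption does so much work in these arguments.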
I'm confused by your paragraph about insurance. To clarify:
Of course you can disagree about the high risk to flourishing from non-existential catastrophes, but that's going to be a speculative argument about which people might reasonably differ. To my knowledge, no one has made the positive case in depth, and the few people who have looked seriously into our post-catastrophe prospects seem substantially more pessimistic than those who haven't. See e.g.:
The extent to which you think they're the same is going to depend heavily on
Given the high uncertainty around each of those considerations (arguably excluding the first), I think it's too strong to say they're 'not the same at all'. I don't know what you mean by fields only looking into regional disasters - how are you distinguishing those investigations from the fields you mention, which the general public has heard of in large part because a ton of academic and governmental effort has gone into them?
It's difficult if the format requires a one-dimensional sliding scale. I think reasonable positions can be opposed on AI vs other GCRs vs infrastructure vs evidenced interventions, on whether the future (if it exists) is bad or good by default, and perhaps on whether future generations should be morally discounted.