There seem to be two main framings emerging from recent AGI x-risk discussion: "default doom, given AGI" and "default we're fine, given AGI".
I'm interested in what people who have low p(doom|AGI) think are the reasons that things will basically be fine once we have AGI (or TAI, PASTA, ASI). What mechanisms are at play? How is alignment solved so that there are 0 failure modes? Can we survive despite imperfect alignment? How? Is alignment moot? Will physical limits be reached before there is too much danger?
If you have high enough p(doom|AGI) to be very concerned, but you're still only at ~1-10%, what is happening in the other 90-99%?
Added 22Apr: I'm also interested in detailed scenarios and stories spelling out how things go right post-AGI. There are plenty of stories and scenarios illustrating doom; where are the similar stories illustrating how things go right? There is the FLI World Building Contest, but that took place in the pre-GPT-4+AutoGPT era. The winning entry has everyone acting far too sensibly in terms of self-regulation and restraint. Given the fervour over AutoGPT, I think we can now say with high likelihood that this will not happen.
I believe that there's more uncertainty about the future than there was previously.
This means that:
(a) it's hard for me to commit to a doom outcome with high confidence;
(b) it's hard for me to commit to any outcome with high confidence; and
(c) even if I think doom has a <10% chance of happening, that doesn't mean I can articulate what the rest of the probability space looks like.
To be clear, I think that someone with this set of beliefs, even at a 1% chance of doom, should be highly concerned and should want action taken to keep everyone safe from the risks of AI.
I agree with Tyler Cowen that it's hard to predict what will happen, although my argument has a (not mega important) nuance that his blog post doesn't have: the difficulty of prediction is increasing.
A (more important) difference is that I don't commit what Scott Alexander calls the Safe Uncertainty Fallacy. I've encountered that argument from climate sceptics for many years, and have found it infuriating how it is simultaneously a very bad argument and yet can be made to sound sensible.