There seem to be two main framings emerging from recent AGI x-risk discussion: default doom, given AGI, and default we're fine, given AGI.
I'm interested in what people who have low p(doom|AGI) think are the reasons that things will basically be fine once we have AGI (or TAI, PASTA, ASI). What mechanisms are at play? How is alignment solved so that there are 0 failure modes? Can we survive despite imperfect alignment? How? Is alignment moot? Will physical limits be reached before there is too much danger?
If you have high enough p(doom|AGI) to be very concerned, but you're still only at ~1-10%, what is happening in the other 90-99%?
Added 22Apr: I'm also interested in detailed scenarios and stories spelling out how things go right post-AGI. There are plenty of stories and scenarios illustrating doom. Where are the similar stories illustrating how things go right? There is the FLI World Building Contest, but that took place in the pre-GPT-4+AutoGPT era. The winning entry has everyone acting far too sensibly in terms of self-regulation and restraint. Given the fervour over AutoGPT, I think we can now say that, with high likelihood, this level of restraint will not happen.
I am at high P(doom|AGI pre-2035), but not at near-certainty. Say, 75% but not 99.9%.
The reason for that is that I find both "fast takeoff takeover" and "continuous multipolar takeoff" scenarios plausible (with no decisive evidence for one or the other). In "continuous multipolar takeoff", you still get superintelligences running around. However, they would be superintelligent with respect to civilization-2023, but not necessarily with respect to civilization-then. And for the standard, somewhat-well-thought-out AI takeover arguments to apply, you need to be superintelligent with respect to civilization-then.
Two disclaimers: (1) Just because you don't get a discontinuity in influence around human level does not mean you can't get one later. In my book, the world can look "Christiano-like" until suddenly it looks "Yudkowsky-like". (2) Even if we never get an AI singleton, things can still go horribly wrong (i.e., Christiano's "What failure looks like"). But imo those scenarios are much harder to reason about, and we haven't thought them out in enough detail to justify high certainty of either outcome.
My intuitive aggregation of this gives, say, 80% P(doom this century|AGI pre-2035). On top of that, I add some 5-10% on "I am so wrong about some of this that even the high-level reasoning doesn't apply" (which includes being wrong about where the burden of proof, and the priors, lie for P(doom|AGI)). And that puts me at the (ass-)number of 75%.
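To make the arithmetic explicit (a rough sketch of how I'm combining the numbers, not a precise model): let w be the weight on "I am so wrong that the reasoning doesn't apply", and p_wrong an illustrative guess at P(doom) conditional on being that wrong. Then roughly

$$P(\text{doom}) \approx (1-w)\cdot 0.80 + w\cdot p_{\text{wrong}} \approx 0.925 \cdot 0.80 + 0.075 \cdot 0.10 \approx 0.75$$

where w ≈ 0.075 is the midpoint of my 5-10% and p_wrong ≈ 0.10 is just a placeholder; the answer is not sensitive to p_wrong as long as it is small.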