There seem to be two main framings emerging from recent AGI x-risk discussion: "default doom, given AGI" and "default we're fine, given AGI".
I'm interested in what people who have a low p(doom|AGI) think are the reasons that things will basically be fine once we have AGI (or TAI, PASTA, ASI). What mechanisms are at play? How is alignment solved so that there are zero failure modes? Can we survive despite imperfect alignment? How? Is alignment moot? Will physical limits be reached before there is too much danger?
If your p(doom|AGI) is high enough to make you very concerned, but you're still only at ~1-10%, what do you think is happening in the other 90-99%?
Added 22Apr: I'm also interested in detailed scenarios and stories spelling out how things go right post-AGI. There are plenty of stories and scenarios illustrating doom. Where are the similar stories illustrating how things go right? There is the FLI World Building Contest, but that took place in the pre-GPT-4+AutoGPT era. The winning entry has everyone acting far too sensibly in terms of self-regulation and restraint. Given the fervour over AutoGPT, I think we can now say that such restraint is highly unlikely.
The framing here seems really strange to me. You seem to have a strong prior that doom happens, while, to me, most arguments for doom require quite a few hypotheses to be true, and hence their conjunction is a priori unlikely. I guess I don't find the inside-view arguments persuasive enough to warrant a major update, much like the median AI expert, whose p(doom) is around 2%.
To address your questions specifically:
"AGI is closer to a very intelligent human than to a naive optimiser."
I don't see why this is required; I'm not arguing that p(doom) is 0.
"AGI either can't or 'chooses' not to cause an x-risk."
This seems like a case of different prior distributions. I think it's a specific hypothesis to say that strong optimisers won't happen (i.e. there would have to be a specific reason for them not to arise; otherwise, they are the default, for convergent instrumental reasons).