Hi! I’m slightly familiar with the literature on existential risk, having read What We Owe The Future and The Precipice a year ago, as well as all of the 80k problem profiles over the last year. I remember being reasonably convinced by their arguments at the time, but I now feel a lot less confident about them, so I figured I would share my thoughts with you guys. Currently, I think that the known risks of extinction over the next hundred years are quite low (less than 0.01%) if AGI is not developed.
As far as I know, natural extinction risks and the risk of extinction from climate change are extremely unlikely. That leaves us with just the anthropogenic risks of pandemics and nuclear war.
My primary reason for thinking existential risk from known threats is low is that, for these two risks, all humanity would need to survive is a single woman, safe in a bunker somewhere, with a large long-term store of male reproductive material and food. With that, it would be possible to repopulate the entire planet over a long enough time period.
It seems like, unless there is an agent which is actively seeking out every human on Earth and killing them or the Earth is literally unlivable for humans for thousands of years, we should expect a reasonably high chance that humanity survives via the use of bunkers.
This is the part where, perhaps, I'm very misinformed.
It seems like, if a pandemic only infected humans, we should expect everyone infected to die and the pandemic to then burn out. After that, people could simply leave the bunkers and humanity would survive.
If the pandemic infected animals too and could easily pass from animals to humans, survivors could live in hazmat suits and disinfect their food before eating it, in which case humanity would still survive.
Lastly, it also seems like, even if a nuclear war triggered a serious nuclear winter, we shouldn’t expect it to last more than a hundred years or so, and, as such, we should expect Earth to eventually become livable again so that people can leave their bunkers.
This post is focused on known risks, but, as an aside, if I had to guess the risk of extinction from unknown risks, I couldn’t see it reasonably being higher than 10%.
What do you guys think about this perspective?

Your arguments seem to basically be "I can't think of how this could kill everyone, therefore it's extremely unlikely." You should assign more probability to the hypothesis that it could happen in a way you didn't think of. For example, did you know about mirror biology?
Your thesis seems to be "non-AI x-risks are very unlikely", but your title says all x-risks are very unlikely. Those are two very different things, since AI is the biggest x-risk.
You're right. I should have mentioned mirror biology. That's definitely the greatest biorisk I know of. That said, it still seems to be a low risk, considering that a significant number of people are actively working on mitigating it and the danger is being taken seriously.
Also, I am already accounting for ways that extinction could occur that I haven't thought of, via the unknown risks I mentioned (which I said could plausibly be as high as 10%). So my thesis should perhaps be adjusted to "the known non-AI risks that are commonly discussed seem really unlikely, although the unknown risks could plausibly be somewhat high." That said, among known existential risks, I don't think I should weigh a risk particularly heavily if every scenario I hear for it almost certainly wouldn't kill everyone on Earth.
I have also changed the title to account for the fact that I'm not referring to AI.
Have they solved the problem yet? How likely are they to solve the problem? How confident are you about that? I don't think you can answer those questions in a way that justifies a 0.01% probability of extinction.