Easily available brain-computer interfaces (BCI) may fuel an epidemic of wireheading, which could result in civilisational decline.
I read on Twitter (so not a very good source) that one of the problems of the Three Gorges Dam is cavitation inside the discharge tubes. Cavitation occurs when the water flow speed exceeds roughly 10 meters per second: the water creates "vacuum bubbles" which later collapse and produce shockwaves able to destroy even the strongest materials. The discharge channels run inside the body of the dam, as we can see in photos, so if there is a problem, they will affect the dam from the inside, without any overtopping. Obviously, such channels could be closed, but this would slow water release and increase the chance that the emergency spillway is used. Such a spillway could itself be fragile (as in the case of the Oroville Dam) and could undermine the dam if damaged.
If they evolve, say, from cats, they will share the same type-values as all mammals: power, sex, love of offspring. But their token-values will be different, as they will love not human children but kittens, etc. An advanced non-human civilization may be more similar to ours than we-now are to Ancient Egypt, as it would have more rational world models.
The article may reflect my immortalist viewpoint: that in almost all circumstances it is better to be alive than not.
Future torture is useless and thus unlikely. Look at humanity: as we mature, we tend to care more about other species that have lived on Earth and about minority cultures. Torture for fun or for experiment is only for those who don't know how to get information or pleasure in other ways. It is unlikely that an advanced civilization would deliberately torture humans. Even if resurrected humans do not have full agency, they may have much better lives than most people on Earth have now.
Reconstruction of the past is universally interesting. We have a mammoth-resurrection project, a lot of archeological studies, the North Sentinel Island uncontacted-tribe preservation program, etc., so we find a lot of value in studying the past, preserving it and reconstructing it, and I think this is natural for advanced civilizations.
The x-risks information will be vital for them before they get superintelligence (while humans could be resurrected after it). Imagine that the Apollo program had found some data storage on the Moon: it would have been one of the biggest scientific discoveries of all time. Some of the information could be useful even for an end-of-20th-century-level humanity, like estimates of the probability of natural pandemics or nuclear wars.
Past data is useful. A future civilization on Earth will get a lot of scientific data from other fields of knowledge: biology, geology; we may even have solved some math problems that they have not yet solved. Moreover, they will get access to an enormous amount of art, which may have fun value (or not).
The resurrection (under good conditions) is, on our side, part of an acausal deal, similar to Parfit's hitchhiker. They may not keep their side of the deal, so there is a risk. Or they may do it much later, after they advance to an interstellar civilization and know that the risk and cost for them are minimal. For example, if they give 0.0001 of all their resources to us but colonize a whole galaxy, that is still 10 million stars under human control, or billions of billions of human beings: much better than extinction.
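A rough back-of-the-envelope check of the numbers above, assuming a Milky-Way-scale galaxy of about 100 billion stars; the people-per-star figure is purely my own illustrative assumption:

```python
# Back-of-the-envelope check of the "0.0001 of a galaxy" estimate.
galaxy_stars = 100e9        # ~10^11 stars in a Milky-Way-scale galaxy
human_share = 0.0001        # fraction of resources given to resurrected humans

stars_for_humans = galaxy_stars * human_share
print(f"{stars_for_humans:.0e} stars")   # 1e+07, i.e. 10 million stars

# Assumed: ~10^11 people supportable per star system (illustrative only)
people_per_star = 1e11
total_people = stars_for_humans * people_per_star
print(f"{total_people:.0e} people")      # 1e+18, i.e. "billions of billions"
```

So the 10-million-stars figure follows directly from the 0.0001 share, while the "billions of billions of human beings" claim additionally depends on how many people each star system can support.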
TL;DR: if there is any value in human existence, it is reasonable to desire the resurrection of humanity (under no-torture conditions); also, they will get useful x-risk information at an earlier stage (the equivalent of the end of the 20th century) than the stage at which they would actually resurrect us (they may do it much later, and only if this information was useful, thus closing the deal).
We could survive by preserving data about humanity (on the Moon or in other places), which would be found by the next civilisation on Earth; they would then recreate humans (based on our DNA) and our culture.
Maybe they are also less detectable, so early-warning systems will not catch them at early stages?
There is an idea of a multipandemic, that is, several pandemics running simultaneously. This would significantly increase the probability of extinction.
Yes, the probabilities of natural catastrophes could be presented as frequentist probabilities, but some estimates are based on the logical uncertainty of claims like "AGI is possible".
Also, are these probabilities conditioned on "all possible prevention measures being taken"? If yes, they are final probabilities, which cannot be made any lower.
Your estimates are presented as numerical values similar to probabilities. Are they actually probabilities, and if so, are they frequentist or Bayesian? And more generally: how can we define the "probability of the end of the world"?