Dangers from our discoveries will pose the greatest long-term risk.
The content of future discoveries cannot be known ahead of time, not even approximately.
Consider Andrew Wiles's proof of Fermat's Last Theorem - he was close to abandoning it, a lifelong obsession, the very morning that he solved it! That morning, Wiles's priors were the most accurate in the world. Not only that: since he was on the cusp of the solution, his priors should have been on the cusp of being correct. And yet...
"Wiles states that on the morning of 19 September 1994, he was on the verge of giving up and was almost resigned to accepting that he had failed... he was having a final look to try and understand the fundamental reasons for why his approach could not be made to work, when he had a sudden insight."
Just prior to his eureka moment, no one was better placed than Wiles to estimate the probability of success, and yet the likelihood was opaque even to him.
It's not just that Wiles was off; he was wildly off. Right when he was closest, he was perhaps at his most despairing.
The reason for this is that a good prediction requires at least a decent model, which means knowing all the inputs. David Deutsch's example in The Beginning of Infinity is Russian roulette - we know all the inputs, and so our predictions make sense.
But when predicting discovery, we have to leave a gap in our model because we do not have all the inputs. We can't call this gap "uncertainty," because uncertainty is something we can measure. With Russian roulette, we know the gun won't fire on every pull of the trigger, and we can estimate how often it will - at the range, or by measuring manufacturing tolerances, and so on. But when something is unknowable, we have no idea how big the gap is. Wiles himself didn't know whether he was moments away or many years off - he was utterly blind to the size of the gap in his own likelihood model.
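Here is a minimal sketch of the contrast, using my own toy numbers rather than anything from Deutsch's book: with Russian roulette the inputs (cylinder size, rounds loaded) are known, so a prediction can be computed and checked against simulation. There is no analogous calculation for the gap left by future discoveries, because its size is not a known input.

```python
# Toy illustration (not from The Beginning of Infinity): when all inputs are
# known, a prediction is meaningful and can be checked empirically.
import random

CHAMBERS = 6   # known input: cylinder size
LOADED = 1     # known input: one round loaded

# Analytic prediction from the known inputs.
p_fire = LOADED / CHAMBERS

# Empirical estimate "at the range": spin the cylinder, pull the trigger, repeat.
trials = 100_000
fires = sum(random.randrange(CHAMBERS) < LOADED for _ in range(trials))

print(f"predicted P(fire) = {p_fire:.3f}")
print(f"simulated P(fire) = {fires / trials:.3f}")
```

No such script can be written for "P(discovery)", because the relevant inputs are exactly what has not yet been discovered.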
This is as it must be with all human events, because even mundane events are driven by discovery. Whether I have coffee or tea this morning depends on how I create an understanding of breakfast, my palate, whether I found a new blend of coffee at the store or got curious about the process of tea cultivation. We could tally up all my previous mornings and call it a probability estimate, but an astrologer can produce an estimate too. Both are equally meaningless, because neither can account for new discoveries.
I think the problem with longtermism is a conflation of uncertainty (which we can factor into our models) with unknowability (which we cannot).
We can predict the future state of the solar system simply from measurements of past states and our understanding of gravity. Unless, of course, humans do something like shove an asteroid out of Earth's path or adjust Mars's orbit to be more habitable. In that case, we wouldn't find evidence for such alterations in any of our prior measurements.
AGI is another example - it is very similar to Fermat's Last Theorem. How big is the gap in our current understanding? Are we nearly there like Wiles on that morning? Or are we staring down a massive gap in our understanding of information theory or epistemology or physics, or all three? Until we cross the gap, its size is unknowable.
How about Malthus? His model didn't account for the discovery of industrial fertilizer and crop breeding. How could he have known ahead of time the size of those contributions?
Two last points. 1) It's meaningless to even speak of these gaps in terms of size. We can't quantify the mental leap required for an insight. Even the phrase "mental leap" is misleading; maybe "flash of insight" is better. We don't know much about the creative process, but it seems more akin to a change in perspective than to the distance of a jump. The latter phrasing contributes to the confusion, since it suggests a kind of labor theory of discovery - X amount of work will produce a discovery of profundity Y.
2) The difficulty of a problem, such as Fermat's Last Theorem or landing on the moon, is itself an attractor, making it almost paradoxically MORE likely to be solved. "We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard."
Any prediction about the future of human events (such as nuclear war or the discovery of AGI) must leave a gap for the role of human discovery, and we cannot know the size of that gap (size itself is meaningless in this context) prior to the discovery, not even close - so any such prediction is actually prophecy.
This was also anticipated by Popper's critique of historicism - "It is logically impossible to know the future course of history when that course depends in part on the future growth of scientific knowledge (which is unknowable in advance)."
I mistakenly included my response to another comment; I'm pasting it below.
Great point - Leo Szilard foresaw nuclear weapons and collaborated with Einstein to persuade FDR to start what became the Manhattan Project. Szilard would have done extremely well in a Tetlock scenario. However, this also fits my point - Szilard was able to predict successfully because he was privy to the relevant discoveries. The remainder of the task was largely engineering (again, not to belittle that work). I think this also applies to superforecasters - they become like Szilard, learning of the relevant discoveries and then foreseeing the engineering steps.
Regarding sci-fi, Szilard appears to have been influenced by HG Wells's The World Set Free, written in 1913. But Wells was not just a writer - he was familiar with the state of atomic physics, and therefore with many of the relevant discoveries; he even dedicated the book to an atomic scientist. And Wells's "atomic bombs" were lumps of a radioactive substance that released energy from a chain reaction, not a huge stretch from what was already known at the time. It's pretty incredible that Szilard is later credited with foreseeing nuclear chain reactions in 1933, shortly after the discovery of the neutron, and he was likely influenced by Wells. So Wells was a great thinker, and this nicely illustrates how knowledge grows: by excellent guesses refined by criticism and experiment. But I don't think we are seeing knowledge of discoveries before they are discovered.
Szilard's prediction in 1939 is very different from a similar prediction made in 1839. Any statement about such weapons in 1839 would be like Thomas Malthus's predictions: made in a state of utter ignorance and unknowability about the eventual discoveries relevant to the forecast (nitrogen fixation and the genetic modification of crops).
And this is also the case for discoveries in the long-term future.
Objections to my post read to me like "but people have forecast things shortly before they appeared." True, but those forecasts already have much of the relevant discoveries factored in, even if that's largely invisible to non-experts.
Szilard must have seemed like a prophet to someone unfamiliar with the state of nuclear physics. You could understand a Tetlock who finds these seeming prophets among us and declares that some amount of prophecy is indeed possible. But to Wells, Szilard was just taking a reasonable step from Wells's idea, which was itself a reasonable step from earlier discoveries.
As for science fiction writers in general, that's interesting. Obviously, selection effects will be strong (stories that turn out to be true become famous), and good science fiction writers are more familiar with the state of the science than others. And finally, it's one thing to make a great guess about the future; it's another thing entirely to quantify the likelihood of that guess - I doubt even Jules Verne would have tried to put a number on the likelihood that submarines would eventually be developed.