Dangers from our discoveries will pose the greatest long-term risk.
The discovery of new knowledge cannot be known ahead of time, nor even approximated.
Consider Andrew Wiles's proof of Fermat's Last Theorem - he was close to abandoning it, a lifelong obsession, on the very morning that he solved it! That morning, Wiles's priors were the most accurate in the world. Not only that: since he was on the cusp of the solution, his priors should have been on the cusp of being correct. And yet...
"Wiles states that on the morning of 19 September 1994, he was on the verge of giving up and was almost resigned to accepting that he had failed... he was having a final look to try and understand the fundamental reasons for why his approach could not be made to work, when he had a sudden insight."
Just prior to his eureka moment, no one in the world was better placed than Wiles to estimate his probability of success, and yet that likelihood was opaque even to him.
It's not just that Wiles was off; he was wildly off. Right when he was closest, he was perhaps at his most despairing.
The reason for this is that a good prediction requires at least a decent model, which means knowing all the inputs. David Deutsch's example in The Beginning of Infinity is Russian roulette - we know all the inputs, and so our predictions make sense.
But when predicting discovery, we have to leave a gap in our model, because we do not have all the inputs. We can't call this gap "uncertainty," because uncertainty is something we can measure. With Russian roulette, we know that the revolver won't fire every time, and we can estimate how often it will, on the range as well as by measuring manufacturing tolerances and so on. But when something is unknowable, we have no idea how big the gap is. Wiles himself didn't know whether he was moments away or many years off - he was utterly blind to the size of the gap in his own likelihood model.
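A small sketch of that contrast (the misfire rate is a hypothetical figure; only the six chambers come from the example): when every input is known, the uncertainty can be calculated and even simulated, while the gap left by an unknowable discovery has no analogous parameter.

```python
import random

# Russian roulette: every input to the model is known, so the uncertainty is measurable.
CHAMBERS = 6
LOADED = 1
MISFIRE_RATE = 0.01   # hypothetical figure, the kind we could estimate on the range

p_fires = (LOADED / CHAMBERS) * (1 - MISFIRE_RATE)
print(f"P(fires) = {p_fires:.3f}")   # ~0.165

# Because the model is complete, we can even check the estimate by simulation.
trials = 100_000
fires = sum(
    random.randrange(CHAMBERS) < LOADED and random.random() > MISFIRE_RATE
    for _ in range(trials)
)
print(f"simulated  = {fires / trials:.3f}")

# There is no analogous line for the gap left by a future discovery: it has no
# known size, so it cannot be parameterized, estimated, or simulated.
```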
This is as it must be with all human events, because even mundane events are driven by discovery. Whether I have coffee or tea this morning depends on how I create an understanding of breakfast, on my palate, on whether I found a new blend of coffee in the store or got curious about the process of tea cultivation. We could tally up all my previous mornings and call that a probability estimate, but an astrologer can produce an estimate too. Both are equally meaningless, because neither can account for new discoveries.
I think the problem with longtermism is a conflation of uncertainty (which we can factor into our models) with unknowability (which we cannot).
We can predict the future state of the solar system simply from measurements of past states and our understanding of gravity. Unless, of course, humans do something like shove an asteroid out of Earth's path or adjust Mars's orbit to make it more habitable. In that case, we would find no evidence of such alterations in any of our prior measurements.
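As a minimal sketch of that kind of prediction (a toy setup, not a real ephemeris): a past state plus the law of gravity is enough to propagate an orbit forward, and nothing in the model leaves room for a future human decision.

```python
import math

# Propagate Earth's position around the Sun with Newtonian gravity and a crude
# semi-implicit Euler step: past state + known law = prediction.
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30        # kg
AU = 1.496e11           # m

# "Measured" past state: Earth at 1 AU moving at its mean orbital speed.
x, y = AU, 0.0
vx, vy = 0.0, 29_780.0  # m/s

dt = 3600.0                 # one-hour steps
for _ in range(24 * 365):   # integrate one year ahead
    r = math.hypot(x, y)
    ax, ay = -G * M_SUN * x / r**3, -G * M_SUN * y / r**3
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt

print(f"predicted distance after one year: {math.hypot(x, y) / AU:.3f} AU")
# Nothing in this model, or in any past measurement it uses, would anticipate a
# decision to nudge an asteroid or re-engineer an orbit.
```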
AGI is another example - it is very similar to Fermat's Last Theorem. How big is the gap in our current understanding? Are we nearly there like Wiles on that morning? Or are we staring down a massive gap in our understanding of information theory or epistemology or physics, or all three? Until we cross the gap, its size is unknowable.
How about Malthus? His model didn't account for the discovery of industrial fertilizer production or of modern crop breeding. How could he have known, ahead of time, the size of those contributions?
Two last points. 1) It's meaningless even to speak of these gaps in terms of size. We can't quantify the mental leap required for an insight. Even the phrase "mental leap" is misleading; maybe "flash of insight" is better. We don't know much about this creative process, but it seems more akin to a change in perspective than to the distance of a jump. The latter phrasing contributes to the confusion, since it suggests a kind of labor theory of discovery - X amount of work will produce a discovery of profundity Y.
2) The difficulty of a problem, such as Fermat's Last Theorem or landing on the moon, is itself an attractor, making it almost paradoxically MORE likely to be solved. "We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard."
Any prediction about the future of human events (such as nuclear war or the discovery of AGI) must leave a gap for the role of human discovery, and we cannot know the size of that gap (size itself is meaningless in this context) prior to the discovery, not even close - so any such prediction is actually prophecy.
This was also anticipated by Popper's critique of historicism - "It is logically impossible to know the future course of history when that course depends in part on the future growth of scientific knowledge (which is unknowable in advance)."
Well, my basic opinion about forecasting is that probabilities don't inform the person receiving the forecast. Before you commit to weighting possible outcomes, you commit to at least two mutually exclusive futures, X and not-X. So what you supply is a limitation on possible outcomes: either X or not-X. At best, you're aware of mutually exclusive, specific alternative futures, and then you can narrow what not-X means down to something specific, say Y. Now you can say, "The future will contain X or Y." That sort of analysis is enabled by your causal model, and as your causal model improves, it becomes easier to supply a list of alternative future outcomes.
However, the future is not a game of chance, and there's no useful interpretation under which meaningful weights attach to the prediction of any specific future outcome - unless the outcomes do belong to a game of chance, where you're predicting rolls of a fair die, the draw of a hand from a deck of cards, and so on.
What's worse, that doesn't stop you from having feelings about which probabilities apply. Those feelings can seem real and meaningful because they let you talk about lists of outcomes and about which ones you find more credible.
As a forecaster, I might supply outcomes in a forecast that I consider less credible along with those I consider more credible. But if you ask me which options I consider credible, I might offer only a subset of that list. In that way, weights can seem valuable, because they let you distinguish the outcomes you find more credible from the ones you can rule out. But the weights also obscure that information, because they scale that credibility in confusing ways.
For example, suppose I believe in outcomes A or B, but I offer A at 30%, B at 30%, C at 20%, D at 10%, and E at 10%. Have I communicated what I intended with my weights, namely that A and B are credible, that C is somewhat credible, and that D and E are not? Maybe I could adjust A and B to 40% and 40%, but now I'm fiddling with the likelihoods of C, D, and E, when all I really mean to communicate is that I like A or B as outcomes and C as an alternate. My probabilities communicate more than I intend, and differently. I could make the point with A and B at 48% each, or something like that, but now I'm pretending to know what the chances of C, D, and E are, when all I really know about them is that my causal model doesn't do much to support their production. I could go back and try to quantify that somehow, but the information with which to do it is not available, so I have to feign confidence in some estimate of the outcomes C, D, and E. My information is not useless, but it's not relevant to weighting all possible outcomes against each other. If I'm forced to provide weights for every listed outcome, then I'm forced to figure out how to translate my analysis into weights so that the audience for my forecast understands what I intend to mean.
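Here is a minimal sketch of that arithmetic (the helper and the numbers are just the hypothetical ones from the example above): raising the weights on A and B forces a restatement of C, D, and E that my causal model never licensed.

```python
# Keep the weights I actually believe in fixed, and squeeze everything else into
# whatever probability mass is left over.
def rescale_remainder(preferred, others):
    remainder = 1.0 - sum(preferred.values())
    total_others = sum(others.values())
    return {**preferred, **{k: remainder * v / total_others for k, v in others.items()}}

# Original statement: A and B credible, C somewhat, D and E not.
original = {"A": 0.30, "B": 0.30, "C": 0.20, "D": 0.10, "E": 0.10}

# Raising A and B to 0.40 each forces C, D, and E down to 0.10, 0.05, and 0.05 --
# not because my causal model said anything new about them, but because the
# arithmetic of normalization demands a restatement.
adjusted = rescale_remainder({"A": 0.40, "B": 0.40}, {"C": 0.20, "D": 0.10, "E": 0.10})
print(adjusted)
```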
In general, analyzing the causal models that determine possible futures is a distinct activity from weighting those futures. The valuable information is in the causal models and in the selection of futures based on those models. The extra layer of epistemic confidence is not useful and pretends to more information than a forecaster likely has. I would go as far as two tiers of selection, just to qualify what I think my causal model implies:
"A or B, and if not those, then C, but not D or E".
Actually, I think someone reading a forecast with weights will come away with only that kind of information anyway. If they try to apply mathematically the weights I chose to communicate my tiers of selection, they will be led astray, expecting precision where there wasn't any. They would do better to get the details of the causal models involved and determine whether those have any merit, particularly in cases where discovery plays a role - so basically in all cases. What might distinguish superforecasters is not their grasp of probability or their ability to update Bayesian priors or whatever, but rather the applicability of the causal models they develop, and what those causal models emphasize as causes and consequences.
That's the background of my thinking; now here's how I think it relates to what you're saying:
If discoveries influence future outcomes in unknown ways, and your information is insufficient to predict all outcomes, then your causal model makes predictions under an open-world assumption. You are less useful as a predictor of outcomes and more useful as a supplier of possible outcomes. If we are both forecasting, I might supply outcomes A and B; you might supply C and D; someone else might supply E, F, and G; yet another person might supply H. Our forecasts run from A to H so far, and they are not exhaustive. As forecasters, our job becomes to create lists of plausible futures, not to select from predetermined lists.
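A rough sketch of that open-world picture (the forecaster names and outcome labels are just placeholders): combining forecasts is a union of supplied possibilities, and the union never closes.

```python
# Each forecaster contributes the outcomes their own causal model makes plausible.
forecasts = {
    "me":             {"A", "B"},
    "you":            {"C", "D"},
    "someone_else":   {"E", "F", "G"},
    "another_person": {"H"},
}

# Under an open-world assumption, the combined forecast is just the union of what
# has been supplied so far. It is never a complete enumeration, because a future
# discovery can add outcomes that nobody listed.
supplied_so_far = set().union(*forecasts.values())
print(sorted(supplied_so_far))   # ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'] -- so far
```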
I think this is appropriate under conditions where the development of knowledge or of inventions is a human choice. Any forecast will depend not only on what is plausible under some causal model, but also on what future people want to explore and how they explore it. Forecasts in that scenario can influence the future, so it's better that they supply options rather than weight them.