TL;DR: I'm curious what the most detailed or strongly-evidenced arguments are in favour of extinction risk eventually falling to extremely low levels.
An argument I often see goes to the effect of 'we have a lot of uncertainty about the future, and given that it seems hard to be >99% confident that humanity will last <1 billion years'. As written, this seems like a case of getting anchored by percentages and failing to process just how long one billion years really is (weak supporting evidence for the latter is that I sometimes see eerily similar estimates for one million years...). Perhaps this is my finance background talking, but I can easily imagine a world where the dominant way to express probability is basis points, and our go-to probability for 'very unlikely thing' was 1 bp rather than 1%, which is 100x smaller. Or we could have a generic probability analogue of micromorts, which are 100x smaller still, etc. Yet such choices of language shouldn't affect our decisions or beliefs about the best thing to do.
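To make the unit point concrete, here's a minimal sketch (plain Python; the unit choices are just illustrative) of how the conventional 'small' probability shifts by factors of 100 depending on the default unit, even though nothing about the underlying events changes:

```python
# The go-to "very unlikely" probability under three unit conventions differs by
# factors of 100, even though the choice of unit says nothing about the world:
conventions = {
    "1 percent": 1e-2,
    "1 basis point": 1e-4,
    "1 per million (micromort-like)": 1e-6,
}
for name, p in conventions.items():
    print(f"{name:32s} -> probability {p:.6f}, i.e. 1 in {1 / p:,.0f}")
```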
On the object level, one type of event I'm allowed to be extremely confident about is a large conjunction of events; if I flip a fair coin 30 times, the chance of getting 30 heads is approximately one in a billion.
Humanity surviving for a long time has a similar property. Suppose you think that civilisation has a 50% chance of making it through the next 10,000 years, then conditional on that a 50% chance of making it through the following 20,000 years, then 50% for the 40,000 years after that, etc. (applying a common rule of thumb for estimating uncertain lifetimes, starting from the observation that civilisation has been around for ~10,000 years so far). Then the odds of surviving a billion years come out somewhere between 1 in 2^16 and 1 in 2^17, AKA roughly 0.001%.
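As a quick sanity check on both numbers, here's a short sketch (plain Python; the doubling schedule is just the rule of thumb above, not a claim about actual hazard rates):

```python
# 30 fair coin flips all landing heads:
print(f"P(30 heads) = {0.5 ** 30:.2e}")  # ~9.3e-10, roughly one in a billion

# Lindy-style doubling: 50% to survive a period equal to the current track record,
# then 50% for a period twice as long, and so on, until we pass one billion years.
age = 10_000        # years civilisation has existed so far
p_survive = 1.0
halvings = 0
while age < 1_000_000_000:
    age *= 2        # each doubling of the track record is given 50% odds
    p_survive *= 0.5
    halvings += 1

print(f"{halvings} halvings -> P = {p_survive:.2e}")
# 17 halvings -> P = 7.6e-06; one billion years falls between the 16th and 17th
# doubling, hence "between 1 in 2^16 and 1 in 2^17", i.e. roughly 0.001%
```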
We could also try to estimate current extinction risk directly based on known risks. Most attempts I've seen at this suggest that 50% to make it through the next 10,000 years, AKA roughly 0.007% per year, is very generous. As I see it, this is because an object-level analysis suggests the risks are rising, not falling as the Lindy rule would imply.
When I've expressed this point to people in the past, I tend to get very handwavy (non-numeric) arguments about how a super-aligned AI could dramatically cut existential risk to the levels required; another way of framing the above is that, to envisage a plausible future where we then have >1 billion years in expectation, annualised risk needs to fall below 0.0000001% in that future. Another thought is that space colonization could make humanity virtually invincible. So partly I'm wondering if there's a better-developed version of these arguments that accounts for the risks which would remain, or other routes to the same conclusion, since this assumption of a large future in expectation seems critical to a lot of longtermist thought.
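For the annualised figures above, here's a small sketch under the simplifying assumption of a constant annual hazard (so the expected future lifetime is roughly 1 / annual risk):

```python
# 50% survival over the next 10,000 years implies roughly this constant annual risk:
annual_risk_10k = 1 - 0.5 ** (1 / 10_000)
print(f"annual risk for 50% per 10,000 years: {annual_risk_10k:.3%}")  # ~0.007%

# For an expected future of one billion years under a constant hazard,
# annual risk has to be about one in a billion:
annual_risk_1bn = 1 / 1_000_000_000
print(f"annual risk for a 1bn-year expectation: {annual_risk_1bn:.7%}")  # 0.0000001%
```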
Some scattered thoughts (sorry for such a long comment!). Organized in order rather than by importance - I think the most important argument for me is the analogy to computers.
Thanks for the long comment, this gives me a much richer picture of how people might be thinking about this. On the first two bullets:
You say you aren't anchoring, but in a world where we defaulted to expressing probability in 1/10^6 units called Ms, I'm just left feeling like you would write "you should be hesitant to assign 999,999M+ probabilities without a good argument. The burden of proof gets stronger and stronger as you move closer to 1, and 1,000,000 is getting to be a big number." So if it's not anchoring, what calculation or intuition is leading you to specifically 99% (or at least, something in that ballpark), and would it similarly lead you to roughly 990,000M with the alternate language?
My reply to Max and your first bullet both give examples of cases in the natural world where probabilities of real future events would go way outside the 0.01% - 99.99% range. Conjunctions force you to have extreme confidence somewhere; the only question is where. If I try to steelman your claim, I think I end up with an idea that we should apply our extreme confidence to the thing inside the product, due to correlated causes, rather than the thing outside; does that sound fair?
The rest I see as an attempt to justify the extreme confidences inside the product, and I'll have to think about more. The following are gut responses:
I'm not sure which step of this you get off the boat for
I'm much more baseline cynical than you seem to be about people's willingness and ability to actually try, and try consistently, over a huge time period. To give some idea, I'd probably have assigned <50% probability to humanity surviving to the year 2150, and <10% for the year 3000, before I came across EA. Whether that's correct or not, I don't think it's wildly unusual among people who take climate change seriously*, and yet we almost certainly aren't doing enough to combat that as a society. This gives me little hope for dealing with <10% threats that will surely appear over the centuries, and as a result I found and continue to find the seemingly-baseline optimism of longtermist EA very jarring.
(Again, the above is a gut response as opposed to a reasoned claim.)
Applying the rule of thumb for estimating lifetimes to "the human species" rather than "intelligent life" seems like it's doing a huge amount of work.
Yeah, Owen made a similar point, and actually I was using civilisation rather than 'the human species', which makes for a track record ~20x shorter still. I honestly hadn't thought about intelligent life as a possible class before, and that's probably the thing from this conversation with the best chance of changing how I think about this.
*"The survey from the Yale Program on Climate Change Communication found that 39 percent think the odds of global warming ending the human race are at least 50 percent. "
I roughly think that there simply isn't very strong evidence for this. I.e. I think it would be mistaken to have a highly resilient large credence in extinction risk eventually falling to ~0.0000001%, humanity or its descendants surviving for a billion years, or anything like that.
[ETA: Upon rereading, I realized the above is ambiguous. With "large" I was here referring to something stronger than "non-extreme". E.g. I do think it's defensible to believe that, e.g. "I'm like 90% confident that over the next 10 years my credence in information-based civilization surviving for 1 billion years won't fall below 0.1%", and indeed that's a statement I would endorse. I think I'd start feeling skeptical if someone claimed there is no way they'd update to a credence below 40% or something like that.]
I think this is one of several reasons for why the "naive case" for focusing on extinction risk reduction fails. (Another example of such a reason is the fact that, for most known hazards, collapse short of extinction seems way more likely than immediate extinction, that as a consequence most interventions affect both the probability of extinction and the probability and trajectory of various collapse scenarios, and that the latter effect might dominate but has unclear sign.)
I think the most convincing response is a combination of the following. Note, however, that the last two mostly argue that we should be longtermists despite the case for billion-year futures being shaky, rather than defending that case itself.
That all being said, my views on this feel reasonably but not super resilient - like there's "only" a 10% chance I'll have changed my mind about this in major ways within 2 years. I also think there is room for more work on how best to think about such questions (the Ord et al. paper is a great example), e.g. checking that this kind of reasoning doesn't "prove too much" or lead to absurd conclusions when applied to other cases.
Thanks for this. I won't respond to your second/third bullets; as you say it's not a defense of the claim itself, and while it's plausible to me that many conclusions go through on much shorter timelines, I still want to understand the basis for the actual arguments made as best I can. Not least because if I can't defend such arguments, then my personal pitches for longtermism (both to myself and to others) will not include them; they and I will focus on the next e.g. 10,000 years instead.
On your first bullet:
You are correct that within fixed models we can justifiably have extreme credences, e.g. for the probability of a specific result of 30 coin flips. However, I think the case for "modesty" - i.e. not ruling out very long futures - rests largely on model uncertainty...
...This insight that extremely low credences all-things-considered are often "forbidden" by model uncertainty is basically the point from Ord, Hillerbrand, & Sandberg (2008).
I'll go and read the paper you mention, but flagging that my coinflip example is more general than you seem to think. Probability theory has conjunctions even outside of simple fixed models, and it's the conjunction, not the fixed model, which is forcing you to have extreme credences. At best, we may be able to define a certain class of events where such credences are 'forbidden' (this could well be what the paper tries to do). We would then need to make sure that no such event can be expressed as a conjunction of a very large number of other such events.
Concretely, P(Humanity survives one billion years) is the product of one million probabilities of surviving each millennium, conditional on having survived up to that point. As a result, we either need to set some of the intervening probabilities, like P(Humanity survives the next millennium | Humanity has survived to the year 500,000,000 AD), extremely high, or we need to set the overall product extremely low. Setting everything to the range 0.01% - 99.99% is not an option, without giving up on arithmetic or probability theory. And of course, I could break the product into a billion-fold conjunction where each component was 'survive the next year' if I wanted to make the requirements even more extreme.
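To put a number on 'not an option', here's a minimal sketch: if every one of the million per-millennium conditional survival probabilities is capped at 99.99%, the overall product is already astronomically small, and conversely even a modest overall credence forces near-certain per-millennium survival on average:

```python
# Cap each of the 1,000,000 conditional per-millennium survival probabilities at 99.99%:
upper_bound = 0.9999 ** 1_000_000
print(f"upper bound on P(survive 1bn years): {upper_bound:.1e}")  # ~3.7e-44

# Conversely, an overall 1% chance of surviving a billion years requires a
# geometric-mean per-millennium survival probability of:
per_millennium = 0.01 ** (1 / 1_000_000)
print(f"required per-millennium survival: {per_millennium:.8f}")
# ~0.99999539, i.e. an average per-millennium risk of roughly 0.0005%
```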
Note I think it is plausible such extremes can be justified, since it seems like a version of humanity that has survived 500,000 millennia really should have excellent odds of surviving the next millennium. Indeed, I think that if you actually write out the model uncertainty argument mathematically, what ends up happening is that the fact humanity has survived 500,000 millennia is massive, overwhelming Bayesian evidence that the 'correct' model is one of the ones that makes such a long life possible, allowing you to reach very extreme credences about the then-future. This is somewhat analogous to the intuitive extreme credence most people have that they won't die in the next second.
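Here's a toy version of that Bayesian point, with two hypothetical models whose numbers I've made up purely for illustration: a 'fragile' model with 50% survival per millennium and a 'robust' model with 99.9999%. Observing 500,000 millennia of survival pushes essentially all posterior weight onto the robust model, which then licenses an extreme credence about the next millennium:

```python
# Two hypothetical models of per-millennium survival, with an agnostic 50/50 prior:
p_fragile, p_robust = 0.5, 0.999999
prior_fragile = prior_robust = 0.5

millennia_observed = 500_000

# Likelihood of the observed survival streak under each model:
like_fragile = p_fragile ** millennia_observed  # 2^-500000, underflows to 0.0
like_robust = p_robust ** millennia_observed    # ~0.61

# Posterior weight on the robust model:
post_robust = (like_robust * prior_robust) / (
    like_robust * prior_robust + like_fragile * prior_fragile
)

# Predictive probability of surviving the *next* millennium:
p_next = post_robust * p_robust + (1 - post_robust) * p_fragile
print(f"posterior on robust model:  {post_robust:.6f}")  # ~1.0
print(f"P(survive next millennium): {p_next:.6f}")       # ~0.999999
```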
my coinflip example is more general than you seem to think. Probability theory has conjunctions even outside of simple fixed models, and it's the conjunction, not the fixed model, which is forcing you to have extreme credences. At best, we may be able to define a certain class of events where such credences are 'forbidden' (this could well be what the paper tries to do).
I agree with everything you say in your reply. I think I simply partly misunderstood the point you were trying to make and phrased part of my response poorly. In particular, I agree that extreme credences aren't 'forbidden' in general.
(Sorry, I think it would have been better if I had flagged that I had read your comment and written mine very quickly.)
I still think that the distinction between credences/probabilities within a model and credence that a model is correct is relevant here, for reasons such as:
I acknowledge that I'm making somewhat vague claims here, and that in order to have anything close to a satisfying philosophical account of what's going on I would need to spell out what exactly I mean by "often" etc. (Because as I said I do agree that these claims don't always hold!)
Some fixed models also support macroscopic probabilities of indefinite survival: e.g. if in each generation each individual has a number of descendants drawn from a Poisson distribution with parameter 1.1, then there's a nonzero chance of extinction in each generation, but these chances diminish fast enough (as the population gets enormous) that if you make it through an initial rocky period you're pretty much safe.
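As a rough check on that claim, here's a sketch of the extinction probability in this kind of branching-process model (Poisson offspring with mean 1.1, lineages assumed independent): a single founder's line still dies out with probability ~0.82, but the chance that every line from a modest surviving population dies out falls off extremely fast, which is the 'initial rocky period' point:

```python
import math

def extinction_prob(lam: float, iters: int = 10_000) -> float:
    """Extinction probability of a branching process with Poisson(lam) offspring,
    starting from one individual: the smallest fixed point of q = exp(lam * (q - 1))."""
    q = 0.0
    for _ in range(iters):
        q = math.exp(lam * (q - 1.0))
    return q

q1 = extinction_prob(1.1)
print(f"extinction prob, 1 founder:       {q1:.3f}")            # ~0.824
print(f"extinction prob, 100 founders:    {q1 ** 100:.1e}")     # ~4e-9
print(f"extinction prob, 10,000 founders: {q1 ** 10_000:.0e}")  # 0 (underflows)
```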
That model is clearly too optimistic because it doesn't admit crises with correlated problems across all the individuals in a generation. But then there's a question about how high is the unavoidable background rate of such crises (i.e. ones that remain even if you have a very sophisticated and well-resourced attempt to prevent them).
On current understanding I think the lower bounds for the rate of exogenous such events rely on things like false vacuum decay (and maybe gamma-ray bursts while we're local enough), and those lower bounds are really quite low, so it's fairly plausible that the true rate is really low (though also plausible it's higher because there are risks that aren't observed/understood).
Bounding endogenous risk seems a bit harder to reason about. I think that you can give kind of fairytale/handwaving existence proofs of stable political systems (which might however be utterly horrific to us). Then it's at least sort of plausible that there would be systems which are simultaneously extremely stable and also desirable.
I won't respond to your second/third bullets; as you say it's not a defense of the claim itself, and while it's plausible to me that many conclusions go through on much shorter timelines, I still want to understand the basis for the actual arguments made as best I can. Not least because if I can't defend such arguments, then my personal pitches for longtermism (both to myself and to others) will not include them; they and I will focus on the next e.g. 10,000 years instead.
To be clear, this makes a lot of sense to me, and I emphatically agree that understanding the arguments is valuable independently from whether this immediately changes a practical conclusion.
One argument goes via something like the reference class of global autopoietic information-processing systems: life has persisted since it started several billion years ago; multicellular life similarly; sexual selection similarly. Sure, species go extinct when they're outcompeted, but the larger systems they're part of have only continued to thrive.
The right reference class (on this story) is not "humanity as a mammalian species" but "information-based civilization as the next step in faster evolution". Then we might be quite optimistic about civilization in some meaningful sense continuing indefinitely (though perhaps not about particular institutions or things that are recognisably human doing so).
If I understand you correctly, the argument is not "autopoietic systems have persisted for billions of years" but more specifically "so far each new 'type' of such systems has persisted, so we should expect the most recent new type of 'information-based civilization' to persist as well".
This is an interesting argument I hadn't considered in this form.
(I think it's interesting because I think the case that it talks about a morally relevant long future is stronger than for the simple appeal to all autopoietic systems as a reference class. The latter include many things that are so weird - like eusocial insects, asexually reproducing organisms, and potentially even non-living systems like autocatalytic chemical reactions - that the argument seems quite vulnerable to the objection that knowing that "some kind of autopoietic system will be around for billions of years" isn't that relevant. We arguably care about something that, while more general than current values or humans as biological species, is more narrow than that.
[Tbc, I think there are non-crazy views that care at least somewhat about basically all autopoietic systems, but my impression is that the standard justification for longtermism doesn't want to commit itself to such views.])
However, I have some worries about survivorship bias: if there was a "failed major transition in evolution", would we know about it? Like, could it be that 2 billion years ago organisms started doing sphexual selection (a hypothetical form of reproduction that's as different from previous asexual reproduction as sexual reproduction is, but also different from the latter), but that this type of reproduction died out after 1,000 years - and similarly for sphexxual selection, sphexxxual selection, ... ? Such that with full knowledge we'd conclude the reverse of your conclusion above, i.e. "almost all new types of autopoietic systems died out soon, so we should expect information-based civilization to die out soon as well"?
(FWIW my guess is that the answer actually is "our understanding of the history of evolution is sufficiently good that together with broad priors we can rule out at least an extremely high number of such 'failed transitions'", but I'm not sure and so I wanted to mention the possible problem.)
If there were lots of failed major transitions in evolution, that would also update us towards there being a greater number of attempted transitions than we previously thought, which would in turn update us positively on information-based civilization emerging eventually, no? Or are you assuming that these would be too weird/different from homo sapiens such that we wouldn't share values enough?
Furthermore, sexual selection looks like a fairly simple and straightforward solution to the problem 'organisms with higher life expectancy don't evolve quickly enough', so it doesn't look like there's a lot of space left for any alternatives.
Here’s a relevant thread from ~5 years ago(!) when some people were briefly discussing points along these lines. I think it illustrates both some similar points and also offers some quick responses to them.
Please do hit 'see in context' to see some further responses there!
And agree, I would also like to further understand the arguments here :)
To answer your linguistic objection directly, I think one reason/intuition I have for not trusting probabilities much above 99% or much below 1% is that the empirical failure rate for the reference class of "fairly decent forecaster considers a novel well-defined question for some time, and then becomes inside-view utterly confident in the result" is likely between 0.1% and 5%.
For me personally, I think the rate is slightly under 1%, including failures from misreading a question (e.g. forgetting the "not") and not understanding the data source.
This isn't decisive (I do indeed say things like giving <0.1% for direct human extinction from nuclear war or climate change this century), but it's a weak outside-view argument for why anchoring on 1%-99% is not entirely absurd, even if we lived in an epistemic environment where basis points or parts-per-million were the default expressions of uncertainty.
Put another way: if the best research to date on how humans assign probabilities to novel well-defined problems is Expert Political Judgement, where political experts' "utter confidence" translates to a ~15% failure rate (and my personal anecdotal evidence lines up with the empirical results), then I'd say something similar about 10-90% being the range of "reasonable" probabilities even if we used percentage-point based language.