I'm a researcher at GPI. I have broad interests, but often they involve the intersection between bounded rationality and longtermism. I also like dogs.
Happy to chat about global priorities research, Oxford or academia. I have some degree of skepticism about effective altruism. Happy to talk about that too.
I'm one of the editors of the book. Just wanted to confirm everything Toby and Pablo said. It's fully open-access, and about to be sent off for production. So while that doesn't give us a firm release date, realistically we're looking at early 2024 if we're lucky and ... not early 2024 if we're not lucky.
Yep! Honestly, I'm not good at technology -- how do I change domains without making all of my backlinks go dead? [Edit: Sorry, that's probably the wrong term. I mean: all of my blog posts link to other blog posts. Is there a way to transfer domains that preserves all of those links?]
Thanks Dan! As mentioned, to think that cumulative risk is below 1-(10^-8) is to make a fairly strong claim about per-century risk. If you think we're already there, that's great!
Bostrom was actually considering something slightly stronger: the prospect of reducing cumulative risk by a further 10^(-8) from wherever it currently stands. That's going to be hard even if you think that cumulative risk is already lower than I do. So, for example, you can ask what changes you'd have to make to per-century risk to drop cumulative risk from r to r-(10^-8) for any r in [0,1). Honestly, that's a more general and interesting way to do the math here. The only reasons I didn't do this are that (a) it's slightly harder, (b) most academic readers will already find per-century risk of ~one-in-a-million relatively implausible, and (c) my general aim was to illustrate the importance of carefully distinguishing between per-century risk and cumulative risk.
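To make the arithmetic concrete, here is a quick sketch with toy numbers of my own (the function and horizons are purely illustrative, not anything from the paper). Any reduction of cumulative risk by 10^(-8) leaves you with cumulative risk of at most 1-(10^-8), i.e. a survival probability of at least 10^(-8) across the whole future, and that pins down how low a constant per-century risk would have to be:

```python
# Toy illustration, my own numbers. Cutting cumulative risk by 1e-8 requires
# ending up with cumulative risk <= 1 - 1e-8, i.e. a survival probability of
# at least 1e-8 over the whole future. With a constant per-century risk p
# over N centuries, that means (1 - p)**N >= 1e-8.

def max_per_century_risk(n_centuries, survival_floor=1e-8):
    """Largest constant per-century risk compatible with surviving all
    n_centuries with probability at least survival_floor."""
    return 1 - survival_floor ** (1 / n_centuries)

for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9,} centuries: per-century risk must stay below {max_per_century_risk(n):.1e}")
```

Even with 'only' a thousand centuries to come, per-century risk has to stay below roughly 2%, and the bound tightens quickly as the horizon grows.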
In rough terms, it might be a good idea to think of a constant hazard rate as an average across all centuries. I suspect this works well when the variance of risk across centuries is low, and poorly when it is high. In particular, on a time of perils view, focusing on average (mean) risk rather than on the explicit distribution of risk across centuries will strongly over-value the future, since a future in which much of the risk is faced early on is lower-value than a future in which the same risk is spread out.
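To put toy numbers of my own on the over-valuation (the schedules below are invented purely for illustration): compare two risk schedules with the same mean per-century risk, one front-loaded and one flat, proxying value by the expected number of centuries survived.

```python
# Toy comparison, my own numbers: two risk schedules with the same mean
# per-century risk, one front-loaded (time-of-perils-like) and one flat.
# Value is proxied by the expected number of centuries survived, i.e. the
# sum over centuries of the probability of reaching that century.

def expected_centuries(risks):
    survival, total = 1.0, 0.0
    for r in risks:
        survival *= 1 - r
        total += survival
    return total

n = 20
front_loaded = [0.2] * 5 + [0.0] * (n - 5)  # all of the risk in the first five centuries
flat = [0.05] * n                           # the same mean risk, spread evenly

print(expected_centuries(front_loaded))  # ~7.6
print(expected_centuries(flat))          # ~12.2
```

Swapping in the mean (the flat schedule) over-states the value of the front-loaded future by about sixty percent even in this short toy case.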
Strong declining trends in hazard rates induce a time-of-perils-like structure, though on some models they may rest on somewhat weaker assumptions about risk than leading time of perils models do. At least one leading time of perils model (Aschenbrenner's) has a declining hazard structure. In general, the question will be how to justify a declining hazard rate, given a standard story on which (a) technology drives risk, and (b) technology is increasing rapidly. I think that some of the arguments made in my paper "Existential risk pessimism and the time of perils" against the time of perils hypothesis will be relevant here, whereas others may be less relevant, depending on your view.
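Here is a minimal sketch of the declining-hazard structure, using my own parametrization rather than Aschenbrenner's or any other published model: if per-century risk decays geometrically, the probability of surviving the entire future converges to a positive limit rather than falling to zero, which is the time-of-perils-like feature.

```python
# Minimal sketch, my own parametrization: per-century risk decays
# geometrically, r_t = r0 * decay**t. Because these risks are summable, the
# probability of surviving the whole future stays bounded above zero, much as
# on time of perils views where risk falls to a low level after the perilous
# period.

def survival_probability(r0, decay, horizon):
    """Probability of surviving `horizon` centuries when century t carries risk r0 * decay**t."""
    survival = 1.0
    for t in range(horizon):
        survival *= 1 - r0 * decay ** t
    return survival

r0, decay = 0.1, 0.8
for horizon in (10, 100, 1_000):
    print(horizon, round(survival_probability(r0, decay, horizon), 4))
# Converges to roughly 0.60 rather than decaying to zero, as it would under a
# constant hazard rate of 0.1 per century.
```

The substantive question, as above, is what would justify choosing a decay parameter like this rather than a flat or rising hazard.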
In general, I'd like to emphasize the importance of arguing for views about future rates of existential risk. Sometimes effective altruists are very quick to produce models and assign probabilities to models. Models are good (they make things clear!) but they don't reduce the need to support models with arguments, and assignments of probability are not arguments, but rather statements in need of argument.
Thanks Vasco! Yes, as in my previous paper, though (a) most of the points I'm making get some traction against models on which the time of perils hypothesis is true, and (b) they get much more traction if the time of perils hypothesis is false.
For example, on the first mistake, the gap between cumulative and per-unit risk is smaller if risk is concentrated in a few centuries (time of perils) than if it's spread across many centuries. And on the second mistake, the importance of background risk is reduced if that background risk is going to be around at a meaningful level for only a few centuries.
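With toy numbers of my own, the contrast behind the first point looks like this:

```python
# Toy numbers of my own: the same per-century risk sustained over a few vs.
# many centuries. The gap between per-century and cumulative risk is modest
# when the risk is concentrated and enormous when it persists.

def cumulative_risk(per_century, n_centuries):
    return 1 - (1 - per_century) ** n_centuries

print(cumulative_risk(0.02, 5))      # ~0.096: close to the 2% per-century figure
print(cumulative_risk(0.02, 1_000))  # ~1.0: nowhere near the per-century figure
```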
I think that the third mistake (ignoring population dynamics) should retain much of its importance on time of perils models. Actually, it might be more important insofar as those models tend to give higher probability to large-population scenarios coming about. I'd be interested to see how the numbers work out here, though.
Good catch, Eevee - thanks! I hadn't caught this when proofreading the upload on the website. (Not our operations team's fault. They've been absolutely slammed with conference and event organizing recently, and I pushed them to rush this paper out so that it would be available online.)
I really liked and appreciated both of your posts. Please keep writing them, and I hope that future feedback will be less sharp.
Yep - nailed it!