As a summer research fellow at FHI, I've been working on using economic theory to better understand the relationship between economic growth and existential risk. I've finished a preliminary draft; see below. I would be very interested in hearing your thoughts and feedback!
Draft: leopoldaschenbrenner.com/xriskandgrowth
Abstract:
Technological innovation can create or mitigate risks of catastrophes—such as nuclear war, extreme climate change, or powerful artificial intelligence run amok—that could imperil human civilization. What is the relationship between economic growth and these existential risks? In a model of endogenous and directed technical change, with moderate parameters, existential risk follows a Kuznets-style inverted U-shape. This suggests we could be living in a unique “time of perils,” having developed technologies advanced enough to threaten our permanent destruction, but not having grown wealthy enough yet to be willing to spend much on safety. Accelerating growth during this “time of perils” initially increases risk, but improves the chances of humanity's survival in the long run. Conversely, even short-term stagnation could substantially curtail the future of humanity. Nevertheless, if the scale effect of existential risk is large and the returns to research diminish rapidly, it may be impossible to avert an eventual existential catastrophe.
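To make the inverted-U intuition concrete, here is a small toy simulation. This is not the model from the paper; the functional forms, variable names (A for consumption technology, B for a safety proxy), and all parameter values are illustrative assumptions. The only point it demonstrates is the qualitative mechanism in the abstract: risk scales with the level of technology, but as society grows wealthier it devotes a rising share of output to safety, so the hazard first rises and then falls.

```python
import numpy as np

# Toy illustration (not the paper's actual model): the per-period hazard
# rises with the level of consumption technology A, but is damped by a
# safety proxy B. The share of output spent on safety is assumed to rise
# with wealth. All functional forms and parameters below are assumptions.

T = 300                      # number of periods
g = 0.02                     # growth rate of consumption technology
beta, alpha = 1.0, 1.5       # risk scale in technology vs. damping from safety

A = (1 + g) ** np.arange(T)  # consumption technology level over time
# Assumed: safety share of output rises toward 30% as society gets richer.
safety_share = 0.3 * (1 - np.exp(-0.01 * (A - 1)))
B = 1 + safety_share * A     # crude proxy for accumulated safety effort
hazard = A**beta / B**alpha  # per-period existential hazard (unnormalized)

peak = int(np.argmax(hazard))
print(f"Hazard rises then falls: h[0]={hazard[0]:.2f}, "
      f"h[{peak}]={hazard[peak]:.2f} (peak), h[{T-1}]={hazard[-1]:.2f}")
```

With these made-up parameters the hazard peaks partway through the simulation and eventually drops below its starting level, mirroring the "time of perils" shape described above; faster growth shifts the peak earlier (more risk now) but brings the safe, wealthy regime sooner.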
I think that existential risk is still something most governments aren't taking seriously. If major world governments had a model that assigned a substantial probability to doom, there would be a lot more funding. Look at the Cold War, when anything and everything that might possibly help got funded. I see this failure to take the risk seriously as stemming from a mix of human psychology and historical coincidence, and I would not expect it to apply to all civilizations.