As a summer research fellow at FHI, I’ve been working on using economic theory to better understand the relationship between economic growth and existential risk. I’ve finished a preliminary draft; see below. I would be very interested in hearing your thoughts and feedback!

Draft: leopoldaschenbrenner.com/xriskandgrowth

Abstract:
Technological innovation can create or mitigate risks of catastrophes—such as nuclear war, extreme climate change, or powerful artificial intelligence run amok—that could imperil human civilization. What is the relationship between economic growth and these existential risks? In a model of endogenous and directed technical change, with moderate parameters, existential risk follows a Kuznets-style inverted U-shape. This suggests we could be living in a unique “time of perils,” having developed technologies advanced enough to threaten our permanent destruction, but not having grown wealthy enough yet to be willing to spend much on safety. Accelerating growth during this “time of perils” initially increases risk, but improves the chances of humanity's survival in the long run. Conversely, even short-term stagnation could substantially curtail the future of humanity. Nevertheless, if the scale effect of existential risk is large and the returns to research diminish rapidly, it may be impossible to avert an eventual existential catastrophe.

Comments
Buck

I think Carl Shulman makes some persuasive criticisms of this research here:

My main issue with the paper is that it treats existential risk policy as the result of a global collective utility-maximizing decision based on people's tradeoffs between consumption and danger. But that is assuming away approximately all of the problem.

If we extend that framework to determine how much society would spend on detonating nuclear bombs in war, the amount would be zero and there would be no nuclear arsenals. The world would have undertaken adequate investments in surveillance, PPE, research, and other capacities in response to data about previous coronaviruses such as SARS to stop COVID-19 in its tracks. Renewable energy research funding would be vastly higher than it is today, as would AI technical safety. As advanced AI developments brought AI catastrophic risks closer, there would be no competitive pressures to take risks with global externalities in development either by firms or nation-states.

Externalities massively reduce the returns to risk reduction, with even the largest nation-states being only a small fraction of the world, individual politicians much more concerned with their term of office and individual careers than national-level outcomes, and individual voters and donors constituting only a minute share of the affected parties. And conflict and bargaining problems are entirely responsible for war and military spending, central to the failure to overcome externalities with global climate policy, and core to the threat of AI accident catastrophe.

If those things were solved, and the risk-reward tradeoffs well understood, then we're quite clearly in a world where we can have very low existential risk and high consumption. But if they're not solved, the level of consumption is not key: spending on war and dangerous tech that risks global catastrophe can be motivated by the fear of competitive disadvantage/local catastrophe (e.g. being conquered) no matter how high consumption levels are.

I agree with Carl; I feel like other commenters are taking this research as a strong update, as opposed to a simple model which I'm glad someone's worked through the details of but which we probably shouldn't use to influence our beliefs very much.

I read the paper (skipping almost all the math) and Philip Trammell's blog post. I'm not sure I understood the paper, and in any case I'm pretty confused about the topic of how growth influences x-risk, so I want to ask you a bunch of questions:

  1. Why do the time axes in many of the graphs span hundreds of years? In discussions about AI x-risk, I mostly see something like 20-100 years as the relevant timescale in which to act (i.e. by the end of that period, we will either go extinct or else build an aligned AGI and reach a technological singularity). Looking at Figure 7, if we only look ahead 100 years, it seems like the risk of extinction actually goes up in the accelerated growth scenario.

  2. What do you think of Wei Dai's argument that safe AGI is harder to build than unsafe AGI and we are currently putting less effort into the former, so slower growth gives us more time to do something about AI x-risk (i.e. slower growth is better)?

  3. What do you think of Eliezer Yudkowsky's argument that work for building an unsafe AGI parallelizes better than work for building a safe AGI, and that unsafe AGI benefits more in expectation from having more computing power than safe AGI, both of which imply that slower growth is better from an AI x-risk viewpoint?

  4. What do you think of Nick Bostrom's urn analogy for technological developments? It seems like in the analogy, faster growth just means pulling out the balls at a faster rate without affecting the probability of pulling out a black ball. In other words, we hit the same amount of risk but everything just happens sooner (i.e. growth is neutral).

  5. Looking at Figure 7, my "story" for why faster growth lowers the probability of extinction is this: The richer people are, the less they value marginal consumption, so the more they value safety (relative to consumption). Faster growth gets us sooner to the point where people are rich and value safety. So faster growth effectively gives society less time in which to mess things up (however, I'm confused about why this happens; see the next point). Does this sound right? If not, I'm wondering if you could give a similar intuitive story.

  6. I am confused why the height of the hazard rate in Figure 7 does not increase in the accelerated growth case. I think equation (7) for δ_t might be the cause of this, but I'm not sure. My own intuition says accelerated growth not only condenses along the time axis, but also stretches along the vertical axis (so that the area under the curve is mostly unaffected).

    As an extreme case, suppose growth halted for 1000 years. It seems like in your model, the graph for hazard rate would be constant at some fixed level, accumulating extinction probability during that time. But my intuition says the hazard rate would first drop near zero and then stay constant, because there are no new dangerous technologies being invented. At the opposite extreme, suppose we suddenly get a huge boost in growth and effectively reach "the end of growth" (near period 1800 in Figure 7) in an instant. Your model seems to say that the graph would compress so much that we almost certainly never go extinct, but my intuition says we do experience a lot of risk for extinction. Is my interpretation of your model correct, and if so, could you explain why the height of the hazard rate graph does not increase?

    This reminds me of the question of whether it is better to walk or run in the rain (keeping distance traveled constant). We can imagine a modification where the raindrops are motionless in the air.

Not the author but I think I understand the model so can offer my thoughts:

1. Why do the time axes in many of the graphs span hundreds of years? In discussions about AI x-risk, I mostly see something like 20-100 years as the relevant timescale in which to act (i.e. by the end of that period, we will either go extinct or else build an aligned AGI and reach a technological singularity). Looking at Figure 7, if we only look ahead 100 years, it seems like the risk of extinction actually goes up in the accelerated growth scenario.

The model is looking at general dynamics of risk from the production of new goods, and isn’t trying to look at AI in any kind of granular way. The timescales on which we see the inverted U-shape depend on what values you pick for different parameters, so there are different values for which the time axes would span decades instead of centuries. I guess that picking a different growth rate would be one clear way to squash everything into a shorter time. (Maybe this is pretty consistent with short/medium AI timelines, as they probably correlate strongly with really fast growth).

I think your point about AI messing up the results is a good one -- the model says that a boom in growth reduces x-risk on net because, while risk is increased in the short term, the long-term effects more than cancel that out. But if AI comes in the next 50-100 years, then the long-term benefits never materialise.

2. What do you think of Wei Dai's argument that safe AGI is harder to build than unsafe AGI and we are currently putting less effort into the former, so slower growth gives us more time to do something about AI x-risk (i.e. slower growth is better)?

Sure, maybe there’s a lock-in event coming in the next 20-200 years which we can either

  • Delay (by decreasing growth) so that we have more time to develop safety features, or
  • Make more safety-focussed (by increasing growth) so it is more likely to lock in a good state

I’d think that what matters is resources (say coordination-adjusted-IQ-person-hours or whatever) spent on safety, rather than time that could be spent on safety if we wanted. So if we’re poor and reckless, then more time isn’t necessarily good. And this time spent being less rich might also make other x-risks more likely. But that’s a very high-level abstraction and doesn’t really engage with the specific claim too closely, so I’m keen to hear what you think.

3. What do you think of Eliezer Yudkowsky's argument that work for building an unsafe AGI parallelizes better than work for building a safe AGI, and that unsafe AGI benefits more in expectation from having more computing power than safe AGI, both of which imply that slower growth is better from an AI x-risk viewpoint?

The model doesn’t say anything about this kind of granular consideration (and I don’t have strong thoughts of my own).

4. What do you think of Nick Bostrom's urn analogy for technological developments? It seems like in the analogy, faster growth just means pulling out the balls at a faster rate without affecting the probability of pulling out a black ball. In other words, we hit the same amount of risk but everything just happens sooner (i.e. growth is neutral).

In the model, risk depends on production of consumption goods, rather than the level of consumption technology. The intuition behind this is that technological ideas themselves aren’t dangerous, it’s all the stuff people do with the ideas that’s dangerous. Eg. synthetic biology understanding isn’t itself dangerous, but a bunch of synthetic biology labs producing loads of exotic organisms could be dangerous.

But I think it might make sense to instead model risk as partially depending on technology (as well as production). Eg. once we know how to make some level of AI, the damage might be done, and it doesn’t matter whether there are 100 of them or just one.

And the reason growth isn’t neutral in the model is that there are also safety technologies (which might be analogous to making the world more robust to black balls). Growth means people value life more so they spend more on safety.
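To make that concrete: my read is that the hazard rate in the paper looks something like the expression below (treat this as a paraphrase of equation (7) rather than a quote of it, since I may be garbling the exact notation), where C_t and H_t are total production of consumption and safety goods, c_t and h_t are the per-capita versions, N_t is population, and δ̄, ε, and β are parameters:

$$\delta_t = \bar{\delta}\,\frac{C_t^{\epsilon}}{H_t^{\beta}} = \bar{\delta}\,c_t^{\epsilon}\,h_t^{-\beta}\,N_t^{\epsilon-\beta}.$$

So risk rises with how much risky stuff is actually produced and falls with how much safety is produced; technology only enters through what gets produced with it, and growth matters because richer people shift resources from c_t to h_t.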

5. Looking at Figure 7, my "story" for why faster growth lowers the probability of extinction is this: The richer people are, the less they value marginal consumption, so the more they value safety (relative to consumption). Faster growth gets us sooner to the point where people are rich and value safety. So faster growth effectively gives society less time in which to mess things up (however, I'm confused about why this happens; see the next point). Does this sound right? If not, I'm wondering if you could give a similar intuitive story.

Sounds right to me.

6. I am confused why the height of the hazard rate in Figure 7 does not increase in the accelerated growth case. I think equation (7) for δ_t might be the cause of this, but I'm not sure. My own intuition says accelerated growth not only condenses along the time axis, but also stretches along the vertical axis (so that the area under the curve is mostly unaffected).

The hazard rate does increase during the period when there is more production of consumption goods, but this also means that people become richer earlier than they would have, so they start valuing safety more, sooner, than they otherwise would.
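Here is a toy numerical illustration of that compression effect. To be clear, this is my own construction, not the paper's model: I simply assume the hazard rate depends only on the income level reached (low while we're too poor to build dangerous things, peaking at middling income, falling once we're rich enough to buy lots of safety), so faster growth traverses the dangerous range of incomes in less time. The functional form and numbers are arbitrary.

```python
import numpy as np

def hazard_at_income(y):
    # Toy stand-in for the hazard rate as a function of income level:
    # an inverted U peaking near y = e^3, with a peak hazard of 2% per period.
    return 0.02 * np.exp(-(np.log(y) - 3.0) ** 2 / 2.0)

def run(growth_rate, periods=1000, y0=1.0):
    # Income path under constant growth, the implied per-period hazards,
    # and the probability of surviving all periods.
    y = y0 * (1.0 + growth_rate) ** np.arange(periods)
    hazards = hazard_at_income(y)
    return hazards, np.prod(1.0 - hazards)

haz_slow, p_slow = run(0.01)
haz_fast, p_fast = run(0.03)

print(f"peak hazard: slow {haz_slow.max():.3f}, fast {haz_fast.max():.3f}")  # same height
print(f"P(survive):  slow {p_slow:.3f}, fast {p_fast:.3f}")                  # fast growth safer overall
```

In both runs the peak hazard is the same height; the fast-growth run just reaches (and exits) the dangerous income range sooner, so less total survival probability is lost. That is the sense in which the area under the curve shrinks rather than being conserved.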

As an extreme case, suppose growth halted for 1000 years. It seems like in your model, the graph for hazard rate would be constant at some fixed level, accumulating extinction probability during that time. But my intuition says the hazard rate would first drop near zero and then stay constant, because there are no new dangerous technologies being invented. At the opposite extreme, suppose we suddenly get a huge boost in growth and effectively reach "the end of growth" (near period 1800 in Figure 7) in an instant. Your model seems to say that the graph would compress so much that we almost certainly never go extinct, but my intuition says we do experience a lot of risk for extinction. Is my interpretation of your model correct, and if so, could you explain why the height of the hazard rate graph does not increase?

Hmm yeah, this suggests that maybe the risk should depend in part on the rate of change of consumption technologies - because if no new techs are being discovered, it seems like we might be safe from anthropogenic x-risk.

But, even if you believe that the hazard rate would decay in this situation, maybe what's doing the work is that you're imagining that we're still doing a lot of safety research, and thinking about how to mitigate risks. So that the consumption sector is not growing, but the safety sector continues to grow. In the existing model, the hazard rate could decay to zero in this case.

I guess I'm also not sure if I share the intuition that the hazard rate would decay to zero. Sure, we don't have the technology right now to produce AGI that would constitute an existential risk but what about eg. climate change, nuclear war, biorisk, narrow AI systems being used in really bad ways? It seems plausible to me that if we kept our current level of technology and production then we'd have a non-trivial chance each year of killing ourselves off.

What's doing the work for you? Do you think the probability of anthropogenic x-risk with our current tech is close to zero? Or do you think that it's not but that if growth stopped we'd keep working on safety (say developing clean energy, improving relationships between US and China etc.) so that we'd eventually be safe?

What's doing the work for you? Do you think the probability of anthropogenic x-risk with our current tech is close to zero? Or do you think that it's not but that if growth stopped we'd keep working on safety (say developing clean energy, improving relationships between US and China etc.) so that we'd eventually be safe?

I think the first option (low probability of x-risk with current technology) is driving my intuition.

Just to take some reasonable-seeming numbers (since I don't have numbers of my own): in The Precipice, Toby Ord estimates ~19% chance of existential catastrophe from anthropogenic risks within the next 100 years. If growth stopped now, I would take out unaligned AI and unforeseen/other (although "other" includes things like totalitarian regimes so maybe some of the probability mass should be kept), and would also reduce engineered pandemics (not sure by how much), which would bring the chance down to 0.3% to 4%. (Of course, this is a naive analysis since if growth stopped a bunch of other things would change, etc.)

My intuitions depend a lot on when growth stopped. If growth stopped now I would be less worried, but if it stopped after some dangerous-but-not-growth-promoting technology was invented, I would be more worried.

but what about eg. climate change, nuclear war, biorisk, narrow AI systems being used in really bad ways?

I'm curious what kind of story you have in mind for current narrow AI systems leading to existential catastrophe.

So you think the hazard rate might go from around 20% to around 1%? That's still far from zero, and with enough centuries with 1% risk we'd expect to go extinct.
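To put rough numbers on "enough centuries": with a constant hazard of 1% per century, the chance of surviving n centuries is

$$(1-0.01)^n \approx e^{-0.01\,n}, \qquad \text{e.g. } 0.99^{100} \approx 0.37, \quad 0.99^{500} \approx 0.007.$$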

I don't have any specific stories tbh, I haven't thought about it (and maybe it's just pretty implausible idk).

So you think the hazard rate might go from around 20% to around 1%?

I'm not attached to those specific numbers, but I think they are reasonable.

That's still far from zero, and with enough centuries with 1% risk we'd expect to go extinct.

Right, maybe I shouldn't have said "near zero". But I still think my basic point (of needing to lower the hazard rate if growth stops) stands.

Hey, thanks for engaging with this, and sorry for not noticing your original comment for so many months. I agree that in reality the hazard rate at t depends not just on the level of output and safety measures maintained at t but also on "experiments that might go wrong" at t. The model is indeed a simplification in this way.

Just to make sure something's clear, though (and sorry if this was already clear): Toby's 20% hazard rate isn't the current hazard rate; it's the hazard rate this century, but most of that is due to developments he projects occurring later this century. Say the current (instantaneous) hazard rate is 1% per century; my guess is that most of this consists of (instantaneous) risk imposed by existing stockpiles of nuclear weapons, existing climate instability, and so on, rather than (instantaneous) risk imposed by research currently ongoing. So if stopping growth would lower the hazard rate, it would be a matter of moving from 1% to 0.8% or something, not from 20% to 1%.
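To spell out how those two numbers relate: the cumulative risk over this century is

$$1-\exp\!\Big(-\int_0^1 \delta(t)\,dt\Big)$$

(with t measured in centuries), so a roughly 20% figure for the century as a whole requires the integral to be about 0.22. That is entirely compatible with δ(0) of about 1% per century, provided δ(t) rises a lot later in the century as the technologies Toby is worried about arrive.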

So if stopping growth would lower the hazard rate, it would be a matter of moving from 1% to 0.8% or something, not from 20% to 1%.

Can you say how you came up with the "moving from 1% to 0.8%" part? Everything else in your comment makes sense to me.

I'm just putting numbers to the previous sentence: "Say the current (instantaneous) hazard rate is 1% per century; my guess is that most of this consists of (instantaneous) risk imposed by existing stockpiles of nuclear weapons, existing climate instability, and so on, rather than (instantaneous) risk imposed by research currently ongoing."

If "most" means "80%" there, then halting growth would lower the hazard rate from 1% to 0.8%.

I thought this was one of the most exciting pieces of research I've seen in the last few years. It also makes me really eager to see GPI hopefully making more work in a similar vein happen.

[Disclaimer: I co-organized the summer research fellowship, as part of which Leopold worked on this research, though I didn't supervise him.]

As the one who supervised him, I too think it's a super exciting and useful piece of research! :)

I also like that its setup suggests a number of relatively straightforward extensions for other people to work on. Three examples:

  • Comparing (1) the value of an increase to B (e.g. a philanthropist investing / subsidizing investment in safety research) and (2) the value of improved international coordination (moving to the "global impatient optimum" from a "decentralized allocation" of x-risk mitigation spending at, say, the country level) to (3) a shock to growth and (4) a shock to the "rate of pure time preference" on which society chooses to invest in safety technology. (The paper currently just compares (3) and (4).)
  • Seeing what happens when you replace the N^(epsilon - beta) term in the hazard function with population raised to a new exponent, say N^(mu), to allow for some risky activities and/or safety measures whose contribution to existential risk depends not on the total spent on them but on the amount per capita spent on them, or something in between. (A rough sketch of what this could look like is below, after the list.)
  • Seeing what happens when you use a different growth model--in particular, one that doesn't depend on population growth.
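On the second bullet, here is a minimal sketch of what that generalization could look like. The functional form is my paraphrase of the hazard function (per-capita consumption and safety production raised to epsilon and beta, times the population term), and mu, along with all the numerical values, is purely illustrative rather than anything from the paper:

```python
def hazard(c, h, N, delta_bar=1e-5, eps=1.2, beta=1.0, mu=None):
    """Toy hazard rate with per-capita consumption c, per-capita safety h, population N.

    mu=None reproduces the baseline N^(eps - beta) scale factor, i.e. risk depends on
    total production C = N*c and H = N*h. mu=0 makes risk depend only on per-capita
    quantities; values in between interpolate between the two.
    """
    if mu is None:
        mu = eps - beta
    return delta_bar * c**eps * h**(-beta) * N**mu

# Same per-capita behaviour at two population sizes:
print(hazard(1.0, 0.1, N=1e9))           # baseline: a bigger world means more total risk
print(hazard(1.0, 0.1, N=1e10))
print(hazard(1.0, 0.1, N=1e10, mu=0.0))  # per-capita version: population drops out
```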

Yes, great paper and exciting work. Here are some further questions I'd be interested in (apologies if they result from misunderstanding the paper - I've only skimmed it once).

1) I'd love to see more work on Phil's first bullet point above.

Would you guess that, due to the global public good problem and impatience, people with a low rate of pure time preference will generally believe society is a long way from the optimal allocation to safety, and therefore that increasing investment in safety is currently much higher impact than increasing growth?


2) What would the impact of uncertainty about the parameters be? Should we act as if we're generally in the eta > beta (but not much greater) regime, since that's where altruists could have the most impact?


3) You look at the chance of humanity surviving indefinitely - but don't we care more about something like the expected number of lives?

Might we be in the eta >> beta regime, but humanity still have a long future in expectation (i.e. tens of millions of years rather than billions)? It might then still be very valuable to further extend the lifetime of civilisation, even if extinction is ultimately inevitable.

Or are there regimes where focusing on helping people in the short-term is the best thing to do?

Would looking at expected lifetime rather than probability of making it have other impacts on the conclusions? e.g. I could imagine it might be worth trading acceleration for a small increase in risk, so long as it allows more people to live in the interim in expectation.



Hi Ben, thanks for your kind words, and so sorry for the delayed response. Thanks for your questions!

  1. Yes, this could definitely be the case. In terms of what the most effective intervention is, I don’t know. I agree that more work on this would be beneficial. One important consideration would be what intervention has the potential to raise the level of safety in the long run. Safety spending might only lead to a transitory increase in safety, or it could enable R&D that improves the level of safety in the long run. In the model, even slightly faster growth for a year means people are richer going forward forever, which in turn means people are willing to spend more on safety forever (see the stylized calculation after this list).

  2. At least in terms of thinking about the impact of faster/slower growth, it seemed like the eta > beta case was the one we should focus on as you say (and this is what I do in the paper). When eta < beta, growth was unambiguously good; when eta >> beta, existential catastrophe was inevitable.

  3. In terms of expected number of lives, it seems like the worlds in which humanity survives for a very long time are dramatically more valuable than any world in which existential catastrophe is inevitable. Nevertheless, I want to think more about potential cases where existential catastrophe might be inevitable, but there could still be a decently long future ahead. In particular, if we think humanity’s “growth mode” might change at some stage in the future, the relevant consideration might be the probability of reaching that stage, which could change the conclusions.
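To spell out the "richer forever" point in item 1 with a stylized calculation: if output grows at rate g and an intervention raises growth to g + Δ for a single year, then every subsequent year's output is scaled up by a (roughly) constant factor,

$$Y'_t = Y_0\,(1+g+\Delta)(1+g)^{t-1} = \frac{1+g+\Delta}{1+g}\,Y_t \approx (1+\Delta)\,Y_t \quad \text{for all } t \ge 1,$$

and since willingness to pay for safety rises with income, safety spending is higher in every future period as well.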

Thank you for your kind words!

I think this research into x-risk & economic growth is a good contribution to patient longtermism.  I also think that integrating thoughts on economic growth more deeply into EA holds a lot of promise -- maybe models like this one could someday form a kind of "medium-termist" bridge between different cause areas, creating a common prioritization framework.  For both of these reasons I think this post is worthy of inclusion in the decadal review.

The question of whether to be for or against economic growth in general is perhaps not the number-one most pressing dilemma in EA (since everyone agrees that differential technology development into x-risk-reducing areas is very important), but it is surely up there, since it's such a big-picture question that affects so many decisions.  X-risk concerns aside, economic growth obviously looks attractive -- both in the developing world, where it's a great way to make the world's poorest people more prosperous, and in the first world, where the causes championed by "progress studies" promise to create a prosperous and more dynamic society where people can live better lives.  But of course, by longtermist lights, how fast we get to the future is less important than making sure we get there at all.  So, in the end, what to do about influencing economic growth?  Leopold's work is probably just a starting point for this huge and perhaps unanswerable set of questions.  But it's a good start -- I'm not sure if I want more economic growth or not, but I definitely want more posts like these tackling the question.

For the decadal review, rather than the literal text of this post (which merely refers to the pdf) or the comprehensive 100-page pdf itself, I'd suggest including Leopold's "Works in Progress" article summarizing his research.

I think this is an extremely impressive piece of work in economics proper not to mention a substantial contribution to longtermism research. Nice going.

Thanks Zach!

Neat paper. One reservation I have (aside from whether x-risk depends on aggregate consumption or on tech/innovation, which has already been brought up) is the assumption of the world allocating resources optimally (if impatiently). I don't know if mere underinvestment in safety would overturn the basic takeaways here, but my worry is more that a world with competing nation-states or other actors could have competitive dynamics that really change things.

Thanks very much for writing this, I found it really interesting. I like the way you follow the formalism with many examples.

I have a very simple question, probably due to my misunderstanding - looking at your simulations, you have the fraction of workers and scientists working on consumption going asymptotically to zero, but the terminal growth rate of consumption is positive. Is this a result of consumption economies of scale growing fast enough to offset the decline in worker fraction?

Thanks!

Regarding your question, yes, you have the right idea. Growth of consumption per capita is growth in consumption technology plus growth in consumption work per capita — thus, while the fraction of workers in the consumption sector declines exponentially, consumption technology grows (due to increasing returns) quickly enough to offset that. This yields positive asymptotic growth of consumption per capita overall (on the specific asymptotic paths you are referring to). Note that the absolute total number of people working on consumption *research* is still increasing on the asymptotic path: while the fraction of scientists in the consumption sector declines exponentially, there is still overall population growth. This yields the asymptotic growth in consumption technology (but this growth is slower than what would be feasible, since scientists are being shifted away from consumption). Does that make sense?
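To put that decomposition in symbols (simplified notation, not necessarily matching the paper's): writing consumption per capita as the product of the consumption technology level and consumption work per capita,

$$c_t = A_t\,\ell_t \quad\Longrightarrow\quad g_c = g_A + g_\ell,$$

so even though g_ℓ is negative (the consumption work share shrinks exponentially), consumption per capita grows as long as g_A exceeds the rate of that decline.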

[anonymous]

This sounds really cool. Will have to read properly later. How would you recommend a time pressured reader to go through this? Are you planning a summary?

Still no summary of the paper as a whole, but if you're interested, I just wrote a really quick blog post which summarizes one takeaway. https://philiptrammell.com/blog/45

Thanks. I generally try to explain the intuition of what is going on in the body of the text—I would recommend focusing on that rather than on the exact mathematical formulations. I am not planning to write a summary at the moment, sorry.

This post was awarded an EA Forum Prize; see the prize announcement for more details.

My notes on what I liked about the post, from the announcement:

While Leopold’s paper was written away from the Forum, his taking the time to publish it and ask for feedback made him eligible for the Prize; it was also good to see his detailed replies to questions from other users.

Meanwhile, the paper itself is well-formatted (in the standard style of many economics papers) and seems easy to follow. I’d expect an economist, or someone else familiar with the field’s mathematical background, to be able to track Leopold’s points and, if they disagree with anything, to be able to pin down where his argument goes awry.

Note: Since the Prize committee won’t always have domain expertise for posts on technical topics, we generally care more about structure, organization, and clarity than whether we think an author’s conclusion is correct. A really good Forum post is one that explains itself point-by-point, such that critics and supporters alike can engage more effectively with the author.

I'm getting site not secure errors on all 4 browsers for the draft. Could you please make it more accessible?

Sorry to hear that! I’m not sure why it’s doing that—it’s just hosted on Github. Try this direct link: https://leopoldaschenbrenner.github.io/xriskandgrowth/ExistentialRiskAndGrowth050.pdf

Thanks - it worked!

I think that existential risk is still something that most governments aren't taking seriously. If major world governments had a model that contained a substantial probability of doom, there would be a lot more funding. Look at the sort of "fund anything and everything that might possibly help" spending that happened during the Cold War. I see this failure to take it seriously as being caused by a mix of human psychology and historical coincidence. I would not expect it to apply to all civilizations.

That's an interesting paper, and the little I skimmed was somewhat straightforward if you can get through the dialect or notation, which is standard in econ papers --- which I'd call neoclassical. (I got up to about page 20 -- the discussion of the effects of scientists/workers switching to safety production rather than consumption production.)

This raises a few issues for me. You have probably seen https://arxiv.org/abs/1410.5787. Given debates about other environmental risks like GMOs and nuclear energy, it's even unclear what counts as 'safety' or 'precautionary' production versus consumption production. It's also unclear how much 'science can come to the rescue' (discussed in many places, such as AAAS).

There are also behavioral issues --- even if your model (like the Kuznets curve) is basically correct and one can calculate 'effectively altruistic' policies, whether they will be supported by the public/government and will entice scientists and other workers to switch to 'green jobs' (whether technical or, say, organic farming) is a sociopolitical issue.

(It's possible that other sorts of models, or variants of yours using some behavioral data, might be able both to assess the effects of policies as you do and to include factors describing the plausibility that they will be adopted. I googled you at Columbia and see you have also studied the spread of public opinion via Twitter, etc., which gives some ideas about the dynamics of behavioral variables. Presumably these are already implicit in your various parameters beta, epsilon, etc. I guess they are also implicit in the discount factors discussed by Nordhaus and others -- but they may have their own dynamics, rather than being constants.)

A lot of current climate activists promote 'degrowth' and lifestyle change (diet, transport, etc.) -- e.g. Extinction Rebellion -- partly because they think those may be more important than growth, and they don't trust that growth will be applied to 'safety' rather than to activities that contribute to AGW risks. Also, many of them don't trust economic models, and many if not most people don't understand them much. (I can only get a rough understanding, partly because going through the math details is often beyond my competency and partly because I have other things to do; I'm trying to sketch simpler models that capture the main ideas and might be comprehensible to and useful for a wider audience.) As noted, a variant of your model could probably include some of these sociopolitical issues.

Anyway, a thought-provoking paper.
