The basic argument

A somewhat common goal in EA (more common elsewhere) is to accelerate the human trajectory by promoting things such as economic growth, tech R&D, or general population growth. The presumption is that doing this would compound through exponential economic growth over the very long-run future, accumulating into huge improvements in welfare.

But the returns to economic growth have historically been super-exponential, so our global economic trend points clearly towards a theoretical economic singularity within the 21st century.[1] This is not contradicted by the ‘inside view’: while there are some indications that our society is currently stagnating,[2] there are also conceivable opportunities for explosive progress in the medium run.[3]
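For intuition, here is a minimal deterministic sketch (not Roodman's stochastic model, and all parameter values are arbitrary assumptions) of why above-exponential returns to growth imply a finite-time singularity:

```python
# Toy model: gross world product Y grows as dY/dt = a * Y**s.
# With s = 1 this is ordinary exponential growth and never diverges;
# with s > 1 ("super-exponential" returns) Y blows up in finite time.
# The numbers below are illustrative assumptions, not estimates.

a = 0.03   # growth coefficient (assumed)
s = 1.5    # exponent > 1: the growth rate itself rises with Y (assumed)
Y0 = 1.0   # world product today, in arbitrary units

# Closed-form blow-up time for dY/dt = a * Y**s with s > 1:
#   t* = Y0**(1 - s) / (a * (s - 1))
t_star = Y0 ** (1 - s) / (a * (s - 1))
print(f"Finite-time singularity roughly {t_star:.0f} years out in this toy model")
```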

This does not mean we will actually reach infinite economic growth. In practice, a real singularity is presumably impossible. But the forecast does suggest that we will see an acceleration towards extraordinary economic growth until we reach the limits of things like nonrenewable resources, thermodynamics, or lightspeed (call it a 'transition'), assuming that we don't suffer a globally catastrophic event first.

So counterfactually growing the economy today has an impact of the following form. First, it makes the economy larger in 2025, much larger in 2030, and so on. Then, near the point of transition, the returns approach extremely high values: a billion dollars of growth today might make the economy a trillion dollars larger in 2045. Then our early economic growth moves the actual date of transition closer, so it has astronomical returns for a short period of time (2046). However, there is little change afterwards: whether or not we had grown the economy in 2020, humanity would still spend 2047-3000++ (possibly more than 10^20 years[4]) under the constraints of fundamental limits. (See the sketch after this paragraph.)

Of course, this period might not last long if we face many existential risks. But the known natural existential risks, like supernovas and asteroids, are generally very rare and/or avoidable for very advanced spacefaring civilizations. A bigger question is whether internal conflicts would lead to ruin. (I've seen some arguments by Robin Hanson and others that warfare and competition will continue indefinitely, but don't recall any links or titles.) And the Doomsday Argument implies that civilization probably won't last long, though it is debatable: there is some dispute over the nature of the probabilistic argument, and we might be succeeded by posthuman descendants who don't fit within our reference class. Or we might cure aging, stop reproducing and live indefinitely as a small civilization of billions or tens of billions of people (assuming the Doomsday Argument works through the self-sampling assumption, rather than through the self-indication assumption).
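To illustrate the shape of those returns, the same toy model can be extended with a small one-off boost to today's economy (again with purely illustrative numbers; a sketch, not a forecast):

```python
# Same toy model as above: dY/dt = a * Y**s with s > 1.
# A small boost to today's economy makes the gap between the boosted and
# unboosted trajectories explode as the transition nears, but once both
# worlds hit fundamental limits the only lasting difference is that the
# transition arrived slightly earlier.

A, S = 0.03, 1.5   # assumed parameters, as before

def blowup_time(y0, a=A, s=S):
    """Finite-time singularity date for dY/dt = a * Y**s (s > 1)."""
    return y0 ** (1 - s) / (a * (s - 1))

def gwp(t, y0, a=A, s=S):
    """Closed-form trajectory, valid for t before the blow-up time."""
    return (y0 ** (1 - s) - a * (s - 1) * t) ** (1.0 / (1 - s))

y0, boost = 1.0, 0.001                  # a 0.1% larger economy today (assumed)
t_plain = blowup_time(y0)
t_boosted = blowup_time(y0 * (1 + boost))

for frac in (0.0, 0.5, 0.9, 0.99):      # how close we are to the transition
    t = frac * t_boosted
    gap = gwp(t, y0 * (1 + boost)) - gwp(t, y0)
    print(f"{frac:4.0%} of the way there: boosted world is larger by {gap:.3g}")

print(f"...and the transition arrives {t_plain - t_boosted:.3f} years earlier.")
```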

Regardless, a very long future for civilization is likely enough that it greatly outweighs improvements in economic welfare for 2020-2047 (or whatever the dates turn out to be). Therefore, while super-exponential growth implies that growing the economy produces huge returns in the medium run, it also means that growing the economy is unimportant compared to safeguarding the overall trajectory of the future.

One argument for growth is to maximize the region of space that we can colonize. The ultimate upper bound on our future trajectory is lightspeed: due to the expansion of the universe, each moment that we delay in colonizing the universe, more matter passes beyond the ultimate volume of everything that we can ever reach. That volume is theoretically the Hubble volume, but realistically something smaller, since light speed can never actually be reached. However, it is my informal understanding that the loss from delay is quite small.[5]

If you do some napkin math based on these assumptions, economic growth is not merely less important than reducing existential risk, it simply seems negligible by comparison.

Other considerations

There are still some reasons to care substantively about economic growth. First, model uncertainty. Maybe the world really is settling into steady-state economic growth, despite Roodman’s conclusions. Then accelerating it could cause very large counterfactual accumulation in the long run.

Also, counterfactual economic growth, with its associated technological progress, can affect existential risk in the medium run. Economic growth certainly helps against naturally occurring total existential risks, like huge asteroids striking the planet, but such risks have extremely low base rates and some are even predictably absent in the medium run. Economic growth has ambiguous impacts on existential risk from smaller naturally occurring catastrophes, like the Yellowstone supervolcano or a small asteroid; it gives us greater power to deal with these threats but may also increase the risk that our society becomes so complex that it suffers a cascading failure from one of them. Still, these risks have very low base rates and so are very unlikely in the medium run.

Anthropogenic existential risks are associated with economic growth in the first place, so it usually seems like accelerating growth doesn’t change the chance that they kill us, it just brings them closer. Just as growth hastens the days when we create existential risks, it also hastens the days that we create solutions to them.

Differential moral/sociopolitical progress relative to economic growth can change the picture. Rapid economic growth might outstrip the progress in our noneconomic capacities. Of course, moral, social and political changes are largely driven by economic growth, but they also seem to be exogenous to some extent. Such changes may be important for addressing anthropogenic existential risks, in which case slowing down the economy would reduce existential risk. Conversely, it’s possible that exogenous moral and political change produces existential risks and that accelerating economic growth helps us avoid those, but this possibility seems much less likely, considering the track record of human civilization.

At the end of the day, accelerating the human trajectory, including accelerating global economic growth, just cannot be substantiated as a policy priority. It likely has medium-term benefits and might have some long-run benefits, but the risk that it hastens medium-term anthropogenic existential risks is too worrying.

[1] An endogenous model of gross world product, developed using stochastic calculus by David Roodman at the Open Philanthropy Project: https://www.openphilanthropy.org/blog/modeling-human-trajectory

[2] See the first part of this compilation of Peter Thiel’s views, which has many anecdotes about the ways that progress has stagnated, though it is limited by having a mostly US-centric point of view: https://docs.google.com/document/d/1zao_AyBhNb8TPWrQqgXn5NzNAgfEqzTIaFYos7wdqGI/edit#

[3] AGI, bioengineering, web-based circumvention of exclusionary economic institutions (mainly, efficient remote work to circumvent barriers against immigration and urban growth), fusion power, geoengineering, undersea/lunar/asteroid mining.

[4] Anders Sandberg's talk on grand futures (https://www.youtube.com/watch?v=Y6kaOgjY7-E). I am referencing the end of the 'degenerate era' of dying cool stars that he discusses a bit over an hour in, but this is pretty arbitrary; he describes orders-of-magnitude longer subsequent eras that might be liveable.

[5] Comments by Ryan Carey.

Comments

This seems like a retread of Bostrom's argument that, despite astronomical waste, x-risk reduction is important regardless of whether it comes at the cost of growth. Does any part of this actually rely on Roodman's superexponential growth? It seems like it would be true for almost any growth rates (as long as it doesn't take like literally billions or hundreds of billions of years to reach the steady state).

I'm pretty confident that accelerating exponential and never-ending growth would be competitive with reducing x-risk. That was IMO the big flaw with Bostrom's argument (until now). If that's not intuitive, let me know and I'll formalize a bit.

In case you missed it, Leopold Aschenbrenner wrote a paper on economic growth and existential risks, suggesting that future investments in prevention efforts might be a key variable that may in the long run offset increased risks due to increasing technological developments.

https://forum.effectivealtruism.org/posts/xh37hSqw287ufDbQ7/existential-risk-and-economic-growth-1

Thanks! Great find. I'm not sure if I trust the model tho.

Nice post! Meta: footnote links are broken, and references to [1] and [2] aren't in the main body.

Also could [8] be referring to this post? It only touches on your point though:

Defensive considerations also suggest that they'd need to maintain substantial activity to watch for and be ready to respond to attacks.

Thanks, fixed. No that's not the post I'm thinking of.

Updates to this: 

A Nordhaus paper argues that we don't appear to be approaching a singularity. Haven't read it. Would like to see someone find the crux of the differences with Roodman.

Blog 'Outside View' with some counterarguments to my view:

Thus, the challenge of building long term historical GDP data means we should be quite skeptical about turning around and using that data to predict future growth trends. All we're really doing is extrapolating the backwards estimates of some economists forwards. The error bars will be very large.

Well, Roodman tests for this in his paper (see section 5.2) and finds that systematic moderate overestimation or underestimation only changes the expected explosion date by +/- 4 years.

I guess things could change more if the older values are systematically misestimated differently from more recent values? If very old estimates are all underestimates but recent estimates are not, then that could delay the projection further. Also, maybe he should test for more extreme magnitudes of misestimation. But based on the minor extent to which his other tests changed the results, I doubt this one would make much difference either.

But if it's possible, or even intuitive, that specific institutions fundamentally changed how economic growth occurred in the past, then it may be a mistake to model global productivity as a continuous system dating back thousands of years. In fact, if you took a look at population growth, a data set that is also long-lived and grows at a high rate, the growth rate fundamentally changed over time. Given the magnitude of systemic economic changes of the past few centuries, modeling the global economy as continuous from 10,000 BCE to now may not give us good predictions. The outside view becomes less useful at this distance.

Fair, but at the same time, this undercuts the argument that we should prioritize economic growth as something that will yield social dividends indefinitely into the future. If our society has fundamentally transformed so that marginal economic growth in 1000 BC makes little difference to our lives, then it seems likely that marginal economic growth today will make little difference to our descendants in 2500 AD.

It's possible that we've undergone discontinuous shifts in the past but will not in the future. Just seems unlikely.

Thanks for writing this. I'd love to see your napkin math

Assume that a social transition is expected in 40 years and that the post-transition society has 4 times as much welfare per year as the pre-transition society. Also assume that society will last for 1,000 years after the transition.

Increasing the rate of economic growth by a few percent might increase our welfare pre-transition by 5% and move up the transition by 2 years.

Then the welfare gain from the acceleration is roughly (0.05*35)+(3*2) ≈ 8: about 5% more welfare over the ~35 remaining pre-transition years, plus two extra years at the post-transition level (a gain of 4 - 1 = 3 welfare units per year).

Future welfare without the acceleration is 40+(4*1000)=4040, so a gain of 8 is equivalent to reducing existential risk by about 0.2%.

Obviously the numbers are almost arbitrary but you should see the concepts at play.

If you think about a longer-run future, the tradeoff becomes very different, with existential risk being far more important.

If society instead lasts for 1 million years after the transition, the equivalent is about 0.0002% of existential risk.
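Expressed as a tiny script, the same napkin math looks like this (all figures, including the 35 remaining pre-transition years and the 2-year speed-up, are the near-arbitrary assumptions from the calculation above):

```python
# Napkin math from the comment above. Welfare is measured in
# "pre-transition society-years"; a post-transition year is worth 4 of them.
# All numbers are the same illustrative assumptions as in the text.

pre_years = 40            # years until the transition
post_years = 1_000        # years society lasts after the transition
post_multiplier = 4       # post-transition welfare per year

baseline = pre_years * 1 + post_years * post_multiplier          # = 4040

# Acceleration: ~5% more welfare over the ~35 remaining pre-transition years,
# plus the transition arrives 2 years sooner (each such year gains 4 - 1 = 3).
gain = 0.05 * 35 + (post_multiplier - 1) * 2                     # ≈ 8

print(f"Baseline future welfare:     {baseline}")
print(f"Gain from acceleration:      {gain:.2f}")
print(f"Equivalent x-risk reduction: {gain / baseline:.2%}")

# Same gain measured against a 1,000,000-year post-transition future:
long_baseline = pre_years * 1 + 1_000_000 * post_multiplier
print(f"With a million-year future:  {gain / long_baseline:.4%}")
```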
