I've cross-posted the beginning of this post so people can see what it's about, but I haven't reformatted the entire post. To see the full version, visit the link above.
In arriving at our funding priorities—including criminal justice reform, farm animal welfare, pandemic preparedness, health-related science, and artificial intelligence safety—Open Philanthropy has pondered profound questions. How much should we care about people who will live far in the future? Or about chickens today? What events could extinguish civilization? Could artificial intelligence (AI) surpass human intelligence?
One strand of analysis that has caught our attention is about the pattern of growth of human society over many millennia, as measured by number of people or value of economic production. Perhaps the mathematical shape of the past tells us about the shape of the future. I dug into that subject. A draft of my technical paper is here. (Comments welcome.) In this post, I’ll explain in less technical language what I learned.
It’s extraordinary that the larger the human economy has become—the more people and the more goods and services they produce—the faster it has grown on average. Now, especially if you’re reading quickly, you might think you know what I mean. And you might be wrong, because I’m not referring to exponential growth. That happens when, for example, the number of people carrying a virus doubles every week. Then the growth rate (100% increase per week) holds fixed. The human economy has grown super-exponentially. The bigger it has gotten, the faster it has doubled, on average. The global economy churned out $74 trillion in goods and services in 2019, twice as much as in 2000.[1] Such a quick doubling was unthinkable in the Middle Ages and ancient times. Perhaps our earliest doublings took millennia.
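To make the contrast precise with a standard illustration (not the precise model in the paper): if output $Y$ grows exponentially, $\dot{Y} = rY$, then the growth rate $\dot{Y}/Y = r$ is constant and the doubling time $\ln 2 / r$ never changes. A simple super-exponential rule is $\dot{Y} = rY^{1+\delta}$ with $\delta > 0$, under which the growth rate $\dot{Y}/Y = rY^{\delta}$ rises as $Y$ grows, so each successive doubling takes less time; setting $\delta = 0$ recovers the exponential case.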
If global economic growth keeps accelerating, the future will differ from the present to a mind-boggling degree. The question is whether there might be some plausibility in such a prospect. That is what motivated my exploration of the mathematical patterns in the human past and how they could carry forward. Having now labored long on the task, I doubt I’ve gained much perspicacity. I did come to appreciate that any system whose rate of growth rises with its size is inherently unstable. The human future might be one of explosion, perhaps an economic upwelling that eclipses the industrial revolution as thoroughly as it eclipsed the agricultural revolution. Or the future could be one of implosion, in which environmental thresholds are crossed or the creative process that drives growth runs amok, as in an AI dystopia. More likely, these impulses will mix.
I now understand more fully a view that shapes the work of Open Philanthropy. The range of possible futures is wide. So it is our task as citizens and funders, at this moment of potential leverage, to lower the odds of bad paths and raise the odds of good ones.
The impression I get from an (admittedly relatively casual) look is that you are saying something along the following lines:
1) there is a big mystery concerning the fact that the rate of growth has been accelerating,
2) you will introduce a novel tool to explain that fact, which is stochastic calculus,
3) using this tool, you arrive at the conclusion that infinite explosion will occur before 2047 with 50% probability.
For starters, as you point out if we read you sufficiently carefully, there is no big mystery in the fact that the growth rate of humanity has been super-exponential. This can be explained simply by assuming that innovation is an important component of the growth rate, and that the amount of innovation effort is itself not constant but grows with the size of the population, perhaps in proportion to it. So if you decide that this is your model of the world, and that the growth rate is proportional to innovation effort, then you write down some simple math and conclude that infinite explosion will occur at some point in the near future. This has been pointed out numerous times.

For instance, as you note (if we read you carefully), Michael Kremer (1993) checked that, going back as far as a million years, the idea that the population growth rate is roughly proportional to (some positive power of) the population size gives a good fit to the data up to maybe a couple of centuries ago. And then we know that the model stops working, because for some reason, at some level of income, people stop converting economic advancement into having more children. I don't think we should ponder for long the fact that a model that fit past data well stopped working at some point. This seems to me to be the natural fate of models of the early growth of anything. So instead of speculating about this, Kremer adjusts his model to make it more realistic.
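To spell out the "simple math" in the standard back-of-the-envelope version of that argument (not Kremer's exact specification): if the growth rate is proportional to a positive power of the population, $\frac{dP}{dt} = aP^{1+\epsilon}$ with $a, \epsilon > 0$, then separating variables gives

$$P(t) = P_0\left[1 - a\epsilon P_0^{\epsilon}(t - t_0)\right]^{-1/\epsilon},$$

which diverges at the finite time $t^{*} = t_0 + 1/(a\epsilon P_0^{\epsilon})$. That is the sense in which the naive model predicts infinite explosion in finite time.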
It is of course legitimate to argue that human progress over recent times is not best captured by population size, and that gross world product may be a better measure. For this measure, we have less direct evidence that a slowdown relative to the "naive model" is coming (by "naive model" I mean the model in which you just fit growth with a power law, without any further adjustment). Although I do find works such as this or this quite convincing that future trends will be slower than what the "naive" model would say.
After reading a (very small) bit of your technical paper, my sense is that your main contribution is that you fixed a small inconsistency in how we go about estimating the parameters of the "naive model". I don't deny that this is a useful technical contribution, but I believe that this is what it is: a technical contribution. I don't think that it brings any new insight into questions such as, for instance, whether or not there will indeed be a near-infinite explosion of human development in the near future.
I am not comfortable with the fact that, in order to convey the idea of introducing randomness into the "naive model", you invoke "E = mc²", the introduction of calculus by Newton and Leibniz, the work of Nobel prize winners, or the fact that "you experienced something like what [this Nobel prize winner] experienced, except for the bits about winning a Nobel". Introducing some randomness into a model, once we have a deterministic model that we find relatively plausible and want to refine somewhat, is, in my opinion, a relatively common thing to do.
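For readers who want a concrete sense of what "introducing some randomness" into the naive model amounts to, here is a minimal sketch: an Euler-Maruyama simulation of a stochastic power-law growth rule $dY = rY^{1+\delta}\,dt + \sigma Y^{1+\delta}\,dB$. The functional form, the parameter values, and the helper name `simulate_paths` are assumptions made for illustration; they are not the specification estimated in the technical paper.

```python
import numpy as np

def simulate_paths(y0=1.0, r=0.02, delta=0.5, sigma=0.05,
                   dt=0.01, t_max=120.0, n_paths=1000, seed=0):
    """Euler-Maruyama simulation of an illustrative stochastic
    power-law growth rule:
        dY = r * Y**(1 + delta) * dt + sigma * Y**(1 + delta) * dB.
    All parameter values are placeholders, not calibrated estimates.
    Paths that overflow, go non-finite, or turn non-positive are
    flagged by setting them to np.inf.
    """
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    t = np.linspace(0.0, t_max, n_steps + 1)
    y = np.full((n_paths, n_steps + 1), float(y0))
    with np.errstate(over="ignore", invalid="ignore"):
        for k in range(n_steps):
            yk = y[:, k]
            scale = yk ** (1.0 + delta)
            shocks = rng.normal(0.0, np.sqrt(dt), n_paths)
            y_next = yk + r * scale * dt + sigma * scale * shocks
            # Flag overflowed, NaN, or non-positive values as path breakdowns.
            y[:, k + 1] = np.where(np.isfinite(y_next) & (y_next > 0.0),
                                   y_next, np.inf)
    return t, y

if __name__ == "__main__":
    t, paths = simulate_paths()
    exploded = np.isinf(paths[:, -1]).mean()
    print(f"Share of paths diverged by t = {t[-1]:.0f}: {exploded:.1%}")
```

One thing this kind of exercise makes clear is why a stochastic treatment yields a probability of explosion by a given date rather than a single predicted date.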
From Vox's Future Perfect newsletter:
The latest edition of the Alignment Newsletter includes a good summary of Roodman's post, as well as brief comments by Nicholas Joseph and Rohin Shah:
I really enjoyed the blogpost, and think it's really valuable work, but have been somewhat dismayed to see virtually no discussion of the final part of the post, where the author for the first time attempts to include an admittedly rough term describing finite resources in the model (a generic form of such a term is sketched below). It... does not go well.
Given a lot of us are worried about x-risk, this seems to urgently merit further study.
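For readers wondering what a finite-resources term can look like, one generic form (an illustrative assumption, not necessarily the specification used in the post) is a damping factor that shuts growth down as output $Y$ approaches a resource ceiling $K$:

$$\dot{Y} = rY^{1+\delta}\left(1 - \frac{Y}{K}\right),$$

so growth still accelerates while $Y \ll K$ but stalls as $Y$ approaches the ceiling.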