I've cross-posted the beginning of this post so people can see what it's about, but I haven't reformatted the entire post. To see the full version, visit the link above. 

In arriving at our funding priorities—including criminal justice reform, farm animal welfare, pandemic preparedness, health-related science, and artificial intelligence safety—Open Philanthropy has pondered profound questions. How much should we care about people who will live far in the future? Or about chickens today? What events could extinguish civilization? Could artificial intelligence (AI) surpass human intelligence?

One strand of analysis that has caught our attention is about the pattern of growth of human society over many millennia, as measured by number of people or value of economic production. Perhaps the mathematical shape of the past tells us about the shape of the future. I dug into that subject. A draft of my technical paper is here. (Comments welcome.) In this post, I’ll explain in less technical language what I learned.

It’s extraordinary that the larger the human economy has become—the more people and the more goods and services they produce—the faster it has grown on average. Now, especially if you’re reading quickly, you might think you know what I mean. And you might be wrong, because I’m not referring to exponential growth. That happens when, for example, the number of people carrying a virus doubles every week. Then the growth rate (100% increase per week) holds fixed. The human economy has grown super-exponentially. The bigger it has gotten, the faster it has doubled, on average. The global economy churned out $74 trillion in goods and services in 2019, twice as much as in 2000.[1] Such a quick doubling was unthinkable in the Middle Ages and ancient times. Perhaps our earliest doublings took millennia.
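The distinction can be made concrete with a toy simulation (illustrative parameters only, not fitted to historical data): under exponential growth the doubling time is constant, while under super-exponential growth each doubling arrives faster than the last.

```python
# Toy contrast between exponential and super-exponential growth.
# Parameters are illustrative, not fitted to historical GWP data.

def simulate(rate_fn, y0=1.0, dt=0.001, t_max=20.0, y_cap=1e6):
    """Euler-integrate dy/dt = rate_fn(y); return lists of times and sizes."""
    ts, ys = [0.0], [y0]
    t, y = 0.0, y0
    while t < t_max and y < y_cap:
        y += rate_fn(y) * dt
        t += dt
        ts.append(t)
        ys.append(y)
    return ts, ys

def doubling_times(ts, ys):
    """How long each successive doubling of y took."""
    out, target, t_prev = [], 2.0 * ys[0], ts[0]
    for t, y in zip(ts, ys):
        if y >= target:
            out.append(t - t_prev)
            t_prev, target = t, 2.0 * y
    return out

# Exponential: dy/dt = 0.5*y -> every doubling takes about ln(2)/0.5 = 1.39.
exp_doublings = doubling_times(*simulate(lambda y: 0.5 * y))
# Super-exponential: dy/dt = 0.5*y**1.5 -> each doubling is faster than the
# last, and the closed-form solution reaches infinity in finite time.
sup_doublings = doubling_times(*simulate(lambda y: 0.5 * y ** 1.5))
```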

If global economic growth keeps accelerating, the future will differ from the present to a mind-boggling degree. The question is whether there might be some plausibility in such a prospect. That is what motivated my exploration of the mathematical patterns in the human past and how they could carry forward. Having now labored long on the task, I doubt I’ve gained much perspicacity. I did come to appreciate that any system whose rate of growth rises with its size is inherently unstable. The human future might be one of explosion, perhaps an economic upwelling that eclipses the industrial revolution as thoroughly as it eclipsed the agricultural revolution. Or the future could be one of implosion, in which environmental thresholds are crossed or the creative process that drives growth runs amok, as in an AI dystopia. More likely, these impulses will mix.

I now understand more fully a view that shapes the work of Open Philanthropy. The range of possible futures is wide. So it is our task as citizens and funders, at this moment of potential leverage, to lower the odds of bad paths and raise the odds of good ones.

(Read the rest of this post.)


The impression I get from an (I admit, relatively casual) look is that you are saying something along the following lines:

1) there is a big mystery concerning the fact that the rate of growth has been accelerating,

2) you will introduce a novel tool to explain that fact, which is stochastic calculus,

3) using this tool, you arrive at the conclusion that infinite explosion will occur before 2047 with 50% probability.

For starters, as you point out if we read you sufficiently carefully, there is no big mystery in the fact that the rate of growth of humanity has been super-exponential. This can be explained simply by assuming that innovation is an important component of the growth rate, and that the amount of innovation effort is not constant but grows with the size of the population, perhaps in proportion to it. So if you decide that this is your model of the world, and that the growth rate is proportional to innovation effort, then you write down some simple math and conclude that infinite explosion will occur at some point in the near future. This has been pointed out numerous times.

For instance, as you note (if we read you carefully), Michael Kremer (1993) checked that, going back as far as a million years, the idea that the population growth rate is roughly proportional to (some positive power of) the population size gives a good fit to the data up to maybe a couple of centuries ago. And then we know the model stops working, because for some reason, at some level of income, people stop transforming economic advancement into having more children. I don't think we should ponder for long the fact that a model that fit past data well stopped working at some point. That seems to me to be the natural fate of models of the early growth of anything. So instead of speculating about this, Kremer adjusts his model to make it more realistic.
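The math alluded to here is compact enough to check directly: if the growth rate is proportional to a positive power of the size, the solution reaches infinity in finite time. A sketch with made-up parameters (not Kremer's estimates):

```python
# If dP/dt = a * P**(1 + b) with b > 0 (growth rate proportional to P**b),
# the exact solution is P(t) = (P0**(-b) - a*b*t)**(-1/b), which diverges
# at the finite time t* = P0**(-b) / (a * b).
# Parameters are made up for illustration, not Kremer's estimates.

def blowup_time(p0, a, b):
    return p0 ** (-b) / (a * b)

def population(t, p0, a, b):
    return (p0 ** (-b) - a * b * t) ** (-1.0 / b)

p0, a, b = 1.0, 0.1, 0.5
t_star = blowup_time(p0, a, b)       # 20.0 with these parameters
print(population(19.0, p0, a, b))    # already 400x the starting size
print(population(19.9, p0, a, b))    # 40,000x and climbing without bound
```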

It is of course legitimate to argue that human progress over recent times is not best captured by population size, and that gross world product may be a better measure. For this measure, we have less direct evidence that a slowdown of the "naive model" is coming (by "naive model" I mean the model in which you just fit growth with a power law, without any further adjustment), although I do find works such as this or this quite convincing that future trends will be slower than what the "naive" model would say.

After reading a (very small) bit of your technical paper, my sense is that your main contribution is that you fixed a small inconsistency in how we go about estimating the parameters of the "naive model". I don't deny that this is a useful technical contribution, but I believe that this is what it is: a technical contribution. I don't think that it brings any new insight into questions such as, for instance, whether or not there will indeed be a near-infinite explosion of human development in the near future.

I am not comfortable with the fact that, in order to convey the idea of introducing randomness into the "naive model", you invoke "E = mc²", the introduction of calculus by Newton and Leibniz, the work of Nobel prize winners, or the fact that "you experienced something like what [this Nobel prize winner] experienced, except for the bits about winning a Nobel". Introducing some randomness into a model is, in my opinion, a relatively common thing to do. That is, once we have a deterministic model that we find relatively plausible and that we want to refine somewhat.

From Vox's Future Perfect newsletter:

One of the earliest editions of this newsletter was a barely disguised letter of appreciation about David Roodman, a senior adviser at the Open Philanthropy Project with the unusual job description of compiling evidence on big, broad, important issues.


In the past, that’s meant conducting highly rigorous reviews of the data on questions like “Do alcohol taxes save lives?” or “Does releasing people from prison increase crime?” — up to and including breaking down the component studies of those reviews to see if their methodology holds up to scrutiny.


Roodman’s latest project is even bigger: He’s just released a new draft paper with the title “Modeling the Human Trajectory.” The very modest goal is to model how human economies have evolved since 10,000 BCE. You know, easy stuff.


You can read Roodman’s topline conclusions in this blog post, but unlike his previous work, this isn’t really a paper where the conclusions are the most important part.


Debate on social scientific questions is never “settled,” but Roodman’s past papers were useful because they came to relatively assured conclusions that it would take considerable subsequent research to overturn.


Reading his crime paper, I came away with a strong belief that we can reduce incarceration in the US without increasing crime. Reading his alcohol tax paper, I came away with a strong belief that raising alcohol taxes will save lives, and cutting alcohol taxes will cause unnecessary deaths.


Reading his paper on the “human trajectory,” I came away with the belief that … human history is a beautiful mystery that we are only just beginning to unravel.


Roodman first tries to see, descriptively, how the growth rate of the world economy has evolved from 10,000 BCE to the present. He finds that it's best represented by a power law: that is, the economy hasn't grown at a steady exponential rate, but instead the rate at which the world economy grows has itself increased as the economy has gotten bigger.


If we follow this trajectory, that implies infinite or near-infinite growth over the next few thousand years. These are the kinds of projections that led environmentally minded economists and "systems thinkers" in the 1960s and '70s to project some kind of bust; you can't have infinite growth on a finite planet, as the slogan goes.


Those projections haven't come to fruition yet, but that doesn't mean the intuition is wrong, Roodman argues, asking: "What should we make of the fact that good models of the past project an impossible future?"


One possibility he explores is that this future is not in fact impossible. If, as the endogenous growth theory of economists like Paul Romer and Charles Jones has suggested in recent decades, technology, not raw materials, is the main driver of economic growth in the long run, maybe there isn’t a limit. “Technology” is just a term of art we use for human ideas and inventiveness, which doesn’t obey the restrictions of scarcity that apply to coal or rare-earth metals.


I am condensing a really dense and fascinating blog post about an even denser and more fascinating white paper that I absolutely do not have the math skills to fully grok, so forgive me if some is lost in translation. But I really encourage you to dive into Roodman's post. He doesn't arrive at a firm prediction of the future, naturally, but instead two much more modest observations:


“First, if the patterns of history continue, then some sort of economic explosion will take place again, the most plausible channel being AI. It wouldn’t reach infinity, but it could be big. Second, and more generally, I take the propensity for explosion as a sign of instability in the human trajectory. Gross world product, as a rough proxy for the scale of the human enterprise, might someday spike or plunge or follow complicated paths in between. The projections of explosion should be taken as indicators of the long-run tendency of the human system to diverge.”


My takeaway from the piece is arguably tautological but I think still useful: The difference between our best possible future and our worst possible future could be quite literally infinite. It could be the difference between civilization ending and the kind of abundance only dreamed of in science fiction.


That makes anything we can do to shape that long-term trajectory of humanity almost indescribably important. The hard part is finding out what, if anything, can reliably make that trajectory better.


—Dylan Matthews

The latest edition of the Alignment Newsletter includes a good summary of Roodman's post, as well as brief comments by Nicholas Joseph and Rohin Shah:

Modeling the Human Trajectory (David Roodman) (summarized by Nicholas): This post analyzes the human trajectory from 10,000 BCE to the present and considers its implications for the future. The metric used for this is Gross World Product (GWP), the sum total of goods and services produced in the world over the course of a year.
Looking at GWP over this long stretch leads to a few interesting conclusions. First, until 1800, most people lived near subsistence levels. This means that growth in GWP was primarily driven by growth in population. Since then population growth has slowed and GWP per capita has increased, leading to our vastly improved quality of life today. Second, an exponential function does not fit the data well at all. In an exponential function, the time for GWP to double would be constant. Instead, GWP seems to be doubling faster, which is better fit by a power law. However, the conclusion of extrapolating this relationship forward is extremely rapid economic growth, approaching infinite GWP as we near the year 2047.
Next, Roodman creates a stochastic model in order to analyze not just the modal prediction, but also get the full distribution over how likely particular outcomes are. By fitting this to only past data, he analyzes how surprising each period of GWP growth was. This finds that the industrial revolution and the period after it were above the 90th percentile of the model’s distribution, corresponding to surprisingly fast economic growth. Analogously, the past 30 years have seen anomalously low growth, around the 25th percentile. This suggests that the model's stochasticity does not appropriately capture the real world -- while a good model can certainly be "surprised" by high or low growth during one period, it should probably not be consistently surprised in the same direction, as happens here.
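For readers curious what a stochastic model means here, a heavily simplified toy (not Roodman's actual fitted model) adds multiplicative noise to the super-exponential growth law and looks at how widely the timing of an "explosion" spreads across simulated paths:

```python
import random

# Toy stochastic version of super-exponential growth (NOT Roodman's fitted
# model): dy = a*y**(1+b)*dt + sigma*y*dW, simulated by Euler-Maruyama.
# With sigma = 0 and these parameters, the path explodes at exactly t = 20;
# noise spreads that date out into a distribution.

def explosion_time(rng, y0=1.0, a=0.1, b=0.5, sigma=0.15,
                   dt=0.01, y_cap=1e6, t_max=100.0):
    """Return the first time y exceeds y_cap (roughly t_max if it never does)."""
    t, y = 0.0, y0
    while t < t_max and y < y_cap:
        dw = rng.gauss(0.0, dt ** 0.5)
        y = max(y + a * y ** (1 + b) * dt + sigma * y * dw, 1e-9)
        t += dt
    return t

rng = random.Random(0)
times = sorted(explosion_time(rng) for _ in range(200))
median, p10, p90 = times[100], times[20], times[180]
# The gap between the 10th and 90th percentiles shows how much timing
# uncertainty randomness alone introduces, with parameters held fixed.
```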
In addition to looking at the data empirically, he provides a theoretical model for how this accelerating growth can occur by generalizing a standard economic model. Typically, the economic model assumes technology is a fixed input or has a fixed rate of growth and does not allow for production to be reinvested in technological improvements. Once reinvestment is incorporated into the model, then the economic growth rate accelerates similarly to the historical data.
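A minimal sketch of that mechanism (a deliberately stripped-down toy, not the generalized model in the paper): compare output Y = A·L when technology A grows at a fixed exogenous rate versus when a share of output is plowed back into A.

```python
# Toy comparison (not the paper's model): output Y = A * L, labor L grows at
# rate n. Case 1: technology A grows at a fixed exogenous rate g, so the
# growth rate of Y is constant at g + n. Case 2: a share s of output is
# reinvested in technology, dA/dt = s * Y = s * A * L, so A's growth rate
# s * L rises as L rises -- growth accelerates.

def growth_path(reinvest, steps=200, dt=0.05, s=0.1, g=0.02, n=0.02):
    A, L = 1.0, 1.0
    rates = []
    for _ in range(steps):
        dA = (s * A * L if reinvest else g * A) * dt
        dL = n * L * dt
        rates.append(dA / (A * dt) + dL / (L * dt))  # growth rate of Y = A*L
        A += dA
        L += dL
    return rates

exogenous = growth_path(reinvest=False)   # flat: every entry is g + n = 0.04
endogenous = growth_path(reinvest=True)   # rising: s*L + n grows with L
```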
Nicholas's opinion: I found this paper very interesting and was quite surprised by its results. That said, I remain confused about what conclusions I should draw from it. The power law trend does seem to fit historical data very well, but the past 70 years are fit quite well by an exponential trend. Which one is relevant for predicting the future, if either, is quite unclear to me.
The theoretical model proposed makes more sense to me. If technology is responsible for the growth rate, then reinvesting production in technology will cause the growth rate to be faster. I'd be curious to see data on what fraction of GWP gets reinvested in improved technology and how that lines up with the other trends.
Rohin’s opinion: I enjoyed this post; it gave me a visceral sense for what hyperbolic models with noise look like (see the blog post for this, the summary doesn’t capture it). Overall, I think my takeaway is that the picture used in AI risk of explosive growth is in fact plausible, despite how crazy it initially sounds. Of course, it won’t literally diverge to infinity -- we will eventually hit some sort of limit on growth, even with “just” exponential growth -- but this limit could be quite far beyond what we have achieved so far. See also this related post.

I really enjoyed the blogpost, and think it's really valuable work, but have been somewhat dismayed to see virtually no discussion of the final part of the post, which is the first time the author attempts to include an admittedly rough term describing finite resources in the model. It... does not go well.

Given a lot of us are worried about x-risk, this seems to urgently merit further study.
