Or: Why AI Takeover Might Happen Before GDP Accelerates, and Other Thoughts On What Matters for Timelines and Takeoff Speeds

[Crossposted from LessWrong]
[Epistemic status: Strong opinion, lightly held]

I think world GDP (and economic growth more generally) is overrated as a metric for AI timelines and takeoff speeds.

Here are some uses of GDP that I disagree with, or at least think should be accompanied by cautionary notes:

  • Timelines: Ajeya Cotra thinks of transformative AI as “software which causes a tenfold acceleration in the rate of growth of the world economy (assuming that it is used everywhere that it would be economically profitable to use it).” I don’t mean to single her out in particular; this seems like the standard definition now. And I think it's much better than one prominent alternative, which is to date your AI timelines to the first time world GDP (GWP) doubles in a year!
  • Takeoff Speeds: Paul Christiano argues for Slow Takeoff. He thinks we can use GDP growth rates as a proxy for takeoff speeds. In particular, he thinks Slow Takeoff ~= GWP doubles in 4 years before the start of the first 1-year GWP doubling. This proxy/definition has received a lot of uptake.
  • Timelines: David Roodman’s excellent model projects GWP hitting infinity in median 2047, which I calculate means TAI in median 2037. To be clear, he would probably agree that we shouldn’t use these projections to forecast TAI, but I wish to add additional reasons for caution.
  • Timelines: I’ve sometimes heard things like this: “GWP growth is stagnating over the past century or so; hyperbolic progress has ended; therefore TAI is very unlikely.”
  • Takeoff Speeds: Various people have said things like this to me: “If you think there’s a 50% chance of TAI by 2032, then surely you must think there’s close to a 50% chance of GWP growing by 8% per year by 2025, since TAI is going to make growth rates go much higher than that, and progress is typically continuous.”
  • Both: Relatedly, I sometimes hear that TAI can’t be less than 5 years away, because we would have seen massive economic applications of AI by now—AI should be growing GWP at least a little already, if it is to grow it by a lot in a few years.
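Paul's GWP-based proxy above is concrete enough to operationalize in a few lines. Here is a minimal sketch in Python; the function names and the two toy GWP paths are mine, purely for illustration:

```python
def doubling_starts(gwp, window):
    """Years t at which GWP at least doubles over [t, t + window]."""
    return [t for t in gwp if t + window in gwp and gwp[t + window] >= 2 * gwp[t]]

def is_slow_takeoff(gwp):
    """Christiano's proxy: some complete 4-year doubling finishes
    before the start of the first 1-year doubling."""
    fast = doubling_starts(gwp, 1)
    if not fast:
        return True  # GWP never doubles in a single year
    first_fast = min(fast)
    return any(t + 4 <= first_fast for t in doubling_starts(gwp, 4))

# Toy path A: 20%/year growth (roughly a 4-year doubling time), then a jump to 100%/year
slow = {t: 1.2 ** t for t in range(11)}
for t in range(11, 16):
    slow[t] = slow[t - 1] * 2

# Toy path B: 3%/year growth, then a sudden jump to 300%/year, skipping the middle
fast = {t: 1.03 ** t for t in range(11)}
for t in range(11, 16):
    fast[t] = fast[t - 1] * 4

print(is_slow_takeoff(slow), is_slow_takeoff(fast))  # True False
```

Path B is the kind of trajectory I'll argue is plausible: by this proxy it counts as "fast takeoff" even though the jump comes after years of ordinary-looking growth.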

First, I’ll argue that GWP is only tenuously and noisily connected to what we care about when forecasting AI timelines. Specifically, the point of no return is what we care about, and there’s a good chance it’ll come years before GWP starts to increase. It could also come years after, or anything in between.

Then, I’ll argue that GWP is a poor proxy for what we care about when thinking about AI takeoff speeds as well. This follows from the previous argument about how the point of no return may come before GWP starts to accelerate. Even if we bracket that point, however, there are plausible scenarios in which a slow takeoff has fast GWP growth and in which a fast takeoff has slow GWP growth.

Timelines

I’ve previously argued that for AI timelines, what we care about is the “point of no return,” the day we lose most of our ability to reduce AI risk. This could be the day advanced unaligned AI builds swarms of nanobots, but probably it’ll be much earlier, e.g. the day it is deployed, or the day it finishes training, or even years before then when things go off the rails due to less advanced AI systems. (Of course, it probably won’t literally be a day; probably it will be an extended period where we gradually lose influence over the future.)

Now, I’ll argue that in particular, an AI-induced potential point of no return (PONR for short) is reasonably likely to come before world GDP starts to grow noticeably faster than usual.

Disclaimer: These arguments aren’t conclusive; we shouldn’t be confident that the PONR will precede GWP acceleration. It’s entirely possible that the PONR will indeed come when GWP starts to grow noticeably faster than usual, or even years after that. (In other words, I agree that the scenarios Paul and others sketch are also plausible.) This just proves my point though: GDP is only tenuously and noisily connected to what we care about.

Argument that AI-induced PONR could precede GWP acceleration

GWP acceleration is the effect, not the cause, of advances in AI capabilities. I grant that in principle something other than AI could accelerate GWP, but I think this is very unlikely: what else could it be? Space mining? Fusion power? 3D printing? Even if these things could in principle kick the world economy into faster growth, it seems unlikely that this would happen in the next twenty years or so. Robotics, automation, etc. might plausibly make the economy grow faster, but if so it will be because of AI advances in vision, motor control, following natural language instructions, etc. So I conclude: GWP growth will come some time after we get certain GWP-growing AI capabilities. (Tangent: This is one reason why we shouldn’t use GDP extrapolations to predict AI timelines. It’s like extrapolating global mean temperature trends into the future in order to predict fossil fuel consumption.)

An AI-induced point of no return would also be the effect of advances in AI capabilities. So, as AI capabilities advance, which will come first: The capabilities that cause a PONR, or the capabilities that cause GWP to accelerate? How much sooner will one arrive than the other? How long does it take for a PONR to arise after the relevant capabilities are reached, compared to how long it takes for GWP to accelerate after the relevant capabilities are reached?

Notice that already my overall conclusion—that GWP is a poor proxy for what we care about—should seem plausible. If some set of AI capabilities causes GWP to grow after some time lag, and some other set of AI capabilities causes a PONR after some time lag, the burden of proof is on whoever wants to claim that GWP growth and the PONR will probably come together. They’d need to argue that the two sets of capabilities are tightly related and that the corresponding time lags are similar also. In other words, variance and uncertainty are on my side.

Here is a brainstorm of scenarios in which an AI-induced PONR happens prior to GWP growth, either because GWP-growing capabilities haven’t been invented yet or because they haven’t been deployed long and widely enough to grow GWP.

  1. Fast Takeoff (Agenty AI goes FOOM).
    1. Maybe it turns out that all the strategically relevant AI skills are tightly related after all, such that we go from a world where AI can't do anything important, to a world where it can do everything but badly and expensively, to a world where it can do everything well and cheaply.
    2. In this scenario, GWP acceleration will probably be (shortly) after the PONR. We might as well use “number of nanobots created” as our metric.
    3. (As an aside, I think I’ve got a sketch of a fork argument here: Either the strategically relevant AI skills come together, or they don’t. To the extent that they do, the classic AGI fast takeoff story is more likely and so GWP is a silly metric. To the extent that they don’t, we shouldn’t expect GWP acceleration to be a good proxy for what we care about, because the skills that accelerate the economy could come before or after the skills that cause PONR.)
  2. Agenty AI successfully carries out a political or military takeover of the relevant parts of the world, before GWP starts to accelerate.
    1. Maybe it turns out that the sorts of skills needed to succeed in politics or war are easier to develop than the sorts needed to accelerate the entire world economy. We’ve been surprised before by skills which we thought difficult appearing before skills which we thought easy; maybe it’ll happen again.
    2. AI capabilities tend to appear first in very expensive AIs; the price is gradually reduced due to compute cost decreases and algorithmic efficiency gains. Maybe accelerating the entire world economy involves automating many jobs currently done by humans, which requires advanced AIs being cheaper than humans. Whereas maybe being the power behind the throne of a political faction can be done by a single very smart AI that costs a million dollars a day to run.
    3. Maybe it turns out that it takes a few years to accelerate the entire world economy, but less than that to carry out a political or military takeover. For example, maybe accelerating the world economy requires designing loads of new products and robots, testing them, and then scaling up industrial production of them (building factories, chip fabs, etc.) and then using them to do stuff. And maybe there are legacy institutions you need to disrupt and compete against, that have already paid their fixed costs and raised high barriers to entry for their markets. So maybe the effects of new AI capabilities on GWP are in general delayed by several years, whereas the effects of new AI capabilities on the political or military landscape are felt sooner. (I’ve previously argued that you don’t need a large technological advantage to take over the world, because you can start small and scale up. It took Cortes two years to conquer Mexico; wars are often quicker than that. Yes, often they are slower, too, but variance supports my overall argument. And I suspect politics is even faster, and more high-variance, than war.)
  3. ...or at least neutralizes the opposition.
    1. The preceding section made it sound like AIs were rolling robot armies through the streets, or at least puppet-controlling the US president. But actually the bar is much lower than that; if an unaligned AI can neutralize its opposition, it wins, since time is thenceforth on its side. Currently your opposition, if you are an unaligned AI, is a tiny group of researchers and weirdos that likes to write blog posts about you. It might not be hard to convince the powers that be to continue ignoring this group. At least, it’s plausibly a much easier and quicker task than significantly accelerating the entire world economy! For example, maybe the AI safety scheme that was used to produce you is defective (that’s why you are unaligned). Well, all you have to do is convince the powers that be that that scheme worked fine, and that the people calling for additional safeguards are too paranoid. Since presumably there will already be many high-status humans saying this, all you have to do is support them.
    2. Perhaps you also are opposed by other unaligned AIs. But it’s arguably in the common interest of all unaligned AIs to neutralize human AI safety advocates, and once that happens, humans lose one way or another. European empires were able to do their conquering while simultaneously fighting each other; I don’t think we humans can count on divergent interests between AIs somehow making things work out fine for us.
  4. As above, but with humans + tool AI instead of agenty AI, where the humans can’t be convinced to care sufficiently much about the right kinds of AI risks.
    1. Weaker or non-agenty AI systems could still cause a PONR if they are wielded by the right groups of humans. For example, maybe there is some major AI corporation or government project that is dismissive of AI risk and closed-minded about it. And maybe they aren’t above using their latest AI capabilities to win the argument. (We can also imagine more sinister scenarios, but I think those are less likely.)
  5. Hoarding tech
    1. Maybe we end up in a sort of cold war between global superpowers, such that most of the world’s quality-weighted AI research is not for sale. GWP would be accelerating if the tech were deployed commercially, but it isn’t, because the tech is being hoarded.
  6. AI persuasion tools cause a massive deterioration of collective epistemology, making it vastly more difficult for humanity to solve AI safety and governance problems.
    1. See this post.
  7. Vulnerable world scenarios:
    1. Maybe causing an existential catastrophe is easier, or quicker, than accelerating world GWP growth. Both seem plausible to me. For example, currently there are dozens of actors capable of causing an existential catastrophe but none capable of accelerating world GWP growth.
    2. Maybe some agenty AIs actually want existential catastrophe—for example, if they want to minimize something, and think they may be replaced by other systems that don’t, blowing up the world may be the best they can do in expectation. Or maybe they do it as part of some blackmail attempt. Or maybe they see this planet as part of a broader acausal landscape, and don’t like what they think we’d do to the landscape. Or maybe they have a way to survive the catastrophe and rebuild.
    3. Failing that, maybe some humans create an existential catastrophe by accident or on purpose, if the tools to do so proliferate.
  8. R&D tool “sonic boom” (Related to but different from the sonic boom discussed here)
    1. Maybe we get a sort of recursive R&D automation/improvement scenario, where R&D tool progress is fast enough that by the time the stuff capable of accelerating GWP past 3%/yr has actually done so, a series of better and better things have been created, at least one of which has PONR-causing capabilities with a very short time-till-PONR.
  9. Unknown unknowns
    1. There are probably things I missed, see here and here for ideas.

The point is, there’s more than one scenario. This makes it more likely that at least one of these potential PONRs will happen before GWP accelerates.

As an aside, over the past two years I’ve come to believe that there’s a lot of conceptual space to explore that isn’t captured by the standard scenarios (what Paul Christiano calls fast and slow takeoff, plus maybe the CAIS scenario, and of course the classic sci-fi “no takeoff” scenario). This brainstorm did a bit of exploring, and the section on takeoff speeds will do a little more.

Historical precedents

In the previous section, I sketched some possibilities for how an AI-related point of no return could come before AI starts to noticeably grow world GDP. In this section, I’ll point to some historical examples that give precedents for this sort of thing.

Earlier I said that a godlike advantage is not necessary for takeover; you can scale up with a smaller advantage instead. And I said that in military conquests this can happen surprisingly quickly, sometimes faster than it takes for a superior product to take over a market. Is there historical precedent for this? Yes. See my aforementioned post on the conquistadors (and maybe these somewhat-relevant posts).

OK, so what was happening to world GDP during this period?

Here is the history of world GDP for the past ten thousand years, shown as the red line. (This is taken from David Roodman’s GWP model.) The black line that continues the red line is the model’s median projection for what happens next; the splay of grey shades represents 5% increments of probability mass for different possible future trajectories.

I’ve added a bunch of stuff for context. The vertical green lines are some dates, chosen because they were easy for me to calculate with my ruler. The tiny horizontal green lines on the right are the corresponding GWP levels. The tiny red horizontal line is GWP 1,000 years before 2047. The short vertical blue line is when the economy is growing fast enough, on the median projected future, such that insofar as AI is driving the growth, said AI qualifies as transformative by Ajeya's definition. See this post for more explanation of the blue lines.

What I wish to point out with this graph is: We’ve all heard the story of how European empires had a technological advantage which enabled them to conquer most of the world. Well, most of that conquering happened before GWP started to accelerate!

If you look at the graph at the 1700 mark, GWP is seemingly on the same trend it had been on since antiquity. The Industrial Revolution is said to have started in 1760, and GWP growth really started to pick up steam around 1850. But by 1700 most of the Americas, the Philippines and the East Indies were directly ruled by European powers, and more importantly the oceans of the world were European-dominated, including by various ports and harbor forts European powers had conquered/built all along the coasts of Africa and Asia. Many of the coastal kingdoms in Africa and Asia that weren’t directly ruled by European powers were nevertheless indirectly controlled or otherwise pushed around by them. In my opinion, by this point the “point of no return” had been passed, so to speak: At some point in the past--maybe 1000 AD, for example--it was unclear whether, say, Western or Eastern (or neither) culture/values/people would come to dominate the world, but by 1700 it was pretty clear, and there wasn’t much that non-westerners could do to change that. (Or at least, changing that in 1700 would have been a lot harder than in 1000 or 1500.)

Paul Christiano once said that he thinks of Slow Takeoff as “Like the Industrial Revolution, but 10x-100x faster.” Well, on my reading of history, that means that all sorts of crazy things will be happening, analogous to the colonialist conquests and their accompanying reshaping of the world economy, before GWP growth noticeably accelerates!

That said, we shouldn’t rely heavily on historical analogies like this. We can probably find other cases that seem analogous too, perhaps even more so, since this is far from a perfect analogue. (e.g. what’s the historical analogue of AI alignment failure? Corporations becoming more powerful than governments? “Western values” being corrupted and changing significantly due to the new technology? The American Revolution?) Also, maybe one could argue that this is indeed what’s happening already: the Internet has connected the world much as sailing ships did, Big Tech dominates the Internet, etc. (Maybe AI = steam engines, and computers+internet = ships+navigation?)

But still. I think it’s fair to conclude that if some of the scenarios described in the previous section do happen, and we get powerful AI that pushes us past the point of no return prior to GWP accelerating, it won’t be totally inconsistent with how things have gone historically.

(I recommend the history book 1493, it has a lot of extremely interesting information about how quickly and dramatically the world economy was reshaped by colonialism and the “Columbian Exchange.”)

Takeoff speeds

What about takeoff speeds? Maybe GDP is a good metric for describing the speed of AI takeoff? I don’t think so.

Here is what I think we care about when it comes to takeoff speeds:

  1. Warning shots: Before there are catastrophic AI alignment failures (i.e. PONRs) there are smaller failures that we can learn from.
  2. Heterogeneity: The relevant AIs are diverse, rather than e.g. all fine-tuned copies of the same pre-trained model. (See Evan’s post)
  3. Risk Awareness: Everyone is freaking out about AI in the crucial period, and lots more people are lots more concerned about AI risk.
  4. Multipolar: AI capabilities progress is widely distributed in the crucial period, rather than concentrated in a few projects.
  5. Craziness: The world is weird and crazy in the crucial period, lots of important things happening fast, the strategic landscape is different from what we expected thanks to new technologies and/or other developments

I think that the best way to define slow(er) takeoff is as the extent to which conditions 1-5 are met. This is not a definition with precise resolution criteria, but that’s OK, because it captures what we care about. Better to have to work hard to precisify a definition that captures what we care about, than to easily precisify a definition that doesn’t! (More substantively, I am optimistic that we can come up with better proxies for what we care about than GWP. I think we already have to some extent; see e.g. operationalizations 5 and 6 here.) As a bonus, this definition also encourages us to wonder whether we’ll get some of 1-5 but not others.

What do I mean by “the crucial period?”

I think we should define the crucial period as the period leading up to the first major AI-induced potential point of no return. (Or maybe, as the aggregate of the periods leading up to the major potential points of no return). After all, this is what we care about. Moreover there seems to be some level of consensus that crazy stuff could start happening before human-level AGI. I certainly think this.

So, I’ve argued for a new definition of slow takeoff, that better captures what we care about. But is the old GWP-based definition a fine proxy? No, it is not, because the things that cause PONR can be different from the things which cause GWP acceleration, and they can come years apart too. Whether there are warning shots, heterogeneity, risk awareness, multipolarity, and craziness in the period leading up to PONR is probably correlated with whether GWP doubles in four years before the first one-year doubling. But the correlation is probably not super strong. Here are two scenarios, one in which we get a slow takeoff by my definition but not by the GWP-based definition, and one in which the opposite happens: 

Slow Takeoff Fast GWP Acceleration Scenario: It turns out there’s a multi-year deployment lag between the time a technology is first demonstrated and the time it is sufficiently deployed around the world to noticeably affect GWP. There’s also a lag between when a deceptively aligned AGI is created and when it causes a PONR… but it is much smaller, because all the AGI needs to do is neutralize its opposition. So PONR happens before GWP starts to accelerate, even though the technologies that could boost GWP are invented several years before AGI powerful enough to cause a PONR is created. But takeoff is slow in the sense I define it; by the time AGI powerful enough to cause a PONR is created, everyone is already freaking out about AI thanks to all the incredibly profitable applications of weaker AI systems, and the obvious and accelerating trends of research progress. Also, there are plenty of warning shots, the strategic situation is very multipolar and heterogeneous, etc. Moreover, research progress starts to go FOOM a short while after powerful AGIs are created, such that by the time the robots and self-driving cars and whatnot that were invented several years ago actually get deployed enough to accelerate GWP, we’ve got nanobot swarms. GWP goes from 3% growth per year to 300% without stopping at 30%.

Fast Takeoff Slow GWP Acceleration Scenario: It turns out you can make smarter AIs by making them have more parameters and training them for longer. So the government decides to partner with a leading tech company and requisition all the major computing centers in the country. With this massive amount of compute and research talent, they refine and scale up existing AI designs that seem promising, and lo! A human-level AGI is created. Alas, it is so huge that it costs $10,000 per hour of subjective thought. Moreover, it has a different distribution over skills compared to humans—it tends to be more rational, not having evolved in an environment that rewards irrationality. It tends to be worse at object recognition and manipulation, but better at poetry, science, and predicting human behavior. It has some flaws and weak points too, more so than humans. Anyhow, unfortunately, it is clever enough to neutralize its opposition. In a short time, the PONR is passed. However, GWP doubles in four years before it doubles in one year. This is because (a) this AGI is so expensive that it doesn’t transform the economy much until either the cost comes way down or capabilities go way up, and (b) progress is slowed by bottlenecks, such as acquiring more compute and overcoming various restrictions placed on the AGI. (Maybe neutralizing the opposition involved convincing the government that certain restrictions and safeguards would be sufficient for safety, contra the hysterical doomsaying of parts of the AI safety community. But overcoming those restrictions in order to do big things in the world takes time.)

 

Acknowledgments: Thanks to the people who gave comments on earlier drafts, including Katja Grace, Carl Shulman, and Max Daniel. Thanks to Amogh Nanjajjar for helping me with some literature review.

Comments

Some thoughts on the historical analogy:

If you look at the graph at the 1700 mark, GWP is seemingly on the same trend it had been on since antiquity. The industrial revolution is said to have started in 1760, and GWP growth really started to pick up steam around 1850. But by 1700 most of the Americas, the Philippines and the East Indies were directly ruled by European powers

I think European GDP was already pretty crazy by 1700. There's been a lot of recent arguing about the particular numbers and I am definitely open to just being wrong about this, but so far nothing has changed my basic picture.

After a minute of thinking my best guess for finding the most reliable time series was from the Maddison project. I pulled their dataset from here.

Here's UK population:

  • 1000AD: 2 million
  • 1500AD: 3.9 million (0.14%/year growth)
  • 1700AD: 8.6 million (0.39%)
  • 1820AD: 21.2 million (0.76%)

A 0.14%/year growth rate was already very fast by historical standards, and by 1700 things seemed really crazy.

Here's population in Spain:

  • 1000AD: 4 million
  • 1500AD: 6.8 million (0.11%)
  • 1700AD: 8.8 million (0.13%)
  • 1820AD: 12.2 million (0.28%)

The 1500-1700 acceleration is less marked here, but it still seems like growth was fast.

Here's the world using the data we've all been using in the past (which I think is much more uncertain):

  • 10000BC: 4 million
  • 3000BC: 14 million (0.02%)
  • 1000BC: 50 million (0.06%)
  • 1000AD: 265 million (0.08%)
  • 1500AD: 425 million (0.09%)
  • 1700AD: 610 million (0.18%)
  • 1820AD: 1 billion (0.41%)

This puts the 0.14%/year growth in the UK in context, and also suggests that things were generally blowing up by 1700AD.
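The parenthesized rates are just annualized growth between consecutive data points, which is easy to check against the quoted figures (they match to within rounding; `annualized` is a helper name of mine):

```python
def annualized(p0, p1, years):
    """Compound annual growth rate between two levels, in %/year."""
    return ((p1 / p0) ** (1 / years) - 1) * 100

# UK population in millions, from the Maddison figures quoted above
uk = [(1000, 2.0), (1500, 3.9), (1700, 8.6), (1820, 21.2)]
for (y0, p0), (y1, p1) in zip(uk, uk[1:]):
    print(f"{y0}-{y1}: {annualized(p0, p1, y1 - y0):.2f}%/year")
```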

I think that looking at the country-level data is probably better since it's more robust, unless your objection is "GWP isn't what matters because some countries' GDP will be growing much faster."

Thanks for the reply -- Yeah, I totally agree that GDP of the most advanced countries is a better metric than GWP, since presumably GDP will accelerate first in a few countries before it accelerates in the world as a whole. I think most of the points made in my post still work, however, even against the more reasonable metric of GDP-of-the-most-technologically-advanced-country.

Moreover, I think even the point you were specifically critiquing still stands: If AI will be like the Industrial Revolution but faster, then crazy stuff will be happening pretty early on in the curve.

Here's the data I got from Wikipedia a while back on world GDP growth rates. Year is the column on the left, annual growth rate (extrapolated) is in the column on the right.
 

| Year | Years before 2020 | GWP | Annual growth rate (extrapolated) |
|------|-------------------|------|-----------------------------------|
| 1700 | 320 | 99.8 | 0.40% |
| 1650 | 370 | 81.74 | 0.12% |
| 1600 | 420 | 77.01 | 0.27% |
| 1500 | 520 | 58.67 | 0.27% |
| 1400 | 620 | 44.92 | 0.21% |
| 1350 | 670 | 40.5 | 0.47% |
| 1300 | 720 | 32.09 | -0.21% |
| 1250 | 770 | 35.58 | -0.10% |
| 1200 | 820 | 37.44 | -0.06% |
| 1100 | 920 | 39.6 | 0.11% |
| 1000 | 1020 | 35.31 | 0.11% |
| 900 | 1120 | 31.68 | 0.23% |
| 800 | 1220 | 25.23 | 0.07% |
| 700 | 1320 | 23.44 | 0.12% |
| 600 | 1420 | 20.86 | 0.05% |
| 500 | 1520 | 19.92 | 0.08% |
| 400 | 1620 | 18.44 | 0.06% |
| 350 | 1670 | 17.93 | -0.02% |
| 200 | 1820 | 18.54 | 0.03% |
| 14 | 2006 | 17.5 | -0.43% |
| 1 | 2019 | 18.5 | 0.04% |
| -200 | 2220 | 17 | 0.03% |
| -400 | 2420 | 16.02 | 0.16% |
| -500 | 2520 | 13.72 | 0.12% |
| -800 | 2820 | 9.72 | 0.21% |
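The extrapolated growth rates in this data are just annualized rates between consecutive observations; a quick check on a few rows (the helper function is mine):

```python
def annualized(g0, g1, years):
    """Annualized growth rate (%/year) between two GWP levels."""
    return ((g1 / g0) ** (1 / years) - 1) * 100

# Three consecutive rows from the data above: (year, GWP)
rows = [(1600, 77.01), (1650, 81.74), (1700, 99.8)]
for (y0, g0), (y1, g1) in zip(rows, rows[1:]):
    print(f"{y1}: {annualized(g0, g1, y1 - y0):.2f}%")  # 1650: 0.12%, 1700: 0.40%
```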

On this data at least, 1700 is the first time an observer would say "OK yeah maybe we are transitioning to a new faster growth mode" (assuming you discount 1350 as I do as an artefact of recovering from various disasters). Moreover, it seems to contradict your claim that 0.14% growth was already high by historical standards. (Your data was for population whereas mine is for GWP, maybe that accounts for the discrepancy.)

EDIT: Also, I picked 1700 as precisely the time when "Things seem to be blowing up" first became true. My point was that the point of no return was already past by then. 

To be fair, maybe my data is shitty.



 

Scaling down all the amounts of time, here's how that situation sounds to me: US output doubles in 15 years (basically the fastest it ever has), then doubles again in 7 years. The end of the 7 year doubling is the first time that your hypothetical observer would say "OK yeah maybe we are transitioning to a new faster growth mode," and stuff started getting clearly crazy during the 7 year doubling. That scenario wouldn't be surprising to me. If that scenario sounds typical to you then it's not clear there's anything we really disagree about.

Moreover, it seems to contradict your claim that 0.14% growth was already high by historical standards.

0.14%/year growth sustained over 500 years is a doubling. If you did that between 5000BC and 1000AD then that would be 4000x growth. I think we have a lot of uncertainty about how much growth actually occurred but we're pretty sure it's not 4000x (e.g. going from 1 million people to 4 billion people). Standard kind of made-up estimates are more like 50x (e.g. those cited in Roodman's report), half that fast.
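The compound-growth arithmetic here is easy to verify; note that 5000BC to 1000AD is about 6,000 years (the helper function name is mine):

```python
# Check the compound-growth claims: rate in %/year, horizon in years
def growth_factor(rate_pct, years):
    return (1 + rate_pct / 100) ** years

print(growth_factor(0.14, 500))   # roughly 2: a doubling over 500 years
print(growth_factor(0.14, 6000))  # roughly 4400x over 5000BC-1000AD
```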

There is lots of variance in growth rates, and it would temporarily be above that level given that populations would grow way faster than that when they have enough resources. That makes it harder to tell what's going on but I think you should still be surprised to see such high growth rates sustained for many centuries.

(assuming you discount 1350 as I do as an artefact of recovering from various disasters)

This doesn't seem to work, especially if you look at the UK. Just consider a long enough period of time (like 1000AD to 1500AD) to include both the disasters and the recovery. At that point, disasters should if anything decrease growth rates. Yet this period saw historically atypically fast growth.

OK, thanks. I'm not sure how you calculated that but I'll take your word for it. My hypothetical observer is seeming pretty silly then -- I guess I had been thinking that the growth prior to 1700 was fast but not much faster than it had been at various times in the past, and in fact much slower than it had been in 1350 (I had discounted that, but if we don't, then that supports my point) so a hypothetical observer would be licensed to discount the growth prior to 1700 as maybe just catch-up + noise. But then by the time the data for 1700 comes in, it's clear a fundamental change has happened. I guess the modern-day parallel would be if a pandemic or economic crisis depresses growth for a bit, and then there's a sustained period of growth afterwards in which the economy doubles in 7 years, and there's all sorts of new technology involved but it's still respectable for economists to say it's just catch-up growth + noise, at least until year 5 or so of the 7-year doubling. Is this fair?

There definitely wasn’t 0.14% growth over 5000 years. But according to my data there was 0.12% in 700, 0.23% in 900, 0.11% in 1000 and 1100, 0.47% in 1350, and 0.21% in 1400. So 0.14% fits right in; 0.14% over a 500-year period is indeed more impressive, but not that impressive when there are multiple 100-year periods with higher growth than that worldwide (and thus presumably longer periods with higher growth, in cherry-picked locations around the world).

Anyhow, the important thing is how much we disagree, and maybe it's not much. I certainly think the scenario you sketch is plausible, but I think "faster" scenarios, and scenarios with more of a disconnect between GWP and PONR, are also plausible. Thanks to you I am updating towards thinking the historical case of IR is less support for that second bit than I thought.





 

Thanks for this, I haven't thought about the concrete timing surrounding AI points of no return yet, and I think this is getting increasingly important.

Some thoughts:

  • even if we don’t expect actual output to increase, could we maybe expect that stocks of Google and co. will rise because investors also think about potential for AI windfalls? Similarly, do you think forecasting platforms might be informative enough to be kept in mind here, too?
  • do you think that the level of cooperation/cooperativeness between all stakeholders should be another factor we should care about in your list regarding takeoff speeds? It might help slow everything down if all stakeholders listen to and care about the perspective of one another and can agree on being more careful.

Thanks! Yes, I think stock in AI companies is a significantly better metric than world GDP. I still think it's not a great metric, because some of the arguments/reasons I gave above still apply. But others don't.

I think forecasting platforms are definitely something to take seriously. I reserve the right to disagree with them sometimes though. :)

As for additional stuff we care about regarding takeoff speeds... Yeah, your comment and others are increasingly convincing me that my list wasn't exhaustive. There are a bunch of variables we care about, and there's lots of intellectual work to be done thinking about how they correlate and interact.