
tl;dr: AGI takeoff could start either with agents improving themselves (an intelligence explosion) or with agents duplicating themselves and acquiring more resources (a "quantity explosion"). Models from economic growth theory may shed light on which is more likely.


Main idea

Galor & Weil's Unified Growth Theory (UGT) gives a model of the drivers of economic growth throughout history. Its main insight is that until relatively recently it was more worthwhile for parents to have more children than to invest in their education; as technological progress raised the returns to education, this trade-off flipped, driving a transition from a Malthusian equilibrium to a modern growth equilibrium dominated by technological progress.
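The quantity-quality mechanism can be sketched with a toy model. The functional forms and parameter values below are my own illustrative simplifications, not Galor & Weil's actual model: parents split a fixed budget between the number of children and education per child, and education pays off only insofar as technology is changing fast enough to reward it.

```python
# Toy quantity-quality tradeoff in the spirit of UGT.
# Illustrative assumptions: a parent with budget 1 affords
# n = 1 / (c0 + e) children at base cost c0 plus education cost e each,
# and each child's human capital is h = 1 + g * e, where g is the rate
# of technological change. Parents maximize total human capital n * h.

def optimal_education(g, c0=0.5):
    """Return the education level e in [0, 1] maximizing n * h."""
    e_grid = [i / 100 for i in range(0, 101)]
    return max(e_grid, key=lambda e: (1 + g * e) / (c0 + e))

print(optimal_education(g=0.5))  # slow tech change: invest nothing in education
print(optimal_education(g=4.0))  # fast tech change: invest the maximum
```

In this stylized version the optimum jumps from zero education to maximal education once technological change crosses a threshold, which is the flavor of the Malthusian-to-modern transition, though the real UGT model derives the transition endogenously rather than from an exogenous `g`.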

AGI takeoff can perhaps be modeled in a roughly similar way, to help us understand the early days of takeoff. Rather than trying to estimate takeoff timelines, we can try to understand whether takeoff is more likely to be initially driven by an intelligence explosion or by a quantity explosion.

What could a quantity explosion look like? Say OpenMind has developed a powerful AI agent with broadly human-level general intelligence. It's not yet superintelligent, but it's smart enough to improve itself, acquire resources, and understand its own situation. If it is power-seeking, it will try to do both. But self-improvement may require access to massive amounts of (a specific type of) compute and data, and carries risks of value drift or of breaking in some way, so the agent decides to first duplicate itself many times across the internet. If it succeeds, it ends up with a large number of copies that can then work together to acquire resources and improve themselves.

This is a "quantity explosion" scenario: the number of AI agents grows exponentially while the intelligence of each agent stays roughly constant. It contrasts with an "intelligence explosion" scenario, in which the intelligence of each agent grows exponentially while the number of agents stays roughly constant. Each scenario calls for different safety measures: in a quantity explosion, prosaic alignment and containment may matter more than solving inner alignment and understanding unbounded rational agents; in an intelligence explosion, the reverse.
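The distinction between the two regimes can be made concrete with a toy simulation. This is a sketch under illustrative assumptions: the growth rates and the `N * I` capability measure are my own simplifications, not taken from the post's notebook.

```python
# Toy comparison of the two takeoff regimes described above.

def total_capability(steps, dup_rate, improve_rate):
    """Evolve agent count N and per-agent intelligence I over discrete
    steps, and return total capability, crudely modeled as N * I."""
    n, i = 1.0, 1.0
    for _ in range(steps):
        n *= 1.0 + dup_rate      # duplication: copies spreading to new hardware
        i *= 1.0 + improve_rate  # self-improvement of each individual agent
    return n * i

# Quantity explosion: duplication only. Intelligence explosion: improvement only.
quantity = total_capability(steps=20, dup_rate=0.5, improve_rate=0.0)
intelligence = total_capability(steps=20, dup_rate=0.0, improve_rate=0.5)
```

With symmetric growth rates, the two regimes produce identical total capability, which suggests the interesting modeling question is not which channel grows faster in the abstract but which is cheaper and less risky: duplication needs only ordinary hardware, while self-improvement needs specialized compute and data and risks value drift, as described above.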

What now?

Under the guidance of economist @Niki Kotsenko, I've started drafting and solving some candidate models in this notebook. I'm not sure whether I'll keep working on this, since I'm prioritizing other projects, but I think it's a worthwhile line of work, and I encourage others to explore it (perhaps people at Epoch or other independent researchers).

In practice, the models I've tried are only vaguely similar to UGT, and were mostly developed from first principles.

I'd be most interested in ideas on:

  1. Whether the intelligence vs quantity explosion is a useful distinction that makes sense. (Also, links to previously written materials on this topic?)
  2. Whether investigating this is an important research direction that's action-relevant for AI safety work.
  3. Whether this kind of macroeconomic approach to modeling AI takeoff makes sense.

And, as I said above, I welcome anyone to continue exploring this and I'll gladly help out however I can.




