
In Open Philanthropy's Report on Whether AI Could Drive Explosive Economic Growth, Tom Davidson writes:

Overall, I place at least 10% probability on advanced AI driving explosive growth [30% GWP growth each year] this century.

While I'm not aware of anyone claiming that, if such 30% annual GWP growth were to occur, the economy would continue to grow at that rate for many years, it occurred to me that at that rate of growth the economy would grow by a factor of roughly 10^11 each century.
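To check the arithmetic: 30% annual growth compounded over 100 years gives a factor of about 2.5 × 10^11, i.e. on the order of 10^11 per century.

```python
# Growth factor after a century of 30% annual GWP growth
annual_growth = 1.30
years = 100
factor = annual_growth ** years
print(f"{factor:.2e}")  # ~2.5e+11, i.e. on the order of 10^11
```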

At that rate of growth the solar system's economy would reach its maximum size (imposed by the amount of matter in the solar system and how efficiently that matter can be used) very quickly, probably within about a century or two.

This leads me to wonder, what exactly is the maximum size economy the solar system can sustain? Are there existing published estimates on this question?

It is clear that energy consumption cannot continue to grow exponentially for much more than 1,000 years. But it might be argued that we can continue to extract ever more economic value from less and less energy, especially with VR. This is discussed in the debate between Robin Hanson and Bryan Caplan, and by Toby Ord in the comments.

See the comment here by Max Daniel:

"(i) there are limits in how much value (whether in an economic or moral sense) we can produce per unit of available energy, and (ii) we will eventually only be able to expand the total amount of available energy subexponentially (there can only be so much stuff in a given volume of space, and the amount of available space is proportional to the speed of light cubed - polynomial rather than exponential growth). [...]"

And:

In 275, 345, and 400 years, [assuming current growth rates of global power demand] we demand all the sunlight hitting land and then the earth as a whole, assuming 20%, 100%, and 100% conversion efficiencies, respectively. In 1350 years, we use as much power as the sun generates. In 2450 years, we use as much as all hundred-billion stars in the Milky Way galaxy.

(Tom Murphy)
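Murphy's headline number can be roughly reproduced with a one-line compound-growth calculation. This sketch assumes current world power demand of about 18 TW, the 2.3% annual growth rate Murphy extrapolates, and a solar output of about 3.8 × 10^26 W; the specific figures are my assumptions, not taken from the post.

```python
import math

# Rough reproduction of Tom Murphy's extrapolation.
# Assumed figures: ~18 TW current world power demand,
# 2.3% annual growth, Sun's output ~3.8e26 W.
current_demand_w = 1.8e13
growth_rate = 0.023
sun_output_w = 3.8e26

# Years until demand equals the Sun's entire output
years_to_match_sun = math.log(sun_output_w / current_demand_w) / math.log(1 + growth_rate)
print(round(years_to_match_sun))  # on the order of 1350 years
```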

 

TL;DR: It seems that Metaculus forecasters believe there is at least a 16-23% chance that our solar system could sustain an economy of at least 10^11 trillion 2020 USD, i.e. one billion times larger than the world economy today.

(This is a lower bound based on what Metaculus thinks there is a 16-23% chance of actually happening by 2200, and assuming that our solar system only accounts for 1/10,000th of the economy that Metaculus thinks will exist with that probability.)

On what basis do I make this inference?

19 forecasters on Metaculus currently give a 16% chance of GWP in 2200 being >10^15 trillion 2020 USD.

12 forecasters on Metaculus (overlapping with the above 19) currently give a 23% chance of GWP in 2200 being >10^15 trillion 2020 USD (on this "big range" version of the question).

This roughly translates to a 16-23% chance that an economy of >10^15 trillion 2020 USD will exist by 2200.

Such an economy might span roughly 10,000 stars if we have a fast takeoff scenario by 2050 and start colonizing our galactic neighborhood at near the speed of light.

Naively, then, I'll assume that 1/10,000th of such an economy would be contained within our solar system, giving a lower bound on how large our solar system's economy can get: 10^15 trillion / 10,000 = 10^11 trillion 2020 USD.

It therefore seems that Metaculus forecasters believe there is at least a 16-23% chance that our solar system could sustain an economy of at least 10^11 trillion 2020 USD, i.e. one billion times larger than the world economy today. (This is a lower bound based on what Metaculus thinks there is a 16-23% chance of actually happening by 2200, and assuming that our solar system only accounts for 1/10,000th of the economy that Metaculus thinks will exist with that probability.)
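The lower-bound calculation can be made explicit. Assumed figures: a 10^15 trillion 2020 USD galactic economy, 10,000 stars spanned, and a present-day world economy of roughly 10^2 trillion 2020 USD.

```python
galactic_gwp_trillion = 1e15   # Metaculus tail scenario, in trillions of 2020 USD
stars_spanned = 1e4            # assumed size of the colonized region
todays_gwp_trillion = 1e2      # ~$100 trillion world economy today (rough)

# Lower bound for our solar system's share, and its ratio to today's economy
per_system = galactic_gwp_trillion / stars_spanned
print(f"{per_system:.0e}")                        # 1e+11 trillion 2020 USD
print(f"{per_system / todays_gwp_trillion:.0e}")  # 1e+09 times today's economy
```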

 

In Holden Karnofsky's just-published Cold Takes blog post This Can't Go On, he cites how at 2% annual economic growth the economy would grow by a factor of 3*10^70 in just 8,200 years. Since there are likely fewer than that many atoms in our galaxy, this would mean that "we'd need to be sustaining multiple economies as big as today's entire world economy per atom." Holden says:

Is it imaginable that we could develop the technology to support multiple equivalents of today's entire civilization, per atom available? Sure - but this would require a radical degree of transformation of our lives and societies, far beyond how much change we've seen over the course of human history to date. And I wouldn't exactly bet that this is how things are going to go over the next several thousand years.

It seems much more likely that we will "run out" of new scientific insights, technological innovations, and resources, and the regime of "getting richer by a few percent a year" will come to an end. After all, this regime is only a couple hundred years old.

(Edited 8/3 to include the written post quote now that it's published, rather than my transcription of the audio)
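Karnofsky's figure checks out: 2% annual growth compounded over 8,200 years gives a factor of roughly 3 × 10^70.

```python
# Check the "This Can't Go On" figure: 2% annual growth for 8,200 years
factor = 1.02 ** 8200
print(f"{factor:.1e}")  # roughly 3e+70
```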

Is... this post published yet? I can't find it anywhere.

WilliamKiely
It is published now (here), but wasn't at the time of my above comment. The audio was published before the written post (by about a day). I sent Holden the feedback to let him know.