
Epistemic status: hot take, the core of which was written in ~10 minutes.

Assumptions/Definitions

I'm conditioning on AGI arriving sometime within the next 3–40 years.

"Explosive growth" ≝ the kind of crazy 1yr or 6 month GDP (or GWP) doubling times Shulman talks about here, happening within this century.

Setup

I was just listening to Carl Shulman on the 80k podcast talk about why he thinks explosive growth is very likely. One of the premises in his model is that people will just want it – they'll want to be billionaires, to have all this incredible, effectively free entertainment, and so on.

(Side note: somewhat in tension with his claims that humans are unlikely to have any comparative advantage post-AGI, he claims that parents will prefer AI-robot nannies/tutors over human ones because, among other things, they produce better educational outcomes for children. But presumably there is little pressure toward exceptional educational attainment in a world in which most cognitive labor is outsourced to AGI.)

I actually think, if business continues as usual, explosive growth is fairly likely. But I also think this would probably be a calamity. Here's a quick and dirty argument for why:

Argument

I expect that if we successfully aligned AI on CEV, or pulled off a long reflection before anything crazy happened (e.g. if we paused right now and did a long reflection) and then aligned AGI to the outputs of that reflection, we would not see explosive growth this century. Some claims as to why:

  1. Humans don't like shocks. Explosive growth would definitely be a shock. We tend to like very gradual changes, or brief flirts with big change. We do like variety of a certain scale and tempo – e.g. seasonality. The kind of explosive growth we're considering here is anything but a gentle change of season, though.
  2. In our heart of hearts, we value genuine trust (built on the psychology of reciprocity, not just unthinking accustomedness to a process that "just works"), dignity, community recognition, fellow-feeling, belonging, feeling useful, and overcoming genuine adversity. In other words, we would choose to create an environment in which we sacrifice some convenience, accept some friction, have to earn things, help each other, and genuinely co-depend to some extent. Basically, I think we'd find that we need to need each other to flourish.
  3. Speaking of "genuine" things, I think many people value authenticity – we do discount simulacra (at least, again, in our heart of hearts). Even if what counts as simulacra is to some extent culturally defined, there will be a general privileging of "natural" and "traditional" things – things much more like they were found/done in the ancestral environment – since those things have an ancient echo of home in them. So I expect few would choose to live in a VR world full-time, and we would erect some barriers to doing so. (Yes, even if our minds were wiped on entering, since we would interpret this as delusion/abandonment of proper ties).
  4. We value wilderness, and more generally, otherness.
  5. A large enough majority will understand themselves as being a particular functional kind, homo sapiens; and our flourishing, as being attached to that functional kind. In other words, we wouldn't go transhumanist anytime soon (though we might keep that door open for future generations).

Some implications

If you think this is true, then you should expect a scenario in which we see explosive growth to be one in which we failed to align AI to these ideals. I suspect it would mean we merely aligned AI to profits, power, consumer surplus, instant gratification, short-term individualistic coddling, and national-security-under-current-geopolitical-conditions – all at the notable expense of (among other things) the goods that can currently only be had by having consumers/citizens collectively deliberate and coordinate in ways markets fail to allow, or even actively suppress (see e.g. how "market instincts" or market priming might increase selfishness and mute pro-social cooperative instincts).

Even if I'm wrong about the positive outputs of our idealized alignment targets (and I'm indeed at low-to-medium confidence about those), I'm pretty confident that those outputs would not place high intrinsic value on the abstractions of profits, power, consumer surplus, instant gratification, short-term individualistic coddling, or national-security-under-current-geopolitical-conditions. So I expect that, from the perspective of these idealized alignment targets, explosive growth within our century would look like a pretty serious calamity, especially if it results in fairly bad value lock-in. Sure, not as bad as extinction, but still very, very bad (and arguably, this is the most likely default outcome).

Afterthought on motivation

I guess part of what I wanted to convey here is: since it's increasingly unlikely we'll get the chance to align to these idealized targets, maybe we should start at least trying to align ourselves to whatever we think the outputs of those targets are, or at the very least, some more basic democratic targets. And I think Rationalists/EAs tend to underestimate just how odd their values are.

I think I also just want more Rationalists/EAs to be thinking about this "WALL-E" failure mode (assuming you see it as a failure mode). Of course that should tell you something about my values.

Comments (9)



Humans don't like shocks. Explosive growth would definitely be a shock. We tend to like very gradual changes, or brief flirts with big change. 

Speaking generally, it is true that humans are frequently hesitant to change the status quo, and economic shocks can be quite scary to people. This provides one reason to think that people will try to stop explosive growth, and slow down the rate of change.

On the other hand, it's important to recognize the individual incentives involved here. On an individual, personal level, explosive growth is equivalent to a dramatic rise in real income over a short period of time. Suppose you were given the choice of increasing your current income by several-fold over the next few years. For example, if your real income is currently $100,000/year, then you would see it increase to $300,000/year in two years. Would you push back against this change? Would this rise in your personal income be too fast for your tastes? Would you try to slow it down?
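(A quick sanity check on the arithmetic in that example – my own calculation, not from the comment: tripling over two years implies a compound growth rate of roughly 73% per year, far above the low single digits real incomes have historically grown at.)

```python
# Implied compound annual growth rate when income triples
# ($100k -> $300k) over two years: 3**(1/2) - 1.
implied = 3 ** (1 / 2) - 1
print(f"{implied:.1%} per year")  # -> 73.2% per year
```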

Even if explosive growth is dramatic and scary on a collective and abstract level, it is not clearly bad on an individual level. Indeed, it seems quite clear to me that most people would be perfectly happy to see their incomes rise dramatically, even at a rate that far exceeded historical norms, unless they recognized a substantial and grave risk that would accompany this rise in their personal income. 

If we assume that people collectively follow what is in each of their individual interests, then we should conclude that incentives are pretty strongly in favor of explosive growth (at least when done with low risk), despite the fact that this change would be dramatic and large.

I agree that's a reason to believe people would be in favor of such a radical change (and Shulman makes the same point). I don't think it's nearly as strong a reason as you and Shulman seem to think, because of the broader changes that would come with this dramatic increase in income. We're talking about a dramatic restructuring of the economic and social order. We're probably talking about, among other things, the end of work and, with that, probably the end of earning your place in your community. We're talking about frictionless, effectively free substitutes for everything we might have received from the informal economy – the economy of gifts and reciprocity. What does that do to friendship and family? I don't want to know.

It appears to me there are plenty of examples of people sacrificing large potential increases in their income in order to preserve the social order they're accustomed to. (I would imagine conservatives in, e.g., the Rust Belt not moving to coastal cities with clearly better income prospects is a good example, but I admit I haven't studied the issue in depth.)

Basically, I think this focus on income is myopic.

Thanks for this – it also helps me reflect on what I value. There is wisdom and truth in your five arguments above. Whether they're important enough to override the benefits of explosive growth I'm not sure, but they're important to consider regardless.

As a side note, I'm not sure why people would downvote this – where's the bad karma? Disagree for sure, but why downvote? We're free to not like what we don't like, but…

At the time of this comment the post has 25 karma from 12 votes, which suggests not many (if any) downvotes. Maybe it was different earlier. But I agree downvotes would be strange.

I gave it 8, so it was 17 from 11 or something before. You do make a good point though – it's only a few downvotes, so maybe not that bad.

I think I agree with you that many people won't want rapid change.

However, it seems inevitable that some people will (even if just part of the EA/rationalist sphere, though I think the set of people wanting explosive growth would be a fair bit broader). And so if even a small fraction of the population wants to undertake explosive growth, and they are free to do so, then it will happen and they will quickly comprise ~all of the world economy.

This is a huge if: maybe the status quo ante will have powerful enough proponents that they prevent anyone from pursuing explosive growth.

But I think it is also quite plausible a few people will go and colonise space or do some other explosive-growth-conducive thing, and that there will be a bunch of people kind of technologically 'left behind', perhaps by choice.

I think this should be a frontpage post, not a community one

That was the intention – I'm not sure how to remove the community tag...

I think a forum moderator needs to fix it.
