The distribution of wealth in the world 1000 years ago appears to have had a relatively small effect—or more precisely an unpredictable effect, whose expected value was small ex ante—on the world of today. I think there is a good chance that AI will fundamentally change this dynamic, and that the distribution of resources shortly after the arrival of human-level AI may have very long-lasting consequences.

Disclaimer: I want to stress that throughout this post I’m not making any normative claims about what ought to be or what would be nice or what kind of world we should try to have; I’m just trying to understand what is likely to happen.

A naïve model of capital accumulation

Here is a naïve (and empirically untenable) model of capital accumulation.

For the most part, the resources available in the world at time t+1 are produced using the resources available at time t. By default, whoever controls the resources at time t is able to control the new resources which are produced. The notions of “who” and “controls” are a bit dubious, so I’d actually like to cut them out of the picture. Instead, I want to think of people (and organizations, and agents of all sorts) as soups of potentially conflicting values. When I talk about “who” controls what resources, what I really want to think about is what values control what resources. And when I say that some values “control” some resources, all I mean is that those resources are being applied in the service of those values. Values is broad enough to include not only things like “aggregative utilitarianism” but also things like “Barack Obama’s self-interest.” The kinds of things idealistic enough that we usually think of them as “values” may get only a relatively small part of the pie.

Some values mostly care about the future, and so will recommend investing some of the resources they currently control, forgoing any other use of those resources at time t in order to control more resources at time t+1. If all resources were used in this way, the world would be growing but the distribution of resources would be perfectly static: whichever values were most influential at one time would remain most influential (in expectation) across all future times.

Some values won’t invest all of their resources in this way; the share of resources controlled by non-investors will gradually fall, until the great majority of resources are held by extremely patient values. At this point the distribution of resources becomes static, and may be preserved for a long time (perhaps until some participants cease to be patient).

On this model, a windfall of 1% of the world’s resources today may lead to owning 1% of the world’s resources for a very long time. But in such a model, we also never expect to encounter such a windfall, except as the product of investment.
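
To make this concrete, here is a minimal sketch of the naïve model (illustrative numbers only; the growth rate and the division between patient and impatient values are assumptions, not claims about the actual economy):

```python
# Toy version of the naive model: each period, invested resources grow by a
# fixed factor, and each set of values reinvests some fraction of what it
# controls. Values with a savings rate of 1 end up holding essentially all
# resources, after which their relative shares are frozen.

growth = 0.05  # return per period on invested resources (assumed)
holdings = {"patient A": 0.6, "patient B": 0.3, "impatient": 0.1}
savings = {"patient A": 1.0, "patient B": 1.0, "impatient": 0.5}

for period in range(200):
    for name in holdings:
        holdings[name] = holdings[name] * savings[name] * (1 + growth)

total = sum(holdings.values())
print({name: round(value / total, 4) for name, value in holdings.items()})
# -> patient A and patient B keep their original 2:1 ratio of shares, while
#    the impatient share has fallen to essentially zero.
```

Under these assumptions the distribution among the full reinvestors is static, which is exactly the sense in which a 1% windfall would persist as a 1% share.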

Why is this model so wrong?

We don’t seem to see long-term interests dominating the global economy, with savings rates approaching 1 and a risk profile tuned to maximize investors’ share of the global economy. So what’s up?

In fact there are many gaps between the simple model above and reality. To me, most of them seem to flow from a key observation: the most important resources in the world are people, and no matter how much of the world you control at time t you can’t really control the people at time t+1. For example:

  1. If 1% of the people in the current generation share my values, this does not mean that 1% of the people in the next generation will necessarily share my values. Each generation has an influence over the values of their successors, but a highly imperfect and unpredictable influence; human values are also profoundly influenced by human nature and unpredictable consequences of individual lives. (Actually, the situation is much more severe, since the values of individual humans likewise shift unpredictably over their lives.) Over time, society seems to approach an equilibrium nearly independent of any single generation’s decisions.
  2. If I hold 1% of the capital at time t, I only get to capture about 0.3% of the gross world product as rents, since only around 30% of output is paid to capital; most of the world product is paid as wages instead. So unless I can somehow capture a similar share of all wages, my influence on the world will decay (a toy illustration follows this list).
  3. Even setting aside 2, if I were to be making 1% of gross world product in rents, they would probably be aggressively taxed or otherwise confiscated and redistributed more equitably. So owning 1% of the stuff at time t does not entitle me to hold 1% of the stuff at time t+1.
  4. Even setting aside 2 and 3, if I hold 1% of the resources at time t, I have some probability of dying before time t+1. In light of this risk, I need to identify managers who can make decisions to further my values. It’s hard to find managers who precisely share my values, and so with each generation those resources will be controlled by slightly different values.
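
A rough numerical sketch of this decay, combining points 1–4 above (all parameters are made up, and for simplicity each generation’s new output is assumed to dwarf the existing stock):

```python
# Toy illustration of points 1-4: I start out controlling 1% of the world's
# resources, but each generation I capture only capital's share of new output,
# part of my rents are taxed away, and part of my holdings pass to successors
# whose values have drifted. (Assumes new output dwarfs the existing stock.)

capital_share = 0.30  # fraction of output paid to capital owners (assumed)
tax_rate = 0.30       # fraction of my rents redistributed away (assumed)
value_drift = 0.10    # fraction lost to successors with other values (assumed)

my_share = 0.01       # share of world resources serving my values initially

for generation in range(1, 6):
    my_share *= capital_share * (1 - tax_rate) * (1 - value_drift)
    print(f"generation {generation}: share of world resources = {my_share:.2e}")
# The share shrinks roughly fivefold per generation, heading toward the
# background equilibrium rather than staying at 1%.
```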

The fact that each generation wields so little control over its successors seems to be a quirk of our biological situation: human biology is one of the most powerful technologies on Earth, but it is a relic passed down to us by evolution, about which we have only the faintest understanding (and over which we have only the faintest influence). I doubt this will remain the case for long; eventually, the most useful technologies around will be technologies that we have developed for ourselves. In most cases, I expect we will have a much deeper understanding of, and much greater control over, the technologies we develop for ourselves.

Machine intelligence

I believe that the development of machine intelligence may move the world much closer to this naïve model.

Consider a world where the availability of cheap machine intelligence has driven human wages below subsistence, an outcome which seems not only inevitable but desirable if properly managed. In this world, humans rapidly cease to be a meaningful economic resource: they matter as principals who decide how resources are used, but not as workers who supply labor (nor even as managers who supply economically useful decisions).

In such a world, value is concentrated in non-labor resources: machines, land, natural resources, ideas, and so on. Unlike people, these resources can likely be owned and controlled by whoever produced them. Returning to the list of deviations from the naïve model given above, we see that the situation has reversed:

  1. The values of machine intelligences can (probably, eventually) be directly determined by their owners or predecessors. If at time t machine intelligences sharing my values own 1% of the world’s resources, they can use those resources to produce new machine intelligences that also share my values, so at time t+1 machine intelligences sharing my values are likely to still own about 1% of the world’s resources.
  2. A capital holder with 1% of the world’s resources owns about 1% of the world’s machine intelligences, and so also captures about 1% of the world’s labor income (see the sketch after this list).
  3. In a world where most “individuals” are machine intelligences, who can argue as persuasively and appear as sympathetic as humans, there is a good chance that (at least in some states) machine intelligences will be able to secure significant political representation. Indeed, in this scenario completely excluding machine intelligences from political influence would require a surprisingly oppressive regime. If machine intelligences secure equal representation, and if 1% of machine intelligences share my values, then there is no particular reason to expect redistribution or other political maneuvering to reduce the prevalence of my values.
  4. In a world where machine intelligences can perfectly replace a human as a manager, the challenge of finding a successor with similar values may be much reduced: it may be possible simply to design a machine intelligence that exactly shares its predecessor’s values and can serve as a manager. Once technology is sufficiently stable, the same manager (or copies thereof) may persist indefinitely without significant disadvantage.
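
For contrast, here is the same toy bookkeeping under the assumptions of this section: the labor share of output now accrues to the machines’ owner, successors are designed to share the owner’s values exactly, and (per point 3) there is no systematic redistribution away from those values. These are assumptions of the scenario, not predictions:

```python
# Same toy bookkeeping as the earlier sketch, but in the machine-intelligence
# regime: I own the machines that supply the labor, so I capture both the
# capital share and the labor share of the output attributable to my holdings,
# and my successors share my values exactly.

capital_share = 0.30
labor_share = 0.70    # now paid to machine intelligences that I own (assumed)
tax_rate = 0.0        # assumed: no net redistribution away from my values
value_drift = 0.0     # assumed: successors exactly share my values

my_share = 0.01

for generation in range(1, 6):
    my_share *= (capital_share + labor_share) * (1 - tax_rate) * (1 - value_drift)
    print(f"generation {generation}: share of world resources = {my_share:.4f}")
# The per-generation multiplier is exactly 1, so the 1% share persists
# indefinitely, which is the naive model's prediction.
```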

So at least on a very simple analysis, I think there is a good chance that a world with human-level machine intelligence would be described by the naïve model.

Another possible objection is that a capital owner who produces some resources exerts imperfect control over the outputs: apart from the complications introduced by humans, there are also random and hard-to-control events that prevent us from capturing all of the value we create. But on closer inspection this does not seem to be such a problem:

  • If these “random losses” are real losses, controlled by no one, then they can simply be factored into the growth rate. If every year the world grows 2% but 1% of all stuff is randomly destroyed, then the effective growth rate is about 1% (see the arithmetic after this list). This doesn’t really change the conclusions.
  • If these “random losses” are lost to me but recouped by someone else, then the question is “who is recouping them?” Presumably we have in mind something like “a random person is benefiting.” But that just means that the returns to being a random person, on the lookout for serendipitous windfalls at the expense of other capital owners, have been elevated. And in this world, a “random person” is just another kind of capital you can own. A savvy capital owner with x% of the world’s resources will also own x% of the world’s random people. The result is the same as in the previous case: someone who starts with x% of the resources can maintain x% of the resources as the world grows.
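
A one-line check of the arithmetic in the first bullet, writing g for the growth rate and δ for the rate of random losses (the 2% and 1% figures are just the example values above):

```latex
(1+g)(1-\delta) - 1 = g - \delta - g\delta \approx g - \delta,
\qquad (1.02)(0.99) - 1 = 0.0098 \approx 1\%.
```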

Implications

If we believe this argument, then it suggests that the arrival of machine intelligence may lead to a substantial crystallization of influence. By its nature, this would be an event with long-lasting consequences. Incidentally, it would also provide the kind of opportunity for influence I was discussing in my last post.

I find this plausible though very far from certain, and I think it is an issue that deserves more attention. Perhaps most troubling is the possibility that in addition to prompting such crystallization, the transition to machine intelligences may also be an opportunity for influence to shift considerably—perhaps in large part to machines with alien values. In Nick Bostrom’s taxonomy, this suggests that we might be concerned about the world ending in a “whimper” rather than a “bang”: even without a particular catastrophic or disruptive event, we may nevertheless irreversibly and severely limit the potential of our future. 

It is tempting to be cosmopolitan about the prospect of machine intelligences owning a significant share of the future, asserting their fundamental right to autonomy and self-determination. But our cosmopolitan attitude is itself an artifact of our preferences, and I think it is unwise to expect that it (or anything else we value) will be automatically shared by machine intelligences any more than it is automatically shared by bacteria, self-driving cars, or corporations.
