You would like to go to the beach tomorrow if it's sunny, but aren't sure whether it will rain; if it rains, you'd rather go to the movies. So you resolve to put on a swimsuit and a raincoat, and thus attired, attend the beach in the morning and the movies in the afternoon, regardless of the weather. Something is wrong with that decision process,[1] and the same thing is wrong with the decisions made by many supposedly systematic approaches to philanthropy: they do not engage with real and potentially resolvable uncertainty about decision-relevant facts.

Different popular philanthropic programs correspond to very different hypotheses about why people maintain wealth inequality, much like swim trunks and a trip to the movies represent different hypotheses about the weather. Instead of working backwards from the proposals to the hypotheses, I will lay out what I think are the two main hypotheses worth considering, and reason about what someone might want to do if each were true. This is not because I want to tell you what to do, but to clarify that any time you think some particular thing is a good idea to do, you are acting on a hypothesis about what's going on.

The ideas of charity and philanthropy depend on the recognition of inequality; otherwise it would just be called "being helpful." The persistence of wealth inequality, in turn, depends on many people working together to recognize and enforce individual claims on private property.

If the mechanism of private property tends to allocate capital to its most productive uses, then incentives are being aligned to put many people to work for common benefit. But if wealth does not correspond to productive capacity - i.e. the people with the most are not those best able to use it - then, assuming diminishing marginal returns to wealth, coordination towards persistent wealth inequality comes from a self-sustaining misalignment of incentives, i.e. conflict.

The economic ideology taught in introductory microeconomics courses - which is assumed by many formal analyses of how to do good at scale, including much of Effective Altruist discourse - tends to make assumptions consistent with the means of production hypothesis. So if we are considering making decisions on the basis of that analysis, we want to understand which observations would falsify that hypothesis, and which beliefs are incompatible with it.

You walk into a workshop, and see someone holding a hammer. You can infer that this is because there is some hammering to do right now, and the holder is competent to do it. Someone else has a saw, and you make a similar inference. In this context, the unequal distribution of production goods is part of how things get made; wealth inequality is a part of the means of production. If a workshop did not allocate tools in a way that justified those inferences - if perhaps you observed one person with a hoard of wrenches doing nothing while others used their bare hands as best they could - then you might infer the existence of a conflict between the wrenchmaster and the other laborers, and you would expect that workshop to do a worse job if called upon to make something. On the other hand, if someone with a hoard of wrenches were freely lending out the wrenches when appropriate, seemed like an especially good judge of which wrench (if any) is appropriate for which job, and made sure people put the wrenches back instead of putting them down at random in hard-to-find places, then you might not think worse of the workshop for its wrenchmaster.

The hypothesis that wealth inequality is part of the means of production has moral and strategic implications for charity.

From a global utilitarian perspective, having much more than others is not on its own a reason to transfer wealth to them. Instead, you should expect the return you can get on reinvesting your wealth into profit-yielding enterprises to frequently be higher than the return they can get, so you might be able to make a more important gift to the future than to the present. Even when there is a large enough market failure to justify philanthropy, some amount of paternalism is warranted, because your wealth advantage corresponds to a way in which you know better than them. An exemplar for this perspective is Andrew Carnegie, who amassed a vast fortune improving the organization of steel production, and used some of that fortune to provide a public good, specifically the information good of public libraries. Readers who want his perspective in his own words might do well to read The Gospel of Wealth and his autobiography.

While the details of the return on investment calculation from the selfish perspective will be different, the basic tradeoffs are similar. Due to diminishing marginal returns, at some point it becomes so prohibitively expensive to solve your problems by buying commodity goods or even custom services that the most selfish thing to do is contribute to undersupplied public or coordination goods. For example, Elon Musk's interest in acquiring Twitter and relaxing its censorship regime - and creating Starlink - may be the selfish one of wanting to maintain access to lines of communication with sympathetic strangers (which has been important for things like his ability to find a compatible reproductive partner).

If, on the other hand, wealth inequality is mainly due to systemic oppression, i.e. coordination by an extractive class against producers, then the world looks very different. The simplest implication is that the possession of a fortune is no longer evidence that you know better than others. And before we can even generate the idea of charity under this framework, we run into a justification for a radical form of economic skepticism: what are we even doing when we try to buy a good?

Under the means of production hypothesis, the answer was straightforward: when I buy a good, I am sending a price signal which causes some combination of reallocation of resources to produce more of that good, and the reallocation of that good and its inputs away from those with the least productive use for them. On balance I should expect such price signals to enrich those alleviating scarcity by improving the efficiency with which scarce goods are produced. It follows analytically that under the oppression hypothesis, since the enrichment of producers doesn't happen, any price signals I send do not reallocate resources to produce more in-demand goods on net. There must be a loser, so either I am paying for a weapon to extract from others, or I myself am the target for extraction, i.e. I am being scammed. The pure oppression hypothesis implies that wealth has no real purchasing power for goods; at most it has an illusory or dramatic one.

I have enough money to pay a modest premium for high quality ingredients, and I really do seem to feel better after eating them, which is some evidence for the hypothesis that wealth inequality is part of the means of production. But a friend of mine lives nearby in public housing and cooks on a food stamp budget, and my millionaire housemate enjoys my friend's cooking more than mine. The friend in public housing has complained to the two of us that a much wealthier friend and potential donor to her nonprofit likes to take her out to eat at an expensive club with dismally bad food to waste her time, and won't actually financially support her programs, even the ones he's agreed are good ideas. This is not consistent with the story that money buys good things, but is consistent with the oppression hypothesis.

The pure oppression hypothesis is difficult to imagine. If wealth is nothing but a way to threaten others, and has no independent purchasing power, then it has no way to threaten anything outside of the system; it is a closed system of domination and those outside it can safely ignore it. The rule of the Roman Catholic church in Europe is not a perfect example, but offers a suggestive parallel. The church made the most extreme metaphysical threats towards its constituents, mixed with what were in most cases mild physical threats if any. The very large sums of gold paid in indulgences or contributions to crusaders show how strongly motivated people were to get out from under this threat. People who rose in the ranks acquired more power to make or withdraw threats towards others, but were not supposed to correspondingly control more productive capital, and they were discouraged from reproducing.

From a global utilitarian view, on the oppression hypothesis, what should a rich person do? The arguments for paternalism or reinvestment do not apply here; your wealth does not imply that you are a good steward, because the allocation of resources does not conform to the function of meeting people's needs. You have no reason to think that you know better than others how to help them, and the idea of a return on investment is perverse. But needs are getting met somehow, so the coordination to do so must be happening outside the system of oppression.

One thing you might try to do in this situation is to use your position as someone validated by a system of oppression to invalidate it, e.g. by publicly setting your money on fire. (This differs from conspicuous consumption because it eliminates motive ambiguity; intentionally wasteful spending still pretends to be receiving something of value, while literally making a pile of cash and setting it on fire does not, so it sends a credible signal that you think the money is worse than useless.) Another thing you might do is try to deescalate threats towards others, in the hope that this frees up their capacity to solve problems, including the existence of the system of threats you're caught up in. In other words, cash transfers.

You might try applying some selection by concentrating your gifts on people with reputations as good actors within the system. The Bezoses seem to have done something like this, with MacKenzie Scott distributing money widely among nonprofits working on things that seem good, and Jeff Bezos making one-time $100 million grants to Van Jones and José Andrés. On the other hand, you might reasonably worry that the reputational system - or at least, the mechanism by which news gets delivered to you, a wealthy person - is part of the system of oppression. In that case, you might apply Rawlsian skepticism and simply try to help whoever is worst off, e.g. cash transfers to the global poor, programs to help prisoners, etc. But then you need to trust that you can pay for the cash transfers to actually happen, which is not clearly justified (remember, under this hypothesis money facilitates threatening people, not providing goods and services) - the best available option might be to wander around incognito looking for people who seem like they could use help but aren't seeking attention.

We live in a mixed economy, but it can't be a homogeneous mixture. Instead, there are details to investigate: who gets paid to produce, and who gets paid to destroy, under what circumstances?

This post was inspired by the state of public discourse on effective altruism, in which cash transfers to the global poor, paternalistic global health interventions, animal welfare interventions in explicit conflict with incumbent powers, and extremely high-leverage high-trust speculative AI design, are put on a single list as though the same set of assumptions could calculate an ROI for all of them, and the main thing that's left to do is pick from the list, or add items. This seems as crazy to me as planning to put on swim trunks and a raincoat, and go to the beach in the morning and the movies in the afternoon. It represents a huge missed opportunity: to clarify what our hypotheses actually are about the world in which we live, and test these hypotheses in ways that prevent us from wasting huge sums of money and a corresponding number of human lifetimes on programs that do not matter.

A community without the discursive apparatus to clarify such disagreements, and the ability to invest an appropriate level of work into testing them, is operating on assumptions too low-trust to justify any of the predominant EA hypotheses, all of which require the ability to delegate a lot of work to strangers, including much of the work of evaluating the output of the work you are funding.

Addendum: If you don't already find yourself with a large surplus of wealth or power, and are considering how to make yourself helpful to yourself or others, the model laid out above implies that one thing worth paying close attention to, as you make your way in life, is whether the skills and behaviors you are learning and being rewarded for seem likely to help someone solve a practical, material problem. Sometimes the connection may be real but unclear, but the less reason you have to think that your society is a just one, the more open you should be to the hypothesis that you're being rewarded for bad behavior. If so, you might want to look for another game to play. On global-utilitarian grounds, if you thought that capital accumulation is a gift to the future (or that accumulating "career capital" would improve your ability to help others), you might want to update away from that. On selfish grounds, you should become more skeptical about what money can buy you.

  1. ^

    The image of someone relaxing at the beach in a swimsuit and raincoat is equally ridiculous whether it's raining or not, as is the image of someone similarly attired in a movie theater. I'm pretty sure most readers have found a better solution to a similar problem than the one in my hypothetical, but I think they would gain a lot from thinking about exactly what their solution would be, and what principles of decision-making they are using. I recommend doing that before reading the next paragraph, in which I explain what I'd do and why.

    I expect to have more information about tomorrow's weather tomorrow than today. If, in the morning, conditions look good for the beach, I might head there first, bringing my raincoat but not wearing it. If at some point it starts raining, I would abandon my beach plans, put on my emergency raincoat, and head indoors to a movie. If conditions don't look good for the beach, I'd head straight for the movies. In either case, if the movie finishes during the daytime, then I can make another observation of the sky, and use that to decide whether the beach seems promising, or whether I should pursue my best rainy-day option.
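The adaptive plan described above can be sketched as a short procedure. This is a toy illustration of the footnote's logic, not anything from the original; the boolean inputs stand in for actual observations of the sky:

```python
def plan_day(morning_looks_sunny, rains_later):
    """Toy sketch of the adaptive plan: decide each step on the
    freshest observation, instead of committing in advance to one
    outfit and itinerary for all possible weather."""
    schedule = []
    if morning_looks_sunny:
        # Conditions look good: head to the beach, raincoat packed.
        schedule.append("beach, raincoat packed but not worn")
        if rains_later:
            # New information arrives mid-plan: revise, don't persist.
            schedule.append("put on raincoat, switch to a movie")
    else:
        schedule.append("straight to the movies")
        # After the movie, observe the sky again and re-decide.
        schedule.append("re-observe sky, pick beach or best rainy-day option")
    return schedule
```

The contrast with the swimsuit-plus-raincoat plan is that every branch here is conditioned on an observation, so no branch prepares for weather that is already known not to be happening.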

    I'm not going to give an explicitly mathematized decision-theoretic account, as I think the implied principles I'm using here are pretty obvious. On LessWrong, Lukeprog recommends Peterson's An Introduction to Decision Theory. How to Measure Anything by Douglas Hubbard has more detail about how to use Bayesian methods in practical business applications. The Lean Startup by Eric Ries gives examples, also in a business context, of how we can better achieve our goals by structuring our plans as a series of experiments testing the highest-value-of-information hypothesis than by committing in advance to a highly conjunctive plan.
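For readers who do want a mathematized version, the value of checking the weather can be made explicit as an expected-value-of-perfect-information calculation. The payoffs and prior below are assumptions invented for illustration, not anything the footnote commits to:

```python
def evpi(p_sunny, payoffs):
    """Expected value of perfect information for a two-state decision.
    payoffs: dict mapping action -> (payoff if sunny, payoff if rainy).
    """
    # Best expected payoff if you must commit before observing the weather.
    ev_prior = max(p_sunny * s + (1 - p_sunny) * r
                   for s, r in payoffs.values())
    # Best expected payoff if you can observe the weather first,
    # then pick the best action for each state.
    ev_perfect = (p_sunny * max(s for s, _ in payoffs.values())
                  + (1 - p_sunny) * max(r for _, r in payoffs.values()))
    return ev_perfect - ev_prior

# Assumed payoffs: beach is great when sunny, worthless in rain;
# movies are a safe middle option either way.
payoffs = {"beach": (10, 0), "movies": (5, 5)}
```

With a 60% prior on sun, committing blind to the beach yields an expected 6, while deciding after observing the weather yields 8, so a weather observation is worth up to 2 units of payoff here. That gap is what the swimsuit-plus-raincoat plan throws away.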

Comments (6)



My thoughts:

If the mechanism of private property tends to allocate capital to its most productive uses, then incentives are being aligned to put many people to work for common benefit

Much of your later argument depends on this statement, but I don't think the conclusions follow from the premises. I make a long argument here that an optimal market as described by neoclassical economics does not utilize available resources in a utility-maximizing way, and that it is nearly always possible to improve the "common benefit" by allocating resources AWAY from their most productive uses. This is because from the market's perspective, the most productive use of a resource is the use that will command the highest price on the market. If a starving man has 1 dollar, but a rich man has 100 dollars to buy a sandwich to bury in the ground for his amusement, the more productive use of two slices of bread, a slab of teriyaki tofu, kale, tomato, and a spoonful of vegan mayo is to make a sandwich for the rich man.

This would in fact be ESPECIALLY true if all inequality resulted from differences in the productive capacities of different individuals' property. If there are people who do not own anything productive (labor-power or otherwise), then those people would never benefit a cent from the most productive allocations of capital.

You say:

From a global utilitarian perspective, having much more than others is not on its own a reason to transfer wealth to them. Instead, you should expect the return you can get on reinvesting your wealth into profit-yielding enterprises to frequently be higher than the return they can get, so you might be able to make a more important gift to the future than to the present.

I disagree. A hundred dollars to the poor is worth more than a thousand dollars to the rich. Higher returns in terms of money do not entail higher returns in terms of utility.
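The commenter's claim can be made concrete under an assumed logarithmic utility of wealth - a standard stand-in for diminishing marginal returns, chosen here purely for illustration:

```python
import math

def utility_gain(wealth, transfer):
    """Gain in log-utility from receiving `transfer` on top of `wealth`.
    Log utility is an assumption, not something either party argued for."""
    return math.log(wealth + transfer) - math.log(wealth)

# $100 to someone with $1,000 vs. $1,000 to someone with $1,000,000.
poor_gain = utility_gain(1_000, 100)        # log(1.1)   ~ 0.0953
rich_gain = utility_gain(1_000_000, 1_000)  # log(1.001) ~ 0.0010
```

Under this model the ten-times-smaller transfer to the poorer recipient produces roughly a hundred times the utility gain, which is the sense in which monetary returns and utility returns can come apart.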

Even when there is a large enough market failure to justify philanthropy, some amount of paternalism is warranted, because your wealth advantage corresponds to a way in which you know better than them.

Again, market failures are not necessary for the market's distribution of resources to be suboptimal. And the latter statement doesn't seem to logically follow from anything. Just because your labor/capital is more productive, why should that mean you understand other people's preferences better than they do? 

Then, you say:

Under the means of production hypothesis, the answer was straightforward: when I buy a good, I am sending a price signal which causes some combination of reallocation of resources to produce more of that good, and the reallocation of that good and its inputs away from those with the least productive use for them. On balance I should expect such price signals to enrich those alleviating scarcity by improving the efficiency with which scarce goods are produced

Creating price signals by buying goods does enrich people, but only because it enriches yourself. If you consume more, you're redistributing productive resources away from other "less productive" uses, and towards your own uses. This is net-positive according to the economistic utility function, but there's no reason to believe it's net-positive from a utilitarian perspective, and it's pretty easy to think of situations where the reverse is true.

For example, let's say that you're a billionaire and you want to spend your entire fortune on a pet project of yours: paying one million people a hundred thousand dollars each to build the world's largest pyramid in your honor, without being allowed to use any technology or sunblock. Not even one other soul in the world cares a single bit about this pyramid, but they want that hundred thousand dollars, so they agree to build it for you. This may intuitively strike you as magnanimous, maybe even altruistic. Everybody involved benefits, right? Thanks to your generous consumption, everyone is a whole lot richer!

But this is an example of the broken window fallacy. The analysis ignores opportunity costs. If one million people are now spending their time carrying blocks of sandstone in the desert, that's one million fewer people free to practice medicine, or make smoothies, or grow mushrooms, or wash dishes, or make cartoons, or umbrellas, or fleshlights. The amount of labor available to produce all those things has gone down, and so their price will go up, and real consumption will go down. Resources have been redirected away from their "less productive" use (medicine, food, sex toys) and towards their "more productive" use (the vanity of one obscenely rich asshole). 100 billion dollars worth of consumer goods have been lost to produce a 100 billion dollar pyramid. They are both equal in monetary price, but the former has vastly more utility than the latter.
If the billionaire had burned his money instead of building pyramids, everyone except him would be better off. This is something that I think is often missed in criticisms of "zero-sum" thinking. It's true that the economy is not zero-sum, but this does not imply that there are no tradeoffs.

It follows analytically that under the oppression hypothesis, since the enrichment of producers doesn't happen, any price signals I send do not reallocate resources to produce more in-demand goods on net

Let's examine a historical society whose inequality everyone agrees to be the result of an oppressive, extractive class: the antebellum south. In that society, the producers of goods were the slaves, but they didn't own their own labor, so the product of their labor went to their owners, minus the bit that was necessary for their subsistence. I don't think it was the case that price signals in the antebellum south allocated resources any differently than they did in any other society, and I don't fully understand why they would. If a slaveowner can make more money by redirecting the activity of his slaves away from cotton production and towards tobacco production, then he will do so, and spending a lot of money on tobacco will still send a price signal that resources should be allocated towards tobacco production. This is because although the slaveowner is not a "producer", he still has an incentive to utilize his property (in this case human beings) in the most profitable way. This is no different from owners of land, or companies, who have the same incentives to maximize the productivity of their property, despite merely owning productive goods, and not personally producing goods with their own labor. I don't think the price mechanism works differently between the two hypotheses.


I don't think these hypotheses of inequality contradict each other. Going back to the slave society example, inequality between the slaveowners and the slaves could be explained entirely due to differences in "productive capacity". The slaveowners owned lots of means of production (including the labor-power of the slaves themselves) and the slaves owned no means of production (not even their own labor-power). This is consistent with the hypothesis that wealth corresponds to productive capacity, because the slaveowners owned far more productive capacity, and it is also consistent with the hypothesis that wealth is the result of one class exerting force on another to maintain a claim on their property, because the slaveowners' rights over the products of their slaves' labor was enforced by violence. 

It's odd to see a post that uses the phrase "means of production" this often without mentioning Marx, who explicitly believed and repeatedly said that oppression was caused by, and equivalent to, distinctions in ownership of the means of production, rather than the two concepts being competing hypotheses. 

Wealthy people who primarily engage in wasteful consumption become less wealthy over time. Those who maintain or grow wealth must be doing something else with it. You brought up slavery; the antebellum South required massive coordinated violence to directly maintain internal power imbalances, and state-backed territorial expansion to support its economic growth. This illustrates why we need detailed models of how extractive systems actually operate, rather than reducing everything to market mechanisms.

Altruism has less to do with the phenomenon of economic inequality than with an evolution of moral sensitivity through the use of new symbolic cognitive instruments over the course of the civilizational process. It is not inequality that becomes morally intolerable, but empathic sensitization that makes the suffering of others emotionally intolerable. Economic inequality has often been called "systemic violence," and that is the dimension in which altruism has to be addressed: as an element of the development of the control of aggression, which is in reality the authentic human problem par excellence.

The framing of inequality as 'systemic violence' and altruism as 'control of aggression' assumes rather than demonstrates that wealth differences primarily reflect exploitation. This fails to address the central question I posed: whether and when inequality reflects productive allocation versus extractive behavior.

While moral sensitization may affect how we feel about others' suffering, this doesn't help us understand the causes of that suffering. Defining inequality as violence or aggression is effectively a stance in favor of violence, because it makes it impossible to discuss alternatives.

Defining inequality as violence or aggression is effectively a stance in favor of violence, because it makes it impossible to discuss alternatives.

The answer to violence does not have to be violent. On the contrary, an understanding of the phenomenon of violence (including the phenomenon of economic inequality as systemically exploitative) must lead us to establish non-violent cultural alternatives. This implies that those who are singled out as exploiters are not exploiters from the point of view of distributive justice, but defenders of a different cultural model that assumes a certain degree of aggression as inevitable. It is not about class struggle or about legislating economic equality, but about promoting altruistic cultural development in the sense of developing empathy, benevolence and mutual care in the economic sphere as well.
On the other hand, those who defend equality in the sense of a rational allocation of resources according to the needs of individuals will have to demonstrate that their cultural model is also capable of generating economic efficiency - something that the supporters of class struggle have demonstrably failed to do.

Executive summary: Wealth inequality can be explained by either productive allocation of capital or systemic oppression - these competing hypotheses lead to very different implications for philanthropy and charitable giving.

Key points:

  1. The "means of production" hypothesis suggests wealth corresponds to productive capacity, implying wealthy donors should focus on high-ROI investments and some paternalistic intervention.
  2. The "oppression" hypothesis suggests wealth represents extractive power, implying donors should focus on direct cash transfers or deliberately invalidating the system.
  3. Reality likely contains both elements, requiring careful investigation of which activities and payments actually produce value versus destroy it.
  4. Key crux: Whether having wealth is evidence of being a good steward of resources (production hypothesis) or not (oppression hypothesis).
  5. Practical recommendation: When building career capital, examine whether skills being rewarded genuinely solve problems or potentially represent harmful rent-seeking behavior.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
