
Abstract

Why, if at all, should we object to economic inequality? Some central arguments – the argument from decreasing marginal utility, for example – invoke instrumental reasons and object to inequality because of its effects. Such instrumental arguments, however, often concern only the static effects of inequality and neglect its intertemporal consequences. In this article, we address this striking gap and investigate income inequality’s intertemporal consequences, including its potential effects on humanity’s (very) long-term future. Following recent arguments about future generations and so-called longtermism, those effects might arguably matter more than inequality’s short-term consequences. We assess whether we have instrumental reason to reduce economic inequality based on its intertemporal effects in the short, medium, and very long term. We find a good short- and medium-term instrumental case for lower economic inequality. We then argue, somewhat speculatively, that we have instrumental reasons for inequality reduction from a longtermist perspective too, because greater inequality could increase existential risk. We thus have instrumental reasons for reducing inequality, regardless of which time horizon we take. We then argue that, from most consequentialist perspectives, this pro tanto reason also gives us all-things-considered reason. And even across most non-consequentialist views in philosophy, this argument gives us either an all-things-considered reason or at least a weighty pro tanto reason against inequality.

Introduction

After a steady decline until the 1970s, income inequality has been on the rise in nearly all wealthy countries in recent decades. What, if anything, is objectionable about such inequality? Political philosophers supply a wealth of non-instrumental arguments here, focusing on questions such as fairness, justice, equality of opportunity, and relational inequality.[1] We, by contrast, focus on instrumental concerns, zooming in on the external benefits economic equality might produce. For example, one classic instrumental argument is utilitarian: aggregate wellbeing will be higher with less economic inequality, because of the diminishing marginal utility of income.
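To see the structure of this utilitarian argument, here is a minimal illustration (ours, not the paper’s), assuming a standard concave utility function such as $u(y) = \ln y$. If a richer person with income $y_R$ transfers a small amount $\delta$ to a poorer person with income $y_P < y_R$, total income is unchanged but aggregate wellbeing changes by approximately

$\Delta W \approx \delta\,[u'(y_P) - u'(y_R)] = \delta\left(\tfrac{1}{y_P} - \tfrac{1}{y_R}\right) > 0,$

so redistribution toward greater equality raises aggregate wellbeing. The choice of $\ln y$ is only illustrative; any strictly concave utility function yields the same qualitative conclusion.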

However, such instrumental arguments typically focus on the static properties of income inequality, that is, on the effects inequality would produce within a somewhat limited time-slice. Yet income (in)equality likely has intertemporal consequences too. And it is far from clear whether such consequences will be good or bad. For instance, Tyler Cowen has recently argued that high economic growth should take priority: with a long enough timeframe, the exponential nature of growth ensures that future benefits will outweigh all other considerations (Cowen 2018). Moreover, if equality lowers longer-term growth rates – as some have argued – the dynamic instrumental case would speak against reducing inequality. In response, one might deny that there is a growth–equality trade-off. Or one could argue that equality comes with its own long-term benefits, such as better political institutions.

Such arguments would typically focus on effects within the next few hundred to, perhaps, a few thousand years. But we could go further and include inequality’s effects on all future well-being. Doing so moves us into the realm of longtermism, an influential idea in the Effective Altruism community. The central idea is that, since the future holds the vast majority of potential value, the expected moral value of many actions is almost entirely determined by those actions’ effects on the long-term future. Nick Beckstead writes: ‘what matters most (in expectation), is that we do what is best (in expectation) for the general trajectory along which our descendants develop over the coming millions, billions, and trillions of years’ (Beckstead 2013, 1). Suppose reducing income inequality has non-negligible expected consequences for our far-future descendants. Longtermism would then imply that whether we should reduce economic inequality is primarily determined by such long-term effects.
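A stylised way to see why long-term effects can dominate (our illustration, not a calculation from the paper): suppose the far future could contain $N$ people with average wellbeing $\bar{w}$, and that an action changes the probability that this future is realised by some small $\Delta p$. The action’s expected long-term contribution is then roughly $\Delta p \cdot N \cdot \bar{w}$. For astronomically large $N$, even a tiny $\Delta p$ can swamp any plausible near-term effect, which is why longtermists hold that such actions should be evaluated chiefly by their long-term consequences.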

So, we can assess the instrumental character of income inequality in three different ways: we can focus on its effects in the short term, the medium term (hundreds to thousands of years), or – adopting longtermism – on all its future effects. It is not obvious that these three approaches converge. The lack of work on these questions constitutes a surprisingly large and important gap in the literature. This article makes a start at filling it. To assess the instrumental benefits of (in)equality, we use a time-discounted instrumentalist framework. We do not look for an optimal level of inequality. Instead, we consider how, at the margin, reducing or increasing economic inequality in today’s richer countries (roughly, OECD countries) would affect expected aggregate human wellbeing, other things equal. We vary our discount rate to check inequality’s effects along three timeframes: short, medium, and long term. We find a good short- and medium-term instrumental case for lower economic inequality. We then argue – somewhat speculatively – that we have instrumental reasons for inequality reduction from a longtermist perspective too, because greater inequality could increase existential risk. We thus have instrumental reasons for reducing inequality, regardless of which time horizon we take.
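One simple way to make such a time-discounted framework concrete (a sketch on our part; the paper’s own formalisation may differ) is as a discounted sum of expected aggregate wellbeing:

$W(\rho) = \sum_{t=0}^{\infty} \frac{\mathbb{E}[w_t]}{(1+\rho)^{t}},$

where $w_t$ is aggregate wellbeing in period $t$ and $\rho$ is the discount rate. A high $\rho$ makes $W$ sensitive mainly to near-term effects; a moderate $\rho$ extends the relevant horizon to centuries or millennia; and $\rho \to 0$ approximates the longtermist perspective, on which effects on the far future – including changes to existential risk – dominate the evaluation.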

We then argue that this pro tanto argument has important implications for how philosophers should think about economic inequality. Performing a ‘moral sensitivity analysis’, we argue that for most consequentialist views, the pro tanto argument also provides an all-things-considered reason to reduce inequality. And even across most non-consequentialist views, the argument provides either an all-things-considered reason or at least a weighty pro tanto reason to reduce inequality.

Our results matter in several ways. First, most people believe we have duties towards future generations. Accordingly, when assessing policies that affect inequality – proposals to reduce it, for example (Atkinson 2015) – their impact on future generations should be a relevant dimension. Second, our longtermist argument provides a new input into philosophical debates about equality and egalitarianism. While philosophers often focus on non-instrumental reasons against inequality, they acknowledge that instrumental concerns are important too.[2] If longtermism is sound and the long-term future often decisive, our instrumental argument should matter greatly for debates around egalitarianism. Moreover, because our argument holds across the short, medium, and long term, it is also quite robust. Finally, while there has been increasing philosophical interest in longtermism and existential risk, no work yet connects these to economic inequality. Our article begins to fill that gap.

We proceed as follows. In section 2, we describe our framework. In sections 3 and 4, we respectively analyse the short- and medium-term effects of income inequality. In sections 5 and 6, we analyse the instrumentalist longtermist case for more equality: in section 5, we first introduce longtermism and its relation to existential risk; in section 6, we present arguments to the effect that higher income inequality can indirectly increase existential risk. In section 7, we perform our ‘moral sensitivity analysis’, and we conclude in section 8.

Read the rest of the paper


    1. We briefly come back to non-instrumental egalitarian views in section 7. ↩︎
