
This paper was published as a GPI working paper in November 2024 and is forthcoming in Ergo.

Abstract

On person-affecting views in population ethics, the moral import of a person’s welfare depends on that person’s temporal or modal status. These views typically imply that – all else equal – we’re never required to create extra people, or to act in ways that increase the probability of extra people coming into existence.

In this paper, I use Parfit-style fission cases to construct a dilemma for person-affecting views: either they forfeit their seeming-advantages and face fission analogues of the problems faced by their rival impersonal views, or else they turn out to be not so person-affecting after all. In light of this dilemma, the attractions of person-affecting views largely evaporate. What remains are the problems unique to them.

Introduction

Suppose that you find yourself with a choice. You can either:

(a) Donate $4500 to the Against Malaria Foundation (AMF).

Or:

(b) Donate $4500 to the Nuclear Threat Initiative (NTI).

You’re confident that donating to AMF would save a child from dying of malaria. You’re also reasonably sure that this child would go on to live an additional 70 years of good life. On the other hand, you estimate that donating to NTI would increase the probability that humanity survives the coming century by about one in ten quadrillion (10⁻¹⁶). And you expect that if humanity survives the coming century, the future will contain one hundred quadrillion (10¹⁷) good lives, each lasting around 70 years. Where should you send your money?

Here’s a quick argument for NTI. By donating to AMF, you’d cause about 70 additional years of good life to be lived, in expectation. By donating to NTI, you’d cause about 700 additional years of good life to be lived, in expectation. It’s better to add 700 years of good life than it is to add 70 years of good life. Therefore, you should send your money to NTI.
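Spelling out the expectations behind those two numbers (just a restatement of the figures above, treating the AMF donation as saving one life with near-certainty):

$$\underbrace{1 \times 70}_{\text{AMF}} = 70 \text{ years of good life}, \qquad \underbrace{10^{-16} \times 10^{17} \times 70}_{\text{NTI}} = 700 \text{ years of good life}.$$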

There are many ways to resist this quick argument, but perhaps the most natural way is to claim that the years of good life that might result from your NTI donation just don’t matter in the same way as the years of good life that would result from your AMF donation. By donating to AMF, you gift 70 more years to a person who actually exists, who will exist regardless of your decision, and who exists right now. The same can’t be said of your donation to NTI. The vast majority of those additional years would accrue far in the future: to people who do not and need never exist.

This is a person-affecting response to the quick argument. On person-affecting views in population ethics, the moral import of a person’s welfare depends on that person’s temporal or modal status. These views typically imply that – all else equal – we’re never required to create extra people, or to act in ways that increase the probability of extra people coming into existence.

The allure of person-affecting views is partly in their foundations. These views often have their start in two claims that many find intuitive: (1) the Person-Affecting Restriction: an outcome can’t be better than another unless it’s better for some person, and (2) Existence Anticomparativism: existing can’t be better for a person than not existing.

However, another big draw of person-affecting views is their upshots. These views avoid some well-known problems faced by their rival impersonal views. Consider expected total utilitarianism: one prominent impersonal view. It implies that there are cases in which we’re required to create new happy people rather than help existing people, cases in which we’re required to make great sacrifices to create new people with lives barely worth living, and cases in which we’re required to make great sacrifices to slightly reduce the chance of near-term human extinction. Person-affecting views mostly avoid these problems, and that might seem like a significant point in their favour.

In this paper, I argue that these advantages are largely illusory. Using Parfit-style fission cases, I construct a dilemma for person-affecting views: either these views violate the spirit of the Person-Affecting Restriction, or else they imply fission analogues of the problems that blight impersonal views. These fission analogues are about as troubling as the original problems, and so they undermine much of the motivation for preferring person-affecting views to impersonal views. Considering the objections unique to person-affecting views, we should prefer impersonal views on balance.

Rejecting person-affecting views doesn’t immediately commit us to NTI over AMF. There are many ways to resist the quick argument. But – as I hope to show in this paper – the most natural line of resistance isn’t as attractive as it might first seem.[1]

Read the rest of the paper

  1. ^

    In a companion paper (Thornley forthcoming), I argue that fission also presents a challenge to critical-level and critical-range views in population ethics. In that paper’s introduction, I give a brief argument against such views, intended to save the time of readers of a certain metaphysical bent. Here’s the analogous argument against person-affecting views: 

    1. On person-affecting views, our moral obligations can depend on the affected persons’ temporal or modal status. 

    2. A person’s temporal or modal status can depend on our answers to questions of personal identity. (Whether a person presently, actually, or necessarily exists in some scenario – or whether they’re harmed by some action – can depend on whether that person is identical to some person existing at other times or in other possible worlds.) 

    C1. So, on person-affecting views, our moral obligations can depend on our answers to questions of personal identity. 

    3. Questions of personal identity are empty: their answers can’t be discovered but at most stipulated. 

    4. Our moral obligations can’t depend on an answer to an empty question. 

    C2. Therefore, person-affecting views are false. 

    I have some sympathy for this argument, but my case against person-affecting views doesn’t depend on it.

Comments (2)



A different version of (5), in response to Benign A-Fission, could be a rule that treats Lefty as non-extra and Righty as extra in Split B (maybe for basically the reasons you give for Split B over No Split), and one of Lefty or Righty as non-extra in Split C. Then you'd choose Split B among the three options.

One incomplete rule that could deliver this result is the following:

If all the splits have non-negative welfare, treat one with the highest welfare as non-extra, and treat the others as extra.

So, No Split gives Anna 80 welfare; Split B gives 10 + 90 = 100 welfare; and Split C gives 10 + 60 = 70 welfare. Split B is best.
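To make the rule concrete, here's a minimal sketch in code. The encoding of each option as a pre-fission welfare level plus a list of split welfares is just my illustration (it isn't fixed by the rule itself), and the ignored split's welfare is a placeholder, since the rule never consults it:

```python
# Sketch of the incomplete rule above: among non-negative-welfare splits,
# treat one highest-welfare split as non-extra and treat the rest as extra (ignored).

def option_value(pre_fission_welfare, split_welfares):
    if not split_welfares:  # no fission: just the one continuing person
        return pre_fission_welfare
    if any(w < 0 for w in split_welfares):
        raise ValueError("the rule as stated is silent on negative-welfare splits")
    return pre_fission_welfare + max(split_welfares)  # extras contribute nothing

# Hypothetical encoding of the three options (0 is a placeholder for the
# ignored split's welfare; the rule never looks at it).
options = {
    "No Split": option_value(80, []),
    "Split B": option_value(10, [90, 0]),
    "Split C": option_value(10, [60, 0]),
}
print(options)                        # {'No Split': 80, 'Split B': 100, 'Split C': 70}
print(max(options, key=options.get))  # Split B
```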

 

This doesn't say what to do about splits with negative welfare. Two options:

  1. Treat them all as non-extra.
  2. Treat a worst-off one as non-extra, and treat the rest as extra, ignoring them.

 

We might also consider giving decreasing marginal weight to additional splits beyond the highest positive (and worst negative) welfare one, instead of totally ignoring them. Maybe the highest welfare one gets full weight, the second highest gets a smaller weight, the third highest a smaller weight still, and so on down to the n-th highest, with the weights shrinking fast enough that the weighted sum of welfare in the splits is bounded by some fixed multiple of the welfare of the best off split. They still count for something, but we could avoid Repugnant Fission.
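For instance (this is just one way of filling the scheme in, not the only one): give the k-th best split weight $1/2^{k-1}$. Then, for splits with welfare $v_1 \ge v_2 \ge \dots \ge v_n \ge 0$,

$$\sum_{k=1}^{n} \frac{v_k}{2^{k-1}} \;\le\; v_1 \sum_{k=1}^{n} \frac{1}{2^{k-1}} \;\le\; 2\,v_1,$$

so however many splits there are, their weighted total never exceeds twice the welfare of the best off split.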

As someone sympathetic to person-affecting views, I don't find Repugnant Fission nearly as bad as Repugnant Transition under interpretation (6). Repugnant Fission is not worse for anyone.

Repugnant Fission is basically the same as comparing a modestly long life wonderful at each moment to an extraordinarily long life barely worth living at each moment, for the same person. The extra moments of life play the same role as splits. If the longer life is long enough, under intrapersonal addition of welfare, it would be better. The problem, if any, is with intrapersonal aggregation, not person-affecting views.

And to be clear, the Repugnant Conclusion is not the main reason I'm sympathetic to person-affecting views. I think even just adding one extra person at the cost of the welfare of those who would exist anyway is bad. It doesn't take huge numbers.
