I think the appeal of IIA (the Independence of Irrelevant Alternatives; roughly, the principle that whether an option is permissible shouldn’t depend on which other options are available) loses some of its grip when one realizes that a lot of our ordinary moral intuitions violate it. Pete Graham has a nice case showing this. Here’s a slightly simplified version:
Suppose you see two people drowning in a crocodile-infested lake. You have two options:
Option 1: Do nothing.
Option 2: Dive in and save the first person’s life, at the cost of one of your legs.
In this case, most have the intuition that both options are permissible — while it’s certainly praiseworthy to sacrifice your leg to save someone’s life, it’s not obligatory to do so. Now suppose we add a third option to the mix:
Option 3: Dive in and save both people’s lives, at the cost of one of your legs.
Once we add option 3, most have the intuition that only options 1 and 3 are permissible, and that option 2 is now impermissible, contra IIA.
Sadly, I don’t have a firm stance on what the right view is. Sometimes I’m attracted to the kind of view I defend in this paper, sometimes (like when corresponding with Melinda Roberts) I find myself pulled toward a more traditional person-affecting view, and sometimes I find myself inclined toward some form of totalism, or some fancy variant thereof.
Regarding extinction cases, I’m inclined to think that it’s easy to pull in a lot of potentially confounding intuitions. For example, in the blowing up the planet example Arden presents, in addition to well-being considerations, we have intuitions about violating people’s rights by killing them without their consent, intuitions about the continuing existence of various species (which would all be wiped out), intuitions about the value of various artwork (which would be destroyed if we blew up the planet), and so on. And if one thinks that many of these intuitions are mistaken (as many Utilitarians will), or that these intuitions bring in issues orthogonal to the particular issues that arise in population ethics (as many others will), then one won’t want to rest one’s evaluation of a theory on cases where all of these intuitive considerations are in play.
Here’s a variant of Arden’s case which allows us to bracket those considerations. Suppose our choice is between:
Option 1: Create a new planet on which 7 billion humans are created and placed in experience machines in which they live very miserable lives (-10).
Option 2: Create a new planet on which 11.007 trillion humans are created and placed in experience machines: 1.001 trillion live miserable lives (-1), 10 trillion live great lives (+50), and 0.006 trillion live good lives (+10).
This allows us to largely bracket many of the above intuitions: humanity and the other species will still survive on our planet regardless of which option we choose, no priceless art is being destroyed, no one is being killed against their will, etc.
In this case, the position that option 1 is obligatory doesn’t strike me as that bad. (My folk intuition here is probably that option 2 is obligatory. But my intuitions here aren’t that strong, and I could easily be swayed if other commitments gave me reason to say something else in this case.)
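For reference, here’s the simple totalist tally for the two options (a sketch, on the assumption that the figures above are per-person welfare levels and that the relevant total is just their sum):

$$\begin{aligned}
\text{Option 1:}\quad & 7\times10^{9}\times(-10) = -7\times10^{10},\\
\text{Option 2:}\quad & 1.001\times10^{12}\times(-1) + 10^{13}\times(+50) + 6\times10^{9}\times(+10)\\
& = (-1.001 + 500 + 0.06)\times10^{12} \approx +4.99\times10^{14}.
\end{aligned}$$

So any view that simply sums welfare will favor option 2 by an enormous margin.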
I guess the two alternatives that seem salient to me are (i) something like HMV combined with pairing individuals via cross-world identity, or (ii) something like HMV combined with pairing individuals who currently exist (at the time of the act) via cross-world identity, and not pairing individuals who don’t currently exist. (I take it (ii) is the kind of view you had in mind.)
If we adopt (ii), then we can say that all of W1-W3 are permissible in the above case (since none of the individuals in question currently exist, and so none of them get paired with anyone). But this kind of person-affecting view has some other consequences that might make one squeamish. For example, suppose you have a choice between three options:
Option 1: Don’t have a child.
Option 2: Have a child, and give them a great life.
Option 3: Have a child, and give them a life barely worth living.
(Suppose, somewhat unrealistically, that our choice won’t bear on anyone else’s well-being.)
According to (ii), all three options are permissible. That entails that option 3 is permissible — it’s permissible to have a child and give them a life barely worth living, even though you could have (at no cost to yourself or anyone else) given that very same person a great life. YMMV, but I find that hard to square with person-affecting intuitions!
Thanks for the write-up and interesting commentary, Arden.
I had one question about the worry in the Addendum that Michelle Hutchinson raised, and the thought that “This seems like a reason why the counterpart relation really runs him into trouble compared to other [person-affecting] views. On other such views, bringing into existence happy people seems basically always fine, whereas due to the counterparts in this case it basically never is.”
I take this to be the kind of extinction case Michelle has in mind (where for simplicity I’m bracketing currently existing people and assuming they’ll have the same level of well-being in every outcome). Suppose you have a choice between three options:
W1 (Inegalitarian Future): a(1): +1; a(2): +2; a(3): +3
W2 (Egalitarian Future): b(1): +2; b(2): +2; b(3): +2
W3 (Unpopulated Future): no one ever exists
Since both W1 and W2 will yield harm while W3 won’t, it looks like W3 will come out obligatory.
I can see why one might worry about this. But I wasn’t sure how counterpart relations were playing an interesting role here. Suppose we reject counterpart theory, and adopt HMV and cross-world identity (where a(1)=b(1), a(2)=b(2), and a(3)=b(3)). Then won’t we get precisely the same verdicts (i.e., that W3 is obligatory)?
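Just to make the harm tallies explicit (a sketch, on the assumption that a person’s harm in an outcome is her welfare shortfall relative to what her paired individual gets in the alternative outcome; the paper’s official definition may differ):

$$\begin{aligned}
\text{Harm}(\text{W1}) &= (2-1) + 0 + 0 = 1 \quad \text{(only } a(1) \text{ falls short of her pair } b(1)\text{)},\\
\text{Harm}(\text{W2}) &= 0 + 0 + (3-2) = 1 \quad \text{(only } b(3) \text{ falls short of her pair } a(3)\text{)},\\
\text{Harm}(\text{W3}) &= 0 \quad \text{(no one exists, so no one is harmed)}.
\end{aligned}$$

If the counterpart relation pairs a(i) with b(i), both approaches deliver these same numbers, and W3 comes out obligatory either way.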
Yeah, for sure. There are definitely plausible views (like pure consequentialism) that will reject these moral judgments and hold on to IIA.
But just to get clear on the dialectic, I wasn’t taking the salient question to be whether holding on to IIA is tenable. (Since there are plausible views that entail it, I think we can both agree it is!)
Rather, I was taking the salient question to be whether conflicting with IIA is itself a mark against a theory. And I take Pete’s example to tell against this thought, since it suggests that, on reflection, our ordinary moral judgments violate IIA. So IIA is something we would need to be argued into accepting, not something we should assume is true by default.
Taking a step back: on one way of looking at your initial post against person-affecting views, you can see the argument as boiling down to the fact that person-affecting views violate IIA. (I take this to be the thrust of Michael’s comment, above.) But if violating IIA isn’t a mark against a theory, then it’s not clear that this is a bad thing. (There might be plenty of other bad things about such views, of course, like the fact that they yield implausible verdicts in cases X, Y and Z. But if so, those would be the reasons for rejecting the view, not the fact that it violates IIA.)