# Person-affecting intuitions can often be money pumped

7th Jul 2022


This is a short reference post for an argument I wish were better known. Note that it is primarily about person-affecting intuitions that normal people have, rather than a serious engagement with the population ethics literature, which contains many person-affecting views not subject to the argument in this post.

EDIT: Turns out there was a previous post making the same argument.

A common intuition people have is that our goal is "Making People Happy, not Making Happy People". That is:

1. Making people happy: if some person Alice will definitely exist, then it is good to improve her welfare
2. Not making happy people: it is neutral to go from "Alice won't exist" to "Alice will exist"[1]. Intuitively, if Alice doesn't exist, she can't care that she doesn't live a happy life, and so no harm was done.

This position is vulnerable to a money pump[2]: that is, there is a sequence of trades, each of which the view would accept, that together achieve nothing and lose money with certainty. Consider the following worlds:

• World 1: Alice won't exist in the future.
• World 2: Alice will exist in the future, and will be slightly happy.
• World 3: Alice will exist in the future, and will be very happy.

(The worlds are the same in every other aspect. It's a thought experiment.)

Then this view would be happy to make the following trades:

1. Receive $0.01[3] to move from World 1 to World 2 ("Not making happy people")
2. Pay $1.00 to move from World 2 to World 3 ("Making people happy")
3. Receive $0.01 to move from World 3 to World 1 ("Not making happy people")

The net result is to lose $0.98 to move from World 1 to World 1.
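The three trades can be sketched numerically. This is a minimal illustration, not from the post: the welfare numbers (1 utilon for "slightly happy", 3 for "very happy") are assumptions chosen only to make each preference strict, and the decision rule is the one described above — welfare changes count only for a person who exists in both worlds, while creation and removal count for nothing.

```python
def accepts_trade(welfare_before, welfare_after, payment):
    """Decision rule sketched in the post: changes in welfare count only
    for a person who exists in both worlds ("making people happy");
    creating or removing a person is neutral ("not making happy people").
    `None` means Alice does not exist in that world."""
    if welfare_before is None or welfare_after is None:
        welfare_change = 0.0  # existence changes are treated as neutral
    else:
        welfare_change = welfare_after - welfare_before
    return welfare_change + payment > 0

money = 0.0
assert accepts_trade(None, 1.0, 0.01)   # trade 1: World 1 -> 2, receive $0.01
money += 0.01
assert accepts_trade(1.0, 3.0, -1.00)   # trade 2: World 2 -> 3, pay $1.00
money -= 1.00
assert accepts_trade(3.0, None, 0.01)   # trade 3: World 3 -> 1, receive $0.01
money += 0.01

# Back in World 1, having lost $0.98 with certainty.
print(round(money, 2))  # -0.98
```

Each trade looks strictly positive to the local rule, yet the cycle returns to the starting world at a guaranteed loss.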

## FAQ

Q. Why should I care if my preferences lead to money pumping?

This is a longstanding debate that I'm not going to get into here. I'd recommend Holden's series on this general topic, starting with Future-proof ethics.

Q. In the real world we'd never have such clean options to choose from. Does this matter at all in the real world?

See previous answer.

Q. What if we instead have <slight variant on a person-affecting view>?

Often these variants are also vulnerable to the same issue. For example, if you have a "moderate view" where making happy people is not worthless but is discounted by a factor of (say) 10, the same example works with slightly different numbers:

Let's say that "Alice is very happy" has an undiscounted worth of 2 utilons, and "Alice is slightly happy" an undiscounted worth of 0.5 utilons. Then you would be happy to (1) move from World 1 to World 2 for free (creating slightly-happy Alice is worth a discounted +0.05), (2) pay 1 utilon to move from World 2 to World 3 (the undiscounted welfare gain is 1.5), and (3) receive 0.5 utilons to move from World 3 to World 1 (removing very-happy Alice costs only a discounted 0.2). The net result is to lose 0.5 utilons to move from World 1 to World 1.
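The discounted variant can be sketched the same way. Following the post, very-happy Alice is worth 2 utilons undiscounted and the discount factor is 10; the 0.5-utilon value for slightly-happy Alice is an illustrative assumption.

```python
DISCOUNT = 10.0  # "moderate view": existence changes are worth 1/10

def trade_value(welfare_before, welfare_after, payment_in_utilons):
    """Utilon value of a trade: full weight for welfare changes to an
    existing person, 1/10 weight for creating or removing a person.
    `None` means Alice does not exist in that world."""
    if welfare_before is None:                      # creating Alice
        return welfare_after / DISCOUNT + payment_in_utilons
    if welfare_after is None:                       # removing Alice
        return -welfare_before / DISCOUNT + payment_in_utilons
    return (welfare_after - welfare_before) + payment_in_utilons

assert trade_value(None, 0.5, 0.0) > 0    # (1) World 1 -> 2 for free: +0.05
assert trade_value(0.5, 2.0, -1.0) > 0    # (2) World 2 -> 3, pay 1:   +0.5
assert trade_value(2.0, None, 0.5) > 0    # (3) World 3 -> 1, get 0.5: +0.3

# All three trades look positive, yet the net payment is -1 + 0.5 = -0.5
# utilons for ending up back in World 1.
```

The pump survives because removing Alice is discounted ten-fold while improving her life is not, so the cycle can still be closed at a profit to the pumper.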

The philosophical literature does consider person-affecting views to which this money pump does not apply. I've found these views to be unappealing for other reasons but I have not considered all of them and am not an expert in the topic.

If you're interested in this topic, Arrhenius proves an impossibility result that applies to all possible population ethics (not just person-affecting views), so you need to bite at least one bullet.

EDIT: Adding more FAQs based on comments:

Q. Why doesn't this view anticipate that trade 2 will be available, and so reject trade 1?

You can either have a local decision rule that doesn't take into account future actions (and so excludes this sort of reasoning), or you can have a global decision rule that selects an entire policy at once. I'm talking about the local kind.

You could have a global decision rule that compares worlds and ignores happy people who don't exist in all worlds. In that case you avoid this money pump, but have other problems -- see Chapter 4 of On the Overwhelming Importance of Shaping the Far Future.

You could also take the local decision rule and try to turn it into a global decision rule by giving it information about what decisions it would make in the future. I'm not sure how you'd make this work but I don't expect great results.

Q. This is a very consequentialist take on person-affecting views. Wouldn't a non-consequentialist version (e.g. this comment) make more sense?

Personally I think of non-consequentialist theories as good heuristics that approximate the hard-to-compute consequentialist answer, and so I often find them irrelevant when thinking about theories applied in idealized thought experiments. If you are instead sympathetic to non-consequentialist theories as being the true answer, then the argument in this post probably shouldn't sway you too much. If you are in a real-world situation where you have person-affecting intuitions, those intuitions are there for a reason and you probably shouldn't completely ignore them until you know that reason.

Q. Doesn't total utilitarianism also have problems?

Yes! While I am more sympathetic to total utilitarianism than person-affecting views, this post is just a short reference post about one particular argument. I am not defending claims like "this argument demolishes person-affecting views" or "total utilitarianism is the correct theory" in this post.

## Further resources

1. ^

For this post I'll assume that Alice's life is net positive, since "asymmetric" views say that if Alice would have a net negative life, then it would be actively bad (rather than neutral) to move Alice from "won't exist" to "will exist".

2. ^

A previous version of this post incorrectly called this a Dutch book.

3. ^

By giving it $0.01, I'm making it so that it strictly prefers to take the trade (rather than being indifferent to the trade, as it would be if there were no money involved).

## Comments

**EJT** (7mo):

My impression is that each family of person-affecting views avoids the Dutch book here. Here are four families:

1. Presentism: only people who presently exist matter.
2. Actualism: only people who will exist (in the actual world) matter.
3. Necessitarianism: only people who will exist regardless of your choice matter.
4. Harm-minimisation views (HMV): minimize harm, where harm is the amount by which a person's welfare falls short of what it could have been.

Presentists won't make trade 2, because Alice doesn't exist yet. Actualists can permissibly turn down trade 3, because if they turn down trade 3 then Alice will actually exist and her welfare matters. Necessitarians won't make trade 2, because it's not the case that Alice will exist regardless of their choice. HMVs won't make trade 1, because Alice is harmed in World 2 but not World 1.

**Rohin Shah** (7mo):

I agree that most philosophical literature on person-affecting views ends up focusing on transitive views that can't be Dutch booked in this particular way (I think precisely because not many people want to defend intransitivity).

I think the typical person-affecting intuitions that people actually have are better captured by the view in my post than by any of these four families of views, and that's the audience to which I'm writing. This wasn't meant to be a serious engagement with the population ethics literature; I've now signposted that more clearly.

EDIT: I just ran these positions (except actualism, because I don't understand how you make decisions with actualism) by someone who isn't familiar with population ethics, and they found all of them intuitively ridiculous.
They weren't thrilled with the view I laid out but they did find it more intuitive.

**EJT** (7mo):

Okay, that seems fair. And I agree that the Dutch book is a good argument against the person-affecting intuitions you lay out. But the argument only shows that people initially attracted to those person-affecting intuitions should move to a non-Dutch-bookable person-affecting view. If we want to move people away from person-affecting views entirely, we need other arguments. The person-affecting views endorsed by philosophers these days are more complex than the families I listed. They're not so intuitively ridiculous (though I think they still have problems; I have a couple of draft papers on this).

Also a minor terminological note: you've called your argument a Dutch book and so have I, but I think it would be more standard to call it a money pump. Dutch books are a set of gambles all taken at once that are guaranteed to leave a person worse off. Money pumps are a set of trades taken one after the other that are guaranteed to leave a person worse off.

**Rohin Shah** (7mo):

Fwiw, I wasn't particularly trying to do this. I'm not super happy with any particular view on population ethics, and I wouldn't be that surprised if the actual view I settled on after a long reflection was pretty different from anything that exists today, and does incorporate something vaguely like person-affecting intuitions.

I mostly notice that people who have some but not much experience with longtermism are often very aware of the Repugnant Conclusion and other objections to total utilitarianism, and conclude that actually person-affecting intuitions are the right way to go. In at least two cases they seemed to significantly reconsider upon presenting this argument. It seems to me like, amongst the population of people who haven't engaged with the population ethics literature, critiques of total utilitarianism are much better known than critiques of person-affecting intuitions.
I'm just trying to fix that discrepancy.

Thanks, I've changed this.

**EJT** (7mo):

> I'm just trying to fix that discrepancy.

I see. That seems like a good thing to do. Here's another good argument against person-affecting views that can be explained pretty simply, due to Tomi Francis. Person-affecting views imply that it's not good to add happy people. But Q is better than P, because Q is better for the hundred already-existing people, and the ten billion extra people in Q all live happy lives. And R is better than Q, because moving to R makes one hundred people's lives slightly worse and ten billion people's lives much better. Since betterness is transitive, R is better than P. R and P are identical except for the extra ten billion people living happy lives in R. Therefore, it's good to add happy people, and person-affecting views are false.

**MichaelStJules** (5mo):

There are also Parfit's original Mere Addition argument and Huemer's Benign Addition argument for the Repugnant Conclusion. They're the familiar A≤A+<B arguments: add a large marginally positive welfare population, and then redistribute the welfare evenly. Except with Huemer's, A<A+ strictly, because those in A are made slightly better off in A+. Huemer's is here: https://philpapers.org/rec/HUEIDO

I think this kind of argument can be used to show that actualism endorses the RC and Very RC in some cases, because the original world without the extra people does not maximize "self-conditional value" (if the original people in A are better off in A+, via benign addition), whereas B does, using additive aggregation. I think the Tomi Francis example also only has R maximizing self-conditional value, among the three options, when all three are available. And we could even make the original 100 people worse off than 40 each in R, and this would still hold.
Voting methods extending from pairwise comparisons also don't seem to avoid the problem, either: https://forum.effectivealtruism.org/posts/fqynQ4bxsXsAhR79c/teruji-thomas-the-asymmetry-uncertainty-and-the-long-term?commentId=ockB2ZCyyD8SfTKtL

I guess HMVs, presentist and necessitarian views may work to avoid the RC and VRC, but AFAICT, you only get the procreation asymmetry by assuming some kind of asymmetry with these views. And they all have some pretty unusual prescriptions I find unintuitive, even as someone very sympathetic to person-affecting views. Frick's conditional interests still seem promising and could maybe be used to justify the procreation asymmetry for some kind of HMV or negative axiology.

**Rohin Shah** (7mo):

Nice, I hadn't seen this argument before.

**MichaelStJules** (7mo):

This all seems right if all the trades are known to be available ahead of time and we're making all these decisions before Alice would be born. However, we can specify things slightly differently.

Presentists and necessitarians who have made trade 1 will make trade 2 if it's offered after Alice is born, but then they can turn down trade 3 at that point, as trade 3 would mean killing Alice or an impossible world where she was never born. However, if they anticipate trade 2 being offered after Alice is born, then I think they shouldn't make trade 1, since they know they'll make trade 2 and end up in World 3 minus some money, which is worse than World 1 for presently existing people and necessary people before Alice is born.

HMVs would make trade 1 if they don't anticipate trade 2/World 3 minus some money being an option, but end up being wrong about that.
**EJT** (7mo):

Agreed.

**Michael_Wiebe** (7mo):

Is the difference between actualism and necessitarianism that actualism cares about both (1) people who exist as a result of our choices, and (2) people who exist regardless of our choices; whereas necessitarianism cares only about (2)?

**EJT** (7mo):

Yup!

**Michael_Wiebe** (7mo):

Hm, then I find necessitarianism quite strange. In practice, how do we identify people who exist regardless of our choices?

**EJT** (7mo):

I think in ordinary cases, necessitarianism ends up looking a lot like presentism. If someone presently exists, then they exist regardless of my choices. If someone doesn't yet exist, their existence likely depends on my choices (there's probably something I could do to prevent their existence).

Necessitarianism and presentism do differ in some contrived cases, though. For example, suppose I'm the last living creature on Earth, and I'm about to die. I can either leave the Earth pristine or wreck the environment. Some alien will soon be born far away and then travel to Earth. This alien's life on Earth will be much better if I leave the Earth pristine. Presentism implies that it doesn't matter whether I wreck the Earth, because the alien doesn't exist yet. Necessitarianism implies that it would be bad to wreck the Earth, because the alien will exist regardless of what I do.

> More generally, Arrhenius proves an impossibility result that applies to all possible population ethics (not just person-affecting views), so (if you want consistency) you need to bite at least one of those bullets.

That result (The Impossibility Theorem), as stated in the paper, has some important assumptions not explicitly mentioned in the result itself, which are instead made early in the paper and assume away effectively all person-affecting views before the 6 conditions are introduced. The assumptions are completeness, transitivity and the independence of irrelevant alternatives.
You could extend the result to include incompleteness, intransitivity, dependence on irrelevant alternatives or being in principle Dutch bookable/money pumpable as alternative "bullets" you could bite on top of the 6 conditions. (Intransitivity, dependence on irrelevant alternatives and maybe incompleteness imply Dutch books/money pumps, so you could just add Dutch books/money pumps and maybe incompleteness.)

**MichaelStJules** (7mo):

There are some other similar impossibility results that apply to, I think, basically all aggregative views, person-affecting or not (although there are non-aggregative views which avoid them: https://forum.effectivealtruism.org/posts/gCkHoXvDjEKSK22Wp/future-proof-ethics?commentId=Z4Lyhi9mb3CWbmMxm#Z4Lyhi9mb3CWbmMxm). See Spears and Budolfson:

1. https://philpapers.org/rec/BUDWTR
2. http://www.stafforini.com/docs/Spears%20&%20Budolfson%20-%20Repugnant%20conclusions.pdf

The results are basically that all aggregative views in the literature allow small changes in individual welfares in a background population to outweigh the replacement of an extremely high positive welfare subpopulation with a subpopulation with extremely negative welfare, an extended very repugnant conclusion. The size and welfare levels of the background population, the size of the small changes and the number of small changes will depend on the exact replacement and view.

The result is roughly:

This is usually through a much much larger number of small changes to the background population than the number of replaced individuals, or the small changes happening to individuals who are extremely prioritized (as in lexical views and some person-affecting views). (I think the result actually also adds a huge marginally positive welfare population along with the negative welfare one, but I don't think this is necessary or very interesting.)
**Rohin Shah** (7mo):

Yeah, this is what I had in mind.

**Mod note:** I've enabled agree-disagree voting on this thread. This is still in the experimental phase, see the first time we did so here. Still very interested in feedback.

Maybe I have the wrong idea about what "person-affecting view" refers to, but I thought a person-affecting view was a non-consequentialist ideology that would not take trade 3, i.e. it is neutral about moving from no person to happy person but actively dislikes moving from happy person to no person.

Wouldn't the view dislike it if the happy person was certain to be born, but not in the situation where the happy person's existence is up to us? But I agree strongly with person-affecting views working best in a non-consequentialist framework!

I think I find step 1 the most dubious – "Receive $0.01 to move from World 1 to World 2 ('Not making happy people')".

If we know that world 3 is possible, we're accepting money for creating a person under conditions that are significantly worse than they could be. That seems quite bad even if Alice would rather exist than not exist.

My reply violates the independence of irrelevant(-seeming) alternatives condition. I think that's okay.

To give an example, imagine some millionaire (who uses 100% of their money selfishly) would accept $1,000 to bring a child into existence that will grow up reasonably happy but have a lot of struggles – let's say she'll only have the means of a bottom-10%-income American household. Seems bad if the millionaire could instead bring a child into existence that is better positioned to do well in life and achieve her goals! Now imagine if a bottom-10%-income American family wants to bring a ch...

**Rohin Shah** (7mo):

Added an FAQ:

In your millionaire example, I think the consequentialist explanation is "if people generally treat it as bad when Bob takes action A with mildly good first-order consequences when Bob could instead have taken action B with much better first-order consequences, that creates an incentive through anticipated social pressure for Bob to take action B rather than A when otherwise Bob would have taken A rather than B". (Notably, this reason doesn't apply in the idealized thought experiment where no one ever observes your decisions and there is no difference between the three worlds other than what was described.)

**Lukas_Gloor** (7mo):

On my favored view, this isn't the case. I think of creating new people/beings as a special category. I also am mostly on board with consequentialism applied to limited domains of ethics, but I'm against treating all of ethics under consequentialism, especially if people try to do the latter in a moral realist way where they look for a consequentialist theory that defines everyone's standards of ideally moral conduct.

I am working on a post titled "Population Ethics Without an Objective Axiology." Here's a summary from that post:

* The search for an objective axiology assumes that there's a well-defined "impartial perspective" that determines what's intrinsically good/valuable. Within my framework, there's no such perspective.
* Another way of saying this goes as follows.
My framework conceptualizes ethics as being about goals/interests. (There are, I think, good reasons for this – see my post Dismantling Hedonism-inspired Moral Realism (https://forum.effectivealtruism.org/posts/oXhhxeQMBjJriMjb8/dismantling-hedonism-inspired-moral-realism) for why I object to ethics being about experiences, and my post Against Irreducible Normativity (https://forum.effectivealtruism.org/posts/C2GpA894CfLcTXL2L/against-irreducible-normativity) on why I don't think ethics is about things that we can't express in non-normative terminology.) Goals can differ between people (https://forum.effectivealtruism.org/posts/8D9qsmGEdKsrfGEHw/the-life-goals-framework-how-i-reason-about-morality-as-an#Why_life_goals_differ_between_people) and there's no correct goal for everyone to adopt.

* In fixed-population contexts, a focus on goals/interests can tell us exactly what to do: we best benefit others by doing what these others (people/beings) would want us to do.
* In population ethics, this approach no longer works so well – it introduces ambiguities. Creating new people/beings changes the number of interests/goals to look out for.

**Rohin Shah** (7mo):

I can't tell what you mean by an objective axiology. It seems to me like you're equivocating between a bunch of definitions:

1. An axiology is objective if it is universally true / independent of the decision-maker / not reliant on goals / implied by math. (I'm pointing to a cluster of intuitions rather than giving a precise definition.)
2. An axiology is objective if it provides a decision for every possible situation you could be in. (I would prefer to call this a "complete" axiology, perhaps.)
3. An axiology is objective if its decisions can be computed by taking each world, summing some welfare function over all the people in that world, and choosing the decision that leads to the world with a higher number. (I would prefer to call this an "aggregative" axiology, perhaps.)
Examples of definition 1: Examples of definition 2: Examples of definition 3:

I don't think I'm relying on an objective-axiology-by-definition-1. Any time I say "good" you can think of it as "good according to the decision-maker" rather than "objectively good". I think this doesn't affect any of my arguments.

It is true that I am imagining an objective-axiology-by-definition-2 (which I would perhaps call a "complete axiology"). I don't really see from your comment why this is a problem. I agree this is "maximally ambitious morality" rather than "minimal morality". Personally, if I were designing "minimal morality" I'd figure out what "maximally ambitious morality" would recommend we design as principles that everyone could agree on and follow, and then implement those. I'm skeptical that if I ran through such a procedure I'd end up choosing person-affecting intuitions (in the sense of "Making People Happy, Not Making Happy People"; I think I plausibly would choose something along the lines of "if you create new people make sure they have lives well-beyond-barely worth living"). Other people might differ from me, since they have different goals, but I suspec

**Lukas_Gloor** (7mo):

I think this is a crux between us (or at least an instance where I didn't describe very well how I think of "minimal morality"). (A lot of the other points I've been making, I see mostly as "here's a defensible alternative to Rohin's view" rather than "here's why Rohin is wrong to not find (something like) person-affecting principles appealing.")

In my framework, it wouldn't be fair to derive minimal morality from a specific take on maximally ambitious morality. People who want to follow some maximally ambitious morality (this includes myself) won't all pick the same interpretation of what that means. Not just for practical reasons, but fundamentally: for maximally ambitious morality, different interpretations are equally philosophically defensible.
Some people may have the objection "Wait, if maximally ambitious morality is under-defined, why adopt confident and specific views for how you want things to be? Why not keep your views on it under-defined, too?" (See Richard Ngo's post on Moral indefinability: https://www.lesswrong.com/posts/ACo8Md94aX7qRpPi7/arguments-for-moral-indefinability.) I have answered this objection in this section (https://forum.effectivealtruism.org/posts/6STzb6XBAyu3Xxxka/the-moral-uncertainty-rabbit-hole-fully-excavated#Anticipating_objections__Dialogue_) of my post The Moral Uncertainty Rabbit Hole, Fully Excavated. In short, I give an analogy between "doing what's maximally moral" and "becoming ideally athletically fit." In the analogy, someone grows up with the childhood dream of becoming "ideally athletically fit" in a not-further-specified way. They then have the insight that "becoming ideally athletically fit" has different defensible interpretations – e.g., the difference between a marathon runner or a 100m-sprinter (or someone who is maximally fit in reducing heart attack risks – which are actually elevated for pro

**Rohin Shah** (7mo):

Yeah, you could modify the view I laid out to say that moving from "happy person" to "no person" has a disutility equal in magnitude to the welfare that the happy person would have had. This new view can't be Dutch booked because it never takes trades that decrease total welfare.

My objection to it is that you can't use it for decision-making because it depends on what the "default" is. For example, if you view x-risk reduction as preventing a move from "lots of happy people to no people", this view is super excited about x-risk reduction, but if you view x-risk reduction as a move from "no people to lots of happy people", this view doesn't care.

(You can make a similar objection to the view in the post, though it isn't as stark.
In my experience, people's intuitions are closer to the view in the post, and they find the Dutch book argument at least moderately convincing.)

**Erich_Grunewald** (7mo):

That still seems somehow like a consequentialist critique though. Maybe that's what it is and was intended to be. Or maybe I just don't follow? From a non-consequentialist point of view, whether a "no people to lots of happy people" move (like any other move) is good or not depends on other considerations, like the nature of the action, our duties or virtue. I guess what I want to say is that "going from state A to state B"-type thinking is evaluating world states in an outcome-oriented way, and that just seems like the wrong level of analysis for those other philosophies. From a consequentialist point of view, I agree.

**Rohin Shah** (7mo):

I totally agree this is a consequentialist critique. I don't think that negates the validity of the critique. Okay, but I still don't know what the view says about x-risk reduction (the example in my previous comment)?

**Erich_Grunewald** (7mo):

Agreed -- I didn't mean to imply it was. By "the view", do you mean the consequentialist person-affecting view you argued against, or one of the non-consequentialist person-affecting views I alluded to? If the former, I have no idea. If the latter, I guess it depends on the precise view. On the deontological view I find pretty plausible, we have, roughly speaking, a duty to humanity, and that'd mean actions that reduce x-risk are good (and vice versa). (I think there are also other deontological reasons to reduce x-risk, but that's the main one.) I guess I don't see any way that changes depending on what the default is? I'll stop here since I'm not sure this is even what you were asking about ...

**Rohin Shah** (7mo):

Oh, to be clear, my response to RedStateBlueState's comment was considering a new still-consequentialist view, that wouldn't take trade 3. None of the arguments in this post are meant to apply to e.g. deontological views.
I've clarified this in my original response.

**RedStateBlueState** (7mo):

Right, the "default" critique is why people (myself included) are consequentialists. But I think the view outlined in this post is patently absurd and nobody actually believes it. Trade 3 means that you would have no reservations about killing a (very) happy person for a couple utilons!

**Rohin Shah** (7mo):

Oh, the view here only says that it's fine to prevent a happy person from coming into existence, not that it's fine to kill an already existing person.

[comment deleted]

I don't actually think Dutch books and money pumps are very practically relevant in charitable/career decision-making. To the extent that they are, you should aim to anticipate others attempting to Dutch book or money pump you and model sequences of decisions, just like you should aim to anticipate any other manipulation or exploitation.

EDIT: You don't need to commit to views or decision procedures which are in principle not Dutch bookable/money pumpable. Furthermore, "greedy" (as in "greedy algorithm") or short-sighted EV maximization is also suboptimal ...

**Rohin Shah** (7mo):

I don't think the case for caring about Dutch books is "maybe I'll get Dutch booked in the real world". I like the Future-proof ethics series (https://www.cold-takes.com/future-proof-ethics/) on why to care about these sorts of theoretical results. I definitely agree that there are issues with total utilitarianism as well.

**[anonymous]** (7mo):

If I may ask, why do you believe there exist any future-proof ethics? I kinda suspect no ethics are future-proof in this sense, hence had to ask.

**Rohin Shah** (7mo):

Which sense do you mean? I like Holden's description:

Personally I'm thinking more of the former reason than the latter reason. I think "things I'd approve of after more thinking and learning" is reasonably precise as a definition, and seems pretty clearly like a thing that can be approximated.
**[anonymous]** (7mo):

I mean something like: if I'm exposed to a persuasive sequence of words A, I'll become strongly convinced of one set of values, and if I'm exposed to a different persuasive sequence of words B, I'd become strongly convinced of a different set of values. Instead of words, it could also be observations or experiences, which I'm assuming are part of "learning and thinking" as intended here. (And with digital wireheading, for instance, we might be able to generate a lot of such experiences to subject each other to.)

It isn't obvious to me that different sets of experiences in different futures will still cause us to converge to the same "future-proof" values. And maybe that's because humans are faulty reasoners, and ideal reasoners can in fact do better. But if I had to self-modify into an ideal reasoner, I'm not sure what a "moral reasoning process" even looks like. Our best (or at least good) formal model for an ideal reasoner is one that just happens to know its unchangeable utility function from birth, not one that reasons about what values it should have, in any meaningful sense. And if you cannot formalise what these moral reasoning processes could look like, even in theory, I also find it easier to believe any such process can be attacked. (Probably such security mindset.) Keen on your thoughts!

**Rohin Shah** (7mo):

I definitely think these processes can be attacked. When I say "what I'd approve of after learning and thinking more", I'm imagining that there isn't any adversarial action during the learning and thinking. If I were forcibly exposed to a persuasive sequence of words, or manipulated / tricked into thinking that some sequence of words informed me of benign facts when it was in fact selected to hack my mind, that no longer holds.

**[anonymous]** (7mo):

This is fair! So to restate, your claim is that in the absence of such adversaries, moral reasoning processes will in fact all converge to the same place.
Even if we're exposed to wildly different experiences/observations/futures, the only thing that determines whether there's convergence or divergence is whether those experiences contain intelligent adversaries or not.

I have some intuitions against this claim too, but I'm not sure how to make my thoughts airtight or present them well. I'll still try! (If you think anything here is valuable or that I should spend more time trying to present anything here better, do tell! My comment might be a bit rambly right now, sorry.)

Question 1: What precisely about our moral reasoning processes makes them unlikely to be attacked by "natural" conditions but attackable by an intelligently designed one? If I had to mathematically formally write down every possible future, what makes this distinction between natural and not natural a sharp distinction, and what makes it perfectly 100% set-theoretically overlap with the distinction between futures where our reasoning processes converge and where they don't?

One way to answer this is to point at some deep structures we can see today, in people's moral reasoning processes. If you have any, I'd be keen to see them. Another is to rely on intuitions we have today and trust that in the future we can formalise them. I find this plausible, but if you're claiming this I also don't know how to attach a lot of confidence to just intuitions. Maybe I need to see your intuitions!

Yet another is to claim that sure, in theory there might be "natural" conditions that can attack our reasoning processes, it's just that those natural conditions are super unlikely in practice. This will then shift the question from theoretical to practical, from theoretically possible futures to futures actually likely in practice. As a practical matter, I don't know how we can say with high confidence

**Rohin Shah** (6mo):

When I said that there isn't any adversarial action, I really should have said that you are safe and your learning process is under your control.
By default I'm imagining a reflection process under which (a) all of your basic needs are met (e.g. you don't have to worry about starving), (b) you get to veto any particular experience happening to you, (c) you can build tools (or have other people build tools) that help with your reflection, including by building situations where you can have particular experiences, or by creating simulations of yourself that have experiences and can report back, (d) nothing is trying to manipulate or otherwise attack you (unless you specifically asked for the manipulation / attack), whether it is intelligently designed or natural, and (e) you don't have any time pressure on finishing the reflection. To be clear this is pretty stringent -- the current state of affairs, where you regularly go around talking to people who try to persuade you of stuff, doesn't meet the criteria.

Given conditions of safety and control over the reflection, it's also not that I think every such process converges to exactly the same place. Rather I'd say that (a) I feel pretty intuitively happy about anything that you get to via such a process, so it seems fine to get any one of them, and (b) there is enough convergence that it makes sense to view that as a target which we can approximate or move towards.

Part of the reflection process would be to seek out different experiences / observations, so I'm not sure they would be "wildly different". If they're attacked by natural conditions, that violates my requirements too. (I don't think I ever said the adversarial action had to be "intelligently designed" instead of "natural"?) In this process fundamentally everything that happens to you is meant to be your own choice. It's still possible that you make a mistake, e.g. you send a simulation of yourself to listen to a persuasive argument and then report back, the simu…

3[anonymous]6mo

Thanks for the reply!
Sorry, I really tried writing you a reply, even deleted a few I wrote, but I think I should probably spend some time on it myself first so I can present it better. If I had to really shorten it: in general I feel like we don't have that much "free choice" even in simulations; we're anchored to the observations we've actually had, and our creativity is very limited. And different futures can provide us with wildly different observations, all unimaginable to people in other futures, and to people in 2022. But defending this and other things will require a lot more effort on my part. Sorry. Thanks for your time anyways!

7Lukas_Gloor6mo

My post The Moral Uncertainty Rabbit Hole, Fully Excavated [https://forum.effectivealtruism.org/posts/6STzb6XBAyu3Xxxka/the-moral-uncertainty-rabbit-hole-fully-excavated] seems relevant to the discussion here. In that post, I describe examples of "reflection environments" that define ideal reasoning conditions (to specify one's "idealized values"). I talk about pitfalls of reflection environments and judgment calls we'd have to make within that environment. (Pitfalls are things that are bad if they happen but could be avoided at least in theory. Judgment calls are things that aren't bad per se but seem to introduce path dependencies that we can't avoid, which may reduce the chance of convergent outcomes.) I talk about "reflection strategies," which describe how someone goes about their moral reflection inside a reflection environment. I distinguish between conservative and open-minded reflection strategies. They differ primarily on whether someone has already formed convictions (it's a gradual difference). I describe how open-minded reflection strategies come at some risk of leading to under-defined outcomes. (I argue that this isn't necessarily a problem, but it's something people want to be aware of.)
Here's a section from somewhere in the middle of the post that summarizes some conclusions:

Overall, I think Holden's notion of future-proof values is intelligible and holds up to deeper analysis, but I'd imagine that a lot of people underestimate the degree to which it's useful to already form convictions on some ways of reasoning or some components of one's values, to avoid the reflection outcome becoming under-defined to a degree we might find unsatisfying.

3[anonymous]6mo

Thanks for this comment!

4Derek Shiller7mo

Do you have an example?

3MichaelStJules7mo

See this comment by Paul Christiano on LW based on St. Petersburg lotteries [https://www.lesswrong.com/posts/gJxHRxnuFudzBFPuu/better-impossibility-result-for-unbounded-utilities?commentId=hrsLNxxhsXGRH9SRx] (and my reply).

1Derek Shiller7mo

Interesting. It reminds me of a challenge for denying countable additivity: I'm inclined to think that this is a problem with infinities in general, not with unbounded utility functions per se.

2MichaelStJules7mo

I think it's a problem for the conjunction of allowing some kinds of infinities and doing expected value maximization with unbounded utility functions. I think EV maximization with bounded utility functions isn't vulnerable to "isomorphic" Dutch books/money pumps or violations of the sure-thing principle. E.g., you could treat the possible outcomes of a lottery as all local parts of a larger single universe to aggregate, but then conditioning on the outcome of the first St. Petersburg lottery and comparing to the second lottery would correspond to comparing a local part of the first universe to the whole of the second universe; but the move from the whole first universe to the local part of the first universe can't happen via conditioning, and the arguments depend on conditioning.
Bounded utility functions have problems that unbounded utility functions don't, but these are in normative ethics and about how to actually assign values (including in infinite universes), not about violating plausible axioms of (normative) rationality/decision theory.

1RedStateBlueState7mo

After reading the linked comment I think the view that total utilitarianism can be Dutch booked is fairly controversial (there is another unaddressed comment I quite agree with), and on a page like this one I think it's misleading to state as fact in a comment that total utilitarianism can be Dutch booked in a similar way that person-affecting views can be Dutch booked.

2MichaelStJules7mo

I should have specified EV maximization with an unbounded social welfare function, although the argument applies somewhat more generally; I've edited this into my top comment. Looking at Slider's reply [https://www.lesswrong.com/posts/gJxHRxnuFudzBFPuu/better-impossibility-result-for-unbounded-utilities?commentId=XfPqdt5ZjprBMuTCp] to the comment I linked, assuming that's the one you meant (or did you have another in mind?):

1. Slider probably misunderstood Christiano about truncation, because Christiano meant that you'd truncate the second lottery at a point that depends on the outcome of the first lottery. For any actual value outcome X of the original St. Petersburg lottery, half St. Petersburg can be truncated at some point and still have a finite expected value greater than X. (EDIT: However, I guess the sure-thing principle isn't relevant here with conditional truncation, since we aren't comparing only two fixed options anymore.)
2. I don't understand what Slider meant in the second paragraph, and I think it's probably missing the point.
3. The third paragraph misses the point: once the outcome is decided for the first St. Petersburg lottery, it has finite value, and half St. Petersburg still has infinite expected value, which is greater than any finite value.
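The conditional truncation in point 1 rests on a simple fact about truncated St. Petersburg lotteries, sketched below with the textbook payoff scheme (2^n with probability 2^-n) as a stand-in for Christiano's exact construction:

```python
def truncated_ev(n_max):
    # The lottery pays 2**n with probability 2**-n, so every stage
    # contributes exactly 1 to the expectation; truncating after n_max
    # stages gives expected value n_max, while the untruncated
    # lottery's expectation diverges.
    return sum((2 ** -n) * (2 ** n) for n in range(1, n_max + 1))

# For any finite realized payoff X of one lottery, a truncation of a
# second such lottery eventually has strictly higher expected value:
X = 2 ** 6  # a possible realized payoff
assert truncated_ev(X + 1) > X
```

Because the truncated expectation grows without bound, the truncation point can always be chosen after seeing the first lottery's outcome.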
1RedStateBlueState7mo

Yes, I should have thought more about Slider's reply before posting; I take back my agreement. Still, I don't find Dutch booking convincing in Christiano's case. The reason to reject a theory based on Dutch booking is that there is no logical choice to commit to, in this case to maximize EV. I don't think this applies to the Paul Christiano case, because the second lottery does not have higher EV than the first. Yes, once you play the first lottery and find out that it has a finite value, the second one will have higher EV, but until then the first one has higher EV (in an infinite way) and you should choose it. But again I think there can be reasonable disagreement about this; I just think equating Dutch booking for the person-affecting view and for the total utilitarian view is misleading. These are substantially different philosophical claims.

2MichaelStJules7mo

I think a similar argument can apply to person-affecting views and the OP's Dutch book argument:

2MichaelStJules7mo

I agree that you can give different weights to different Dutch book/money pump arguments. I do think that if you commit 100% to complete preferences over all probability distributions over outcomes and invulnerability to Dutch books/money pumps, then expected utility maximization over each individual decision with an unbounded utility function is ruled out. As you mention, one way to avoid this St. Petersburg Dutch book/money pump is to just commit to sticking with A if A>B ex ante, regardless of the actual outcome of A (+ some other conditions, e.g. A and B both have finite value under all outcomes, and A has infinite expected value), but switching to C under certain other conditions. You may have similar commitment moves for person-affecting views, although you might find them all less satisfying.
You could commit to refusing one of the 3 types of trades in the OP, or to doing so under specific conditions, or to just never completing the last step in any Dutch book, even if you'd know you'd want to. I think those with person-affecting views should usually refuse moves like trade 1 if they think they're not too unlikely to make moves like trade 2 afterwards, but this is messier, and depends on your distributions over what options will become available in the future depending on your decisions. The above commitments for St. Petersburg-like lotteries don't depend on what options will be available in the future or your distributions over them. Trade 3 is removing a happy person, which is usually bad on a person-affecting view, possibly bad enough to not be worth less than $0.99, and thus not be Dutch booked.

4Rohin Shah7mo
Responded [https://forum.effectivealtruism.org/posts/DCZhan8phEMRHuewk/person-affecting-views-can-often-be-dutch-booked?commentId=qGR9z9hh4r4gnxg9b] in the other comment thread.
2[comment deleted]7mo

In practice, I think those with person-affecting views should refuse moves like trade 1 if they "expect" to subsequently make moves like trade 2, because World 1 ≥ World 3*. This would depend on the particulars of the numbers, credences and views involved, though.

EDIT: Lukas discussed and illustrated this earlier here.

*EDIT2: replaced > with ≥.
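The reasoning above can be sketched with a small expected-value calculation; p and q are illustrative subjective probabilities that trades 2 and 3, respectively, are offered and taken after the previous trade:

```python
def expected_payment(p, q):
    # Net expected money from accepting trade 1 (receive $0.01),
    # then taking trade 2 (pay $1.00) with probability p, and
    # trade 3 (receive $0.01) with probability p * q.
    return 0.01 - 1.00 * p + 0.01 * p * q

# If both later trades are certain, the agent completes the full cycle
# World 1 -> World 2 -> World 3 -> World 1 and simply loses $0.98:
assert abs(expected_payment(1.0, 1.0) - (-0.98)) < 1e-9
```

Whether refusing trade 1 is the right call then depends on how this expected loss weighs against the view's evaluation of the worlds the agent might end up in, which is the credence-dependence noted in the comment.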

4Rohin Shah7mo
You can either have a local decision rule that doesn't take into account future actions (and so excludes this sort of reasoning), or you can have a global decision rule that selects an entire policy at once. I was talking about the local kind.

You could have a global decision rule that compares worlds and ignores happy people who will only exist in some of the worlds. In that case I'd refer you to Chapter 4 of On the Overwhelming Importance of Shaping the Far Future [https://drive.google.com/file/d/0B8P94pg6WYCIc0lXSUVYS1BnMkE/view?resourcekey=0-nk6wM1QIPl0qWVh2z9FG4Q] . EDIT: Added as an FAQ.

(Nitpick: Under the view I laid out, World 1 is not better than World 3? You're indifferent between the two.)
2MichaelStJules7mo
Thanks, it's helpful to make this distinction explicit.

Aren't such local decision rules generally vulnerable to Dutch book arguments, though? I suppose PAVs with local decision rules are vulnerable to Dutch books even when the future options are fixed (or otherwise don't depend on past choices or outcomes), whereas EU maximization with a bounded utility function isn't.

I don't think anyone should aim towards a local decision rule as an ideal, though, so there's an important question of whether your Dutch book argument undermines person-affecting views much at all relative to alternatives. Local decision rules will underweight option value, value of information, investments for the future, and basic things we need to do to survive. We'd underinvest in research, and individuals would underinvest in their own education. Many people wouldn't work, since they only do it for their future purchases. Acquiring food and eating it are separate actions, too. (Of course, this also cuts against the problems for unbounded utility functions I mentioned.)

I'm guessing you mean this is a bad decision rule, and I'd agree. I discuss some alternatives (or directions) I find more promising here [https://forum.effectivealtruism.org/posts/DCZhan8phEMRHuewk/person-affecting-views-can-often-be-dutch-booked?commentId=2jkJEg3MSBBZbTmab] .

Whoops, fixed.
4Rohin Shah7mo
I think it's worth separating:

1. How to evaluate outcomes
2. How to make decisions under uncertainty
3. How to make decisions over time

The argument in this post is just about (1). Admittedly I've illustrated it with a sequence of trades (which seems more like (3)), but the underlying principle is just that of transitivity, which is squarely within (1).

When thinking about (1) I'm often bracketing out (2) and (3), and similarly when I think about (2) or (3) I often ignore (1) by assuming there's some utility function that evaluates outcomes for me. So I'm not saying "you should make decisions using a local rule that ignores things like information value"; I'm more saying "when thinking about (1) it is often a helpful simplifying assumption to consider local rules and see how they perform".

It's plausible that an effective theory will actually need to think about these areas simultaneously -- in particular, I feel somewhat compelled by arguments from (2) that you need to have a bounded mechanism for (1), which is mixing those two areas together. But I think we're still at the stage where it makes sense to think about these things separately, especially for basic arguments when getting up to speed (which is the sort of post I was trying to write).
2MichaelStJules7mo
Do you think the Dutch book still has similar normative force if the person-affecting view is transitive within option sets, but violates IIA? I think such views are more plausible than intransitive ones, and any intransitive view can be turned into a transitive one that violates IIA using voting methods like beatpath/Schulze.

With an intransitive view, I'd say you haven't finished evaluating the options if you only make the pairwise comparisons. The options involved might look the same, but now you have to really assume you're changing which options are actually available over time, which, under one interpretation of an IIA-violating view, fails to respect the view's assumptions about how to evaluate options: the options or outcomes available will just be what they end up being, and their value will depend on which are available. Maybe this doesn't make sense, because counterfactuals aren't actual? Against an intransitive view, it's just not clear which option to choose, and we can imagine deliberating from World 1 to World 1 minus $0.98 following the Dutch book argument if we're unlucky about the order in which we consider the options.

Suppose that if I take trade 1, I have a p≤100% subjective probability that trade 2 will be available, will definitely take it if it is, and, conditional on taking trade 2, a q≤100% subjective probability that trade 3 will be available and will definitely take it if it is. There are two cases:

1. If p=q=100%, then I stick with World 1 and don't make any trade. No Dutch book. (I don't think p=q=100% is reasonable to assume in practice, though.)
2. Otherwise, p<100% or q<100% (or generally my overall probability of eventually taking trade 3 is less than 100%; I ...

Q. In step 2, Alice was definitely going to exist, which is why we paid $1. But then in step 3 Alice was no longer definitely going to exist. If we knew step 3 was going to happen, then we wouldn't think Alice was definitely going to exist, and so we wouldn't pay $1.

If your person-affecting view requires people to definitely exist, taking into account all decision-making, then it is almost certainly going to include only currently existing people. This does avoid the Dutch book but has problems of its own, most notably time inconsistency. For example, perh

...
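The beatpath/Schulze move mentioned earlier in this thread can be sketched concretely: given cyclic pairwise margins, the strongest-beatpath relation is always transitive. The margin numbers below are hypothetical:

```python
def schulze_ranking(options, margin):
    # Turn (possibly cyclic) pairwise margins into a transitive ranking
    # via strongest beatpaths. margin[x][y] > 0 means x beats y pairwise.
    p = {x: {y: (margin[x][y] if margin[x][y] > 0 else 0)
             for y in options if y != x} for x in options}
    # Floyd-Warshall-style widest-path computation over beatpaths.
    for k in options:
        for i in options:
            if i == k:
                continue
            for j in options:
                if j in (i, k):
                    continue
                p[i][j] = max(p[i][j], min(p[i][k], p[k][j]))
    # x ranks above y iff x's strongest path to y beats y's path to x.
    def beats(x, y):
        return p[x][y] > p[y][x]
    return sorted(options,
                  key=lambda x: sum(beats(x, y) for y in options if y != x),
                  reverse=True)

# A pairwise cycle A > B > C > A with margins 3, 2, 1:
margin = {"A": {"B": 3, "C": -1}, "B": {"A": -3, "C": 2}, "C": {"A": 1, "B": -2}}
print(schulze_ranking(["A", "B", "C"], margin))  # prints ['A', 'B', 'C']
```

The resulting ranking is transitive despite the underlying cycle, but it depends on which options are in the menu, which is exactly the IIA violation under discussion.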
2Rohin Shah7mo
I was imagining a local decision rule that was global in only one respect, i.e. choosing which people to consider based on who would definitely exist regardless of what decision-making happens. But in hindsight I think this is an overly complicated rule that no one is actually thinking about; I'll delete it from the post.

Maybe this is a little off topic, but while Dutch book arguments are pretty compelling in these cases, I think the strongest and maybe one of the most underrated arguments against intransitive axiologies is Michael Huemer's in "In Defense of Repugnance"

https://philpapers.org/archive/HUEIDO.pdf

Basically he shows that intransitivity is incompatible with the combination of:

If x1 is better than y1 and x2 is better than y2, then x1 and x2 combined is better than y1 and y2 combined

and

If a state of affairs is better than another state of affairs, then it is not a...

Person-affecting views aren't necessarily intransitive; they might instead give up the independence of irrelevant alternatives, so that A≥B among one set of options, but A<B among another set of options. I think this is actually an intuitive way to explain the repugnant conclusion:

If your available options are S, then the rankings among them are:

1. S={A, A+, B}: A>B, B>A+, A>A+
2. S={A, A+}: A+≥A
3. S={A, B}: A>B
4. S={A+, B}: B>A+

A person-affecting view would need to explain why A>A+ when all three options are available, but A+≥A when only A+ and A are available.

However, violating IIA like this is also vulnerable to a Dutch book/money pump.
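The rankings listed above can be encoded directly, which makes the IIA violation explicit; the encoding is just a lookup table of the stated preferences, not a derivation:

```python
# Strict pairwise preferences for each available option set S,
# copied from the rankings listed above.
STRICT = {
    frozenset({"A", "A+", "B"}): {("A", "B"), ("B", "A+"), ("A", "A+")},
    frozenset({"A", "A+"}):      set(),          # A+ >= A: no strict preference
    frozenset({"A", "B"}):       {("A", "B")},
    frozenset({"A+", "B"}):      {("B", "A+")},
}

def better(x, y, available):
    # True iff x is strictly preferred to y given the available set.
    return (x, y) in STRICT[frozenset(available)]

# IIA violation: the A vs. A+ comparison flips when B joins the menu.
assert better("A", "A+", {"A", "A+", "B"})   # A > A+ with B available
assert not better("A", "A+", {"A", "A+"})    # indifference without B
```

Within each fixed option set the preferences here are transitive; it is only across option sets that the A vs. A+ comparison changes.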

3David Johnston5mo
I think this makes more sense than initial appearances. If A+ is the current world and B is possible, then the well-off people in A+ have an obligation to move to B (because B>A+). If A is the current world, A+ is possible but B impossible, then the people in A incur no new obligations by moving to A+, hence indifference. If A is the current world and both A+ and B are possible, then moving to A+ saddles the original people with an obligation to further move the world to B. But the people in A, by supposition, don't derive any benefit from the move to A+ and the obligation to move to B harms them. On the other hand, the new people in A+ don't matter because they don't exist in A. Thus A+>A in this case. Basically: options create obligations, and when we're assessing the goodness of a world we need to take into account welfare + obligations (somehow).
1Devin Kalish7mo
I'm really showing my lack of technical savvy today, but I don't really know how to embed images, so I'll have to sort of awkwardly describe this. For the classic version of the mere addition paradox this seems like an open possibility for a person-affecting view, but I think you can force pretty much any person-affecting view into intransitivity if you use the version in which every step looks like some version of A+. In other words, you start with something like A+; then in the next world, you have one bar that looks like B, plus another, lower but equally wide bar; then in the next step, you equalize to higher than the average of those in a B-like manner, and another equally wide, lower bar appears; etc. This seems to demand that basically any person-affecting view prefer each next step to the one before it, but also prefer the step two back to that one.
2MichaelStJules7mo
Views can be transitive within each option set, but have previous pairwise rankings changed as the option set changes, e.g. new options become available. I think you're just calling this intransitivity, but it's not technically intransitivity by definition, and is instead a violation of the independence of irrelevant alternatives. Transitivity + violating IIA seems more plausible to me than intransitivity, since the former is more action-guiding.
1Devin Kalish7mo
I agree that there's a difference, but I don't see how that contradicts the counterexample I just gave. Imagine a person-affecting view that is presented with every possible combination of people/welfare levels as options. I am suggesting that, even if it is sensitive to irrelevant alternatives, it will have strong principled reasons to favor some of the options in this set cyclically, if not doing so means ranking lower a world that is better on average for the pool of people the two have in common. Or maybe I'm misunderstanding what you're saying?
4MichaelStJules7mo
There are person-affecting views that will rank X<Y, or otherwise not choose X over Y, even if the average welfare of the individuals common to both X and Y is higher in X. A necessitarian view might just look at all the people common to all available options at once, maximize their average welfare, and then ignore contingent people (or use them to break ties, say). Many individuals common to two options X and Y could be ignored this way, because they aren't common to all available options, and so are still contingent.

Christopher J. G. Meacham, 2012 [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.508.7424&rep=rep1&type=pdf] (EA Forum discussion here [https://forum.effectivealtruism.org/posts/AWGwNWnMiTxPDJY39/critical-summary-of-meacham-s-person-affecting-views-and] ) describes another transitive person-affecting view, where I think something like "the available alternatives are so relevant, that they can even overwhelm one world being better on average than another for every person the two have in common", which you mentioned in your reply, is basically true. For each option, and each individual in the option, we take the difference between their maximum welfare across options and their welfare in that option, add them up, and then minimize the sum. Crucially, it's assumed that when someone doesn't exist in an option, we don't add their welfare loss from their maximum for that option, and when someone has negative welfare in an option but doesn't exist in another option, their maximum welfare across options will be at least 0. There are some technical details for matching individuals with different identities across worlds when there are people who aren't common to all options.

So, in the repugnant conclusion, introducing B makes A>A+, because it raises the maximum welfares of the extra people in A+.

Some views may start from pairwise comparisons that would give the kinds of cycles you described, but then apply a voting method like beatpath voting.
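Meacham's harm-minimization procedure, as described above, can be sketched in a few lines. The welfare numbers and person labels are illustrative, and the identity-matching subtleties are ignored:

```python
def harm(name, options):
    # Sum, over each person existing in this option, of their shortfall
    # from their maximum welfare across options. A person absent from some
    # option contributes 0 to the max there, so their max is at least 0;
    # a person present everywhere just uses their actual maximum welfare.
    total = 0
    for person, welfare in options[name].items():
        peak = max(o.get(person, 0) for o in options.values())
        total += peak - welfare
    return total

def choose(options):
    # Pick the available option minimizing total harm.
    return min(options, key=lambda name: harm(name, options))

# Mere-addition worlds with illustrative welfare levels:
A     = {"p1": 10, "p2": 10}                      # two well-off people
Aplus = {"p1": 10, "p2": 10, "q1": 1, "q2": 1}    # plus two barely-happy extras
B     = {"p1": 7, "p2": 7, "q1": 7, "q2": 7}      # equalized at a lower level

# With B available, the extras' maximum welfare rises to 7, penalizing A+:
assert choose({"A": A, "A+": Aplus, "B": B}) == "A"
# With only A and A+ available, the extras' maximum is 1 and both options
# have zero harm, matching the indifference A+ >= A:
assert harm("A", {"A": A, "A+": Aplus}) == harm("A+", {"A": A, "A+": Aplus}) == 0
```

This reproduces the option-set dependence discussed earlier in the thread: the A vs. A+ comparison changes once B is on the menu.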
1Devin Kalish7mo
This is interesting. I'm especially interested in the idea of applying voting methods to ranking dilemmas like this, which I'm noticing is getting more common. On the other hand, it sounds to me like person-affecting views mostly solve transitivity problems by functionally becoming less person-affecting in a strong, principled sense, except in toy cases. Meacham's view, from your description, sounds like it converges to averagism on steroids as you test it against a larger and more open range of possibilities (worse-off people lose a world points, but so does having more people, since it sums the differences up). If you modify it to look at the average of these differences, then the theory seems to become vulnerable to the repugnant conclusion again, as the quantity of added people who are better off in one step of the argument than the last can wash out the larger per-individual difference for those who have existed since earlier steps. Meanwhile the necessitarian view as you describe it seems to yield either no results in practice, if taken as described, in a large set of worlds with no one common to every world; or, if reinterpreted to only include the people common to the most worlds, it sort of gives you a utility monster situation in which a single person, or some small range of possible people, determines almost all of the value across all the different worlds. All of this does avoid intransitivity, though, as you say.
1Devin Kalish7mo
Or I guess maybe it could say that the available alternatives are so relevant, that they can even overwhelm one world being better on average than another for every person the two have in common?
1Devin Kalish7mo
(also, does anyone know how to make a ">" sign on a new line without it doing some formatty thing? I am bad with this interface, sorry)
2Gavin7mo
You could turn off markdown formatting in settings
1Devin Kalish7mo
Seems to have worked, thanks!