
Summary: The universe may very well be infinite, and hence contain an infinite amount of happiness and sadness. This causes several problems for altruists; for example: we can plausibly only affect a finite subset of the universe, and an infinite quantity of happiness is unchanged by the addition or subtraction of a finite amount of happiness. This would imply that all forms of altruism are equally ineffective.

Like everything in life, the canonical reference in philosophy about this problem was written by Nick Bostrom. However, I found that an area of economics known as "sustainable development" has actually made much further progress on this subject than the philosophy world. In this post I go over some of what I consider to be the most interesting results.

NB: This assumes a lot of mathematical literacy and familiarity with the subject matter, and hence isn't targeted to a general audience. Most people will probably prefer to read my other posts, "Ridiculous math things which ethics shouldn't depend on but does" and "Kill the young people".


1. Summary of the most interesting results

  1. There’s no ethical system which incorporates all the things we might want.
  2. Even if we have pretty minimal requirements, satisfactory ethical systems might exist, but we can't prove their existence, much less actually construct them.
  3. Discounted utilitarianism, whereby we value people less just because they are further away in time, is actually a pretty reasonable thing despite philosophers considering it ridiculous.
    1. (I consider this to be the first reasonable argument for locavorism I've ever heard)

2. Definitions

In general, we consider a population to consist of an infinite utility vector (u_0, u_1, …) where u_i is the aggregate utility of the generation alive at time i. Utility is a bounded real number (the fact that economists assume utility to be bounded confused me for a long time!). Our goal is to find a preference ordering over the set of all utility vectors which is in some sense “reasonable”. While philosophers have understood for a long time that finding such an ordering is difficult, I will present several theorems which show that it is in fact impossible.

Due to a lack of latex support I’m going to give English-language definitions and results instead of math-ey ones; interested people should look at the papers themselves anyway.

3. Impossibility Results

3.0 Specific defs

  • Strong Pareto: if you can make a generation better off, and none worse off, you should.
  • Weak Pareto: if you can make every generation better off, you should.
  • Intergenerational equity: utility vectors are unchanged in value by any permutation of their components.
    • There is an important distinction here between allowing a finite number of elements to be permuted and an infinite number; I will refer to the former as “finite intergenerational equity” and the latter as just “intergenerational equity”
  • Ethical relation: one which obeys both weak Pareto and finite intergenerational equity
  • Social welfare function: an order-preserving function from the set of populations (utility vectors) to the real numbers. (These definitions are illustrated in a short code sketch below.)
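
To make these definitions concrete, here is a small Python sketch that checks strong Pareto dominance and finite-permutation equivalence on finite truncations of two utility streams. The function names and the truncation approach are my own illustration (none of this comes from the papers), and genuinely infinite streams of course can't be checked exhaustively this way.

```python
def strong_pareto_better(x, y):
    """x and y are finite truncations (lists) of two utility streams.
    Strong Pareto: x is better if no generation is worse off and at least
    one generation is strictly better off."""
    assert len(x) == len(y)
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

def finitely_equivalent(x, y):
    """Finite intergenerational equity: permuting finitely many generations
    leaves value unchanged, so two truncations that are permutations of each
    other must be ranked as equally good."""
    return sorted(x) == sorted(y)

print(strong_pareto_better([0.5, 0.7, 0.9], [0.5, 0.6, 0.9]))  # True
print(finitely_equivalent([0.1, 0.9, 0.4], [0.9, 0.4, 0.1]))   # True
```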

3.1 Diamond-Basu-Mitra Impossibility Result (1)

  1. There is no social welfare function which obeys Strong Pareto and finite intergenerational equity. This means that any sort of utilitarianism won’t work, unless we look outside the real numbers.

3.2 Zame's impossibility result (2)

  1. If an ordering obeys finite intergenerational equity over [0,1]^N, then the set of pairs of populations it can strictly rank has inner measure zero 
    1. (i.e. the set of pairs {X,Y: neither X<Y nor X>Y} has outer measure one)
  2. The existence of an ethical preference relation on [0,1]^N is independent of ZF plus a weakened version of the axiom of choice

4. Possibility Results

We’ve just shown that it’s impossible to construct or even prove the existence of any useful ethical system. But not all hope is lost!

The important idea here is that of a “subrelation”: < is a subrelation of <′ if x < y implies x <′ y.

Our arguments will work like this:

Suppose we could extend utilitarianism to the infinite case. (We don't, of course, know that we can extend utilitarianism to the infinite case. But suppose we could.) Then A, B and C must follow.

Technically: suppose utilitarianism is a subrelation of <. Then < must have properties A, B and C.

Everything in this section comes from (3). This is a great review of the literature.

4.1 Definition

  • Utilitarianism: we extend the standard total utilitarianism ordering to infinite populations in the following way: suppose there is some time T after which every generation in X is at least as well off as the corresponding generation in Y, and that the total utility in X before T is at least as great as the total utility in Y before T. Then X is at least as good as Y. (A finite spot-check of this criterion is sketched in code after this list.)
    • Note that this is not a complete ordering! In fact, as per Zame’s result above, the set of populations it can meaningfully speak about has measure zero.
  • Partial translation scale invariance: suppose that after some time T, X and Y become the same. Then we can add an arbitrary utility vector A to both X and Y without changing the ordering (i.e. X > Y ⇔ X + A > Y + A).
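
Here is a rough Python sketch of the utilitarian criterion just defined; the function names and the finite check horizon are my own illustrative choices, not anything from the papers. Since a computer can only examine finitely many generations, the tail condition is only spot-checked, which mirrors the fact that the criterion is only a partial order.

```python
def utilitarian_at_least(x, y, T, check_horizon=1000):
    """Spot-check that X is at least as good as Y for a candidate time T:
    (i) from T onward (up to a finite horizon) each generation of X is at
    least as well off as the corresponding generation of Y, and
    (ii) total utility in X before T is at least that in Y before T.
    x and y map a generation index to a utility in [0, 1]."""
    tail_ok = all(x(t) >= y(t) for t in range(T, T + check_horizon))
    head_ok = sum(x(t) for t in range(T)) >= sum(y(t) for t in range(T))
    return tail_ok and head_ok

x = lambda t: 1.0                       # every generation at 1
y = lambda t: 0.5 if t < 3 else 1.0     # first three generations at 0.5, then 1
print(utilitarian_at_least(x, y, T=3))  # True
print(utilitarian_at_least(y, x, T=3))  # False: heads compare 1.5 vs 3.0
```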

4.2 Theorem

  1. Utilitarianism is a subrelation of < if and only if < satisfies strong Pareto, finite intergenerational equity and partial translation scale invariance.
    1. This means that if we want to extend utilitarianism to the infinite case, we can’t use a social welfare function, as per the above Basu-Mitra result

4.3 Definition

  • Overtaking utilitarianism: suppose there is some point T after which the total utility of the first N generations in X is always greater than the total utility of the first N generations in Y (for all N > T). Then X is better than Y. (A finite spot-check of this criterion is sketched after this list.)
    • Note that utilitarianism is a subrelation of overtaking utilitarianism
  • Weak limiting preference: suppose that for any time T, X truncated at time T is better than Y truncated at time T. Then X is better than Y.
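
As before, a rough sketch with illustrative names: the overtaking criterion compares the partial sums of the first N generations for every N > T, which can only be spot-checked up to a finite horizon.

```python
def overtaking_better(x, y, T, check_horizon=1000):
    """Spot-check of overtaking: for every N > T (up to a finite horizon),
    the total utility of X's first N generations strictly exceeds Y's."""
    prefix = lambda u, n: sum(u(t) for t in range(n))
    return all(prefix(x, n) > prefix(y, n) for n in range(T + 1, check_horizon))

x = lambda t: 0.0 if t == 0 else 1.0             # (0, 1, 1, 1, ...)
y = lambda t: (1.0, 0.9)[t] if t < 2 else 0.0    # (1, 0.9, 0, 0, ...)
print(overtaking_better(x, y, T=2))  # True: X's partial sums overtake Y's at N = 3
```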

4.4 Theorem

  1. Overtaking utilitarianism is a subrelation of < if and only if < satisfies strong Pareto, finite intergenerational equity, partial translation scale invariance, and weak limiting preference

4.5 Definition

  • Discounted utilitarianism: the utility of a population is the sum of its components, discounted by how far away in time they are (i.e. generation t's utility is weighted by β^t for some discount factor 0 < β < 1; sketched in code after this list)
  • Separability:
    • Separable present: if you can improve the first T generations without affecting the rest, you should
    • Separable future: if you can improve everything after the first T generations without affecting the rest, you should
  • Stationarity: preferences are time invariant
  • Weak sensitivity: for any utility vector, we can modify its first generation somehow to make it better or worse
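
A minimal sketch of the discounted criterion, with an arbitrary illustrative discount factor and truncation horizon (with utilities bounded in [0, 1], the truncation error is at most β^horizon / (1 − β)):

```python
def discounted_utility(u, beta=0.5, horizon=1000):
    """Discounted utilitarianism: value a stream as sum_t beta^t * u(t),
    truncated at a finite horizon for computation."""
    return sum((beta ** t) * u(t) for t in range(horizon))

print(discounted_utility(lambda t: 1.0))                    # ~2.0 = 1 / (1 - 0.5)
print(discounted_utility(lambda t: 0.0 if t < 5 else 1.0))  # ~0.0625: the far future barely counts
```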

4.6 Theorem

  1. The only continuous, monotonic relation which obeys weak sensitivity, stationarity, and separability is discounted utilitarianism.

4.7 Definition

  • Dictatorship of the present: for any comparison between two populations, there is some time T after which changing the utilities of later generations cannot change which population is preferred

4.8 Theorem

  1. Discounted utilitarianism results in a dictatorship of the present. (Remember that each generation’s utility is assumed to be bounded!)
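
To see why bounded utility forces this, here is the worked arithmetic for a discount factor of 1/2, the same example discussed in the comments below: the first three generations can create a gap of up to 1.75, while the entire infinite tail, even at the maximum utility of 1, can contribute at most 0.25, so nothing after generation three can reverse the comparison.

```python
# Worked example: utilities bounded in [0, 1], discount factor 1/2.
beta = 0.5
head_gap = sum(beta ** t for t in range(3))        # 1.75: (1,1,1,...) vs (0,0,0,...) over the first three generations
max_tail = sum(beta ** t for t in range(3, 200))   # ~0.25: the most the entire infinite tail can ever add
print(head_gap, max_tail)  # 1.75 and ~0.25 -- the first three generations "dictate" the outcome
```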

4.9 Definition

  • Sustainable preference: a continuous ordering which doesn’t have a dictatorship of the present but follows strong Pareto and separability.

4.10 Theorem

  1. The only ordering which is sustainable is to take discounted utilitarianism and add an “asymptotic” part which ensures that infinitely long changes in utility matter. (Of course, finite changes in utility still won't matter.)
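
The following rough numerical sketch shows the shape of such an ordering. It is only an approximation: the true asymptotic part is defined via a limit that no finite computation can evaluate, so the tail average below merely stands in for it, and the weight theta, the discount factor, and the window sizes are arbitrary illustrative choices.

```python
def sustainable_value(u, beta=0.5, theta=0.8, tail_start=10_000, tail_len=10_000):
    """Illustrative 'discounted part + asymptotic part' valuation.
    The discounted part is blind to the far future; the (approximated)
    asymptotic part reacts to changes that affect infinitely many generations."""
    discounted = sum((beta ** t) * u(t) for t in range(tail_start))
    tail_average = sum(u(t) for t in range(tail_start, tail_start + tail_len)) / tail_len
    return theta * discounted + (1 - theta) * tail_average

print(sustainable_value(lambda t: 1.0))                      # ~1.8
print(sustainable_value(lambda t: 1.0 if t < 100 else 0.0))  # ~1.6: the asymptotic part notices the worse far future
```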

5. Conclusion

I hope I've convinced you that there's a "there" there: infinite ethics is something that people can make progress on, and it seems that most of the progress is being made in the field of sustainable development.

Fun fact: the author of the last theorem (the one which defined "sustainable") was one of the lead economists on the Kyoto Protocol. Who says infinite ethics is impractical?

6. References

 

  1. Basu, Kaushik, and Tapan Mitra. "Aggregating infinite utility streams with intergenerational equity: the impossibility of being Paretian." Econometrica 71.5 (2003): 1557-1563. http://folk.uio.no/gasheim/zB%26M2003.pdf
  2. Zame, William R. "Can intergenerational equity be operationalized?" (2007). https://tspace.library.utoronto.ca/bitstream/1807/9745/1/1204.pdf
  3. Asheim, Geir B. "Intergenerational equity." Annual Review of Economics 2.1 (2010): 197-222. http://folk.uio.no/gasheim/A-ARE10.pdf

 

Comments

The blog post links "Ridiculous math things which ethics shouldn't depend on but does" and "Kill the young people" are dead. You can find archived versions of the posts at the following links:

[anonymous]

"The universe may very well be infinite, and hence contain an infinite amount of happiness and sadness. This causes several problems for altruists; for example: we can plausibly only affect a finite subset of the universe, and an infinite quantity of happiness is unchanged by the addition or subtraction of a finite amount of happiness. This would imply that all forms of altruism are equally ineffective."

I have no particular objection to those, unlike me, interested in aggregative ethical dilemmas, but I think it at least preferable that effective... (read more)

Ben_West🔸
The problems with extending standard total utilitarianism to the infinite case are the easiest to understand, which is why I put that in the summary, but I don't think most of the article was about that. For example, the fact that you can't have intergenerational equity (Thm 3.2.1) seems pretty important no matter what your philosophical bent.
[anonymous]
A minuscule proportion of political philosophy has concerned itself with aggregative ethics, and in my being a relatively deep hermeneutical contextualist, I take what is important to them to be what they thought to be important to them, and thus your statement - that intergenerational equity is perennially important - as patently wrong. Let alone people not formally trained in philosophy. The fact I have to belabour that most of those interested in charitable giving are not by implication automatically interested in the 'infinity problem' is exactly demonstrative of my initial point, anyhow, i.e. of projecting highly controversial ethical theories, and obscure concerns internal to them, as obviously constitutive of, or setting the agenda for, effective altruism.
RyanCarey
This seems reasonable to me. Assuming aggregative ethics only and examining niche issues within it are probably not diplomatically ideal for this site. Especially when one could feasibly get just as much attention for this kind of post on LessWrong. That'd suggest that if people want to write more material like this, it might fit better elsewhere. What do others think?
Lila
I found the OP useful. If it were on LW, I probably wouldn't have seen it. I don't go on LW because there's a lot of stuff I'm not interested in compared to what I am interested in (ethics). Is there a way to change privacy settings so that certain posts are only visible to people who sign in or something?
RyanCarey
Thanks for the data point! Sadly the forum doesn't have that kind of feature. Peter and Tom are starting to work through a few minor bugs and feature requests but wouldn't be able to implement something like that in the foreseeable future. I can see why it would be convenient for utilitarian EAs to read this kind of material here. But equally, there's a couple of issues with posting stuff about consequentialism. First, it's more abstract than seems optimal, and secondly, it's presently not balanced with discussion about other systems of ethics. As you're already implying with the filtering idea, if the EA Forum became an EA/Consequentialism Forum, that would be a real change that lots of people would not want. Would you have found this post more easily if it was posted on Philosophy for Programmers and linked from the Utilitarianism Facebook group?
Lila
I'm trying to use Facebook less, and I don't check the utilitarianism group, since it seems to have fallen into disuse. I have to disagree that consequentialism isn't required for EA. Certain EA views (like the shallow pond scenario) could be developed through non-consequentialist theories. But the E part of EA is about quantification and helping as many beings as possible. If that's not consequentialism, I don't know what is. Maybe some non-utilitarian consequentialist theories are being neglected. But the OP could, I think, be just as easily applied to any consequentialism.
[anonymous]
The 'E' relates to efficiency, usually thought of as instrumental rationality, which is to say, the ability to conform one's means with one's ends. That being the case, it is entirely apart from the (moral or non-moral) end by which it is possessed. I have reasons for charitable giving independent of utilitarianism, for example, and thus find the movement's technical analysis of the instrumental rationality of giving highly valuable.
RyanCarey
You can believe that you want to help people a lot, and that it's a virtue to investigate where those funds are going, so you want to be a good person by picking charities that help lots of people. Whether there's infinite people is irrelevant to whether you're a virtuous helper. You might like giving to givewell just because, and not feel the need for recourse to any sense of morality. The other problem is that there's going to be some optimal level of abstraction that most of the conversation at the forum could be at in order to encourage people to actually get things done, and I just don't think that philosophical analysis of consequentialism is the optimal for most people. I've been there and discussed those issues a lot for years, and I'd just like to move past it and actually do things, y'know :p Still happy for Ben to think about it because he's smart, but it's not for everyone!
Giles
I'm curious what optimally practical philosophy looks like. This chart from Diego Caleiro appears to show which philosophical considerations have actually changed what people are working on: http://effective-altruism.com/ea/b2/open_thread_5/1fe Also, I know that I'd really like an expected-utilons-per-dollar calculator for different organizations to help determine where to give money to, which surely involves a lot of philosophy.
RyanCarey
Making an expected-utilons-per-dollar calculator is an interesting project. Cause prioritisation in the broader sense can obviously fit on this forum and for that there's also: 80,000 Hours, Cause prioritisation wiki and Open Philanthropy Project. If you're going for max number of years of utility per dollar, then you'll be looking at x-risk, as it's the cause that most credibly claims an impact that extends far in time (there aren't yet credible "trajectory changes"). That leaves CSER, MIRI, FLI, FHI and GCRI, of which CSER is currently in a fledgling state with only tens of thousands of dollars of funding, but applying for million dollar grants, so it seems to be best-leveraged.
Brian_Tomasik
I strongly disagree. :) It's obvious that, say, the values of society may make a huge difference to the far future if (as seems likely) early AI uses goal preservation. (Even if the first version of AI doesn't, it should soon move in that direction.) Depending how one defines "x-risk", many ways of shaping AI takeoffs are not work on extinction risk per se but concern the nature of the post-human world that emerges. For instance, whether the takeoff is unipolar or multipolar, what kinds of value loading is used, and how political power is divided. These can all have huge impacts on the outcome without changing the fact of whether or not the galaxy gets colonized.
RyanCarey
I agree. I'd be clearer if I said that I think the only credible trajectory changes address the circumstances of catastrophically risky situations e.g. the period where AI takes off, and are managed my organisations that think about x-risk.
Chris Leong
These issues are relevant for any ethical system that assigns non-zero weight to the consequences.

One thing I liked about this post is that it was written in English, instead of math symbols. I find it extremely hard to read a series of equations without someone explaining them verbally. Overall I thought the clarity was fairly good.

(Of course, finite changes in utility still won't matter.)

The discounted utilitarianism term should still be sensitive to those, at least enough to break ties. Discounted utilitarianism satisfies strong Pareto, after all.

Thanks for the summary. :)

I don't understand why they're working on infinite vectors of future populations, since it looks very likely that life will end after a finite length of time into the future (except for Boltzmann brains). Maybe they're thinking of the infinity as extended in space rather than time? And of course, in that case it becomes arbitrary where the starting point is.

we can plausibly only affect a finite subset of the universe, and an infinite quantity of happiness is unchanged by the addition or subtraction of a finite amount of happiness

... (read more)
Ben_West🔸
Thanks Brian – insightful as always.
1. It might be the case that life will end after time T. But that's different than saying it doesn't matter whether life ends after time T, which a truncated utility function would say.
2. (But of course see theorem 4.8.1 above)
3. Thanks for the insight about multiverses – I haven't thought much about it. Is what you say only true in a level one multiverse?
Brian_Tomasik
1) Fair enough. Also, there's some chance we can affect Boltzmann brains that will exist indefinitely far into the future. (more discussion)
3) I added a new final paragraph to this section about that. Short answer is that I think it works for any of Levels I to III, and even with Level IV it depends on your philosophy of mathematics. (Let me know if you see errors with my facts or reasoning.)
Ben_West🔸
1) interesting, thanks!
3) I don't think I know enough about physics to meaningfully comment. It sounds like you are disagreeing with the statement "we can plausibly only affect a finite subset of the universe"? And I guess more generally if physics predicts a multiverse of order w_i, you claim that we can affect w_i utils (because there are w_i copies of us)?
Brian_Tomasik
Yes, I was objecting to the claim that "we can plausibly only affect a finite subset of the universe". Of course, I guess it remains plausible that we can only affect a finite subset; I just wouldn't say it's highly probable. Yes, unless the type of multiverse predicts that the measure of copies of algorithms like ours is zero. That doesn't seem true of Levels I to III. Also, if one uses my (speculative) physics-sampling assumption for anthropics, a hypothesis that predicts measure zero for copies of ourselves has probability zero. On the other hand, the self-indication assumption would go hog wild for a huge Level IV multiverse.

This is really interesting stuff, and thanks for the references.

A few comments:

It'd be nice to clarify what: "finite intergenerational equity over [0,1]^N" means (specifically, the "over [0,1]^N" bit).

Why isn't the sequence 1,1,1,... a counter-example to Thm4.8 (dictatorship of the present)? I'm imagining exponential discounting, e.g. of 1/2 so the welfare function of this should return 2 (but a different number if u_t is changed, for any t).

Ben_West🔸
Thanks for the comments! Regarding your second question: the idea is that if x is better than y, then there is a point in time after which improvements to y, no matter how great, will never make y better than x. So in your example where there is a constant discount rate of one half: (1, 1, 1, (something)) will always be preferred to (0, 0, 0, (something else)), no matter what we put in for (something) and (something else). In this sense, the first three generations "dictate" the utility function. As you point out, there is no single time at which dictatorship kicks in, it will depend on the two vectors you are comparing and the discount rate.

I am curious about your definitions: intergenerational equity and finite intergenerational equity. I am aware of that some literature suggests that finite permutations are not enough to ensure equity among an infinite number of generations. The quality of the argumentation in this literature is often not so good. Do you have a reference that gives a convincing argument for why your notion of intergenerational equity is appropriate and/or desirable? I hope this does not sound like I am questioning whether your definition is consistent with the literature: I am only asking out of interest.

Ben_West🔸
Good question. It's easiest to imagine the one-dimensional spatial case like (..., L2, L1, me, R1, R2, ...), where {Li} are people to my left and {Ri} are those to my right. If I turn 180°, this permutes the vector to (..., R1, me, L1, ...), which obviously permutes an infinite number of elements, but seems morally unobjectionable.
Lawrence
Thank you for the example. I have two initial comments and possibly more if you are interested. 1. In all of the literature on the problem, the sequences that we compare specify social states. When we compare x=(x_1,x_2,...) and y=(y_1,y_2,...) (or, as in your example, x=(....,x_0,x_1,x_2,...) and y=(...,y_0,y_1,y_2,...)), we are doing it with the interpretation x_t and y_t give the utility of the same individual/generation in the two possible social states. For the two sequences in your example, it does not seem to be the case that x_t and y_t give the utility of the same individual in two possible states. Rather, it seems that we are re-indexing the individuals. 2. I agree that moral preferences should generally be invariant to re-indexing, at least in a spatial context (as opposed to an intertermporal context). Let us therefore modify your example so that we have specified utilities x_t,y_t, where t ranges over the integers and x_t and y_t represent the utilities of people located at positions on a doubly infinite line. Now I agree that an ethical preference relation should be invariant under some (and possibly all) infinite permutations IF the permutation is performed to both sequences. But it is hard to give an argument for why we should have invariance under general permutations of only one stream. The example is still unsatisfactory for two reasons. (i) since we are talking about intergenerational equity, the t in x_t should be time, not points in space where individuals live at the same time: it is not clear that the two cases are equivalent. (They may in fact be very different.) (ii) in almost all of the literature (in particular, in all three references in the original post), we consider one-sided sequences, indexed by time starting today and to the infinite future. Are you aware of example in this context?
Ben_West🔸
Thank you for the thoughtful comment. This is true. I think an important unstated assumption is that you only need to know that someone has utility x, and you shouldn't care who that person is. I'm not sure what the two sequences you are referring to are. Anonymity constraints simply say that if y is a permutation of x, then x~y. It is a true and insightful remark that whether we consider vectors to be infinite or doubly infinite makes a difference. To my mind, the use of vectors is misleading. What it means to not care about temporal location is really just that you treat populations as sets (not vectors) and so anonymity assumptions aren't really required. I guess you could phrase that another way and say that if you don't believe in infinite anonymity, then you believe that temporal location matters. This disagrees with general utilitarian beliefs. Nick Bostrom talks about this more in section 2.2 of his paper linked above. A more mathy way that's helpful for me is to just remember that the relation should be continuous. Say s_n(x) is a permutation of _n_ components. By finite anonymity we have that x~s_n(x) for any finite n. If lim {n -> infinity} s_n = y, yet y was morally different from x, the relation is discontinuous and this would be a very odd result.
Lawrence
I would not only say that "that you only need to know that someone has utility x, and you shouldn't care who that person is" is an unstated assumption. I would say that it is the very idea that anonymity intends to formalize. The question that I had and still have is whether you know of any arguments for why infinite anonymity is suitable to operationalize this idea. Regarding the use of sequences: you can't just look at sets. If you do, all nontrivial examples with utilities that are either 0 or 1 become equivalent. You don't have to use sequences, but you need (in the notation of Vallentyne and Kagan (1997)), a set of "locations", a set of real numbers where utility takes values, and a map from the location set to the utility set. Regarding permutations of one or two sequences. One form of anonymity says that x ~ y if there is a permutation, say pi, (in some specified class) that takes x to y. Another (sometimes called relative anonymity) says that if x is at least as good as y, then pi(x) is at least as good as pi(y). These two notions of anonymity are not generally the same. There are certainly settings where the fullblown version of the relative anonymity becomes a basic rationality requirement. This would be the case with people lined up on an infinite line (at the same point in time). But it is not hard to see its inappropropriateness in the intertemporal context: you would have to rank the following two sequences (periodic with period 1000) to be equivalent or non-comparable x=(1,1,....,1,0,1,1,...,1,0,1,1,...,1,......) y=(0,0,....,0,1,0,0,...,0,1,0,0,...,0,......) This connects to whether denying infinite anonymity implies that "temporal location matters". If x and y above are two possible futures for the same infinite-horizon society, then I think that any utilitarian should be able to rank x above y without having to be critisized for caring about temporal location. Do you agree? For those who do not, equity in the intertemporal setting is the same th
Ben_West🔸
Maybe I am missing something, but it seems obvious to me. Here is my thought process; perhaps you can tell me what I am overlooking. For simplicity, say that A is the assumption that we shouldn't care who people are, and IA is the infinite anonymity assumption. We wish to show A ⇔ IA.
1. Suppose A. Observe that any permutation of people can't change the outcome, because it's not changing any information which is relevant to the decision (as per assumption A). Thus we have IA.
2. Suppose IA. Observe that it's impossible to care about who people are, because by assumption they are all considered equal. Thus we have A.
3. Hence A ⇔ IA.
These seem so obviously similar in my mind that my "proof" isn't very insightful… But maybe you can point out to me where I am going wrong.
I hadn't heard about this – thanks! Do you have a source? Google scholar didn't find much. In your above example is the pi in pi(X) the same as the pi in pi(y)? I guess it must be because otherwise these two types of anonymity wouldn't be different, but that seems weird to me.
I certainly understand the intuition, but I'm not sure I fully agree with it. The reason I think that x is better than y is because it seems to me that x is a Pareto improvement. But it's really not – there is no generation in x who is better off than another generation in y (under a suitable relabeling of the generations). (0,1,0,1,0,1,...) and (1,0,1,0,1,0,...) come to mind.
Lawrence
The problem in your argument is the sentence "...any permutation of people can't change the outcome...". For example: what does "any permutation" mean? Should the permutation be applied to both sequences? In a finite context, these questions would not matter. In the infinite-horizon context, you can make mistakes if you are not careful. People who write on the subject do make mistakes all the time. To illustrate, let us say that I think that a suitable notion of anonymity is FA: for any two people p1 and p2, p1's utility is worth just as much as p2's. Then I can "prove" that A ⇔ FA by your method. The A -> FA direction is the same. For FA -> A, observe that if for any two people p1 and p2, p1's utility is worth just as much as p2's, then it is not possible to care about who people are. This "proof" was not meant to illustrate anything besides the fact that if we are not careful, we will be wasting our time.
I did not get a clear answer to my question regarding the two (intergenerational) streams with period 1000: x=(1,1,...,1,0,1,1,...) and y=(0,0,...,0,1,0,0,...). Here x does not Pareto-dominate y.
Regarding (0,1,0,...) and (1,0,1,...): I am familiar with this example from some of the literature. Recall in the first post that I wrote that the argumentation in much of the literature is not so good? This is the literature that I meant. I was hoping for more.
Lawrence
I forgot the reference for relative anonymity: See the paper by Asheim, d'Aspremont and Banerjee (J. Math. Econ., 2010) and its references.
Ben_West🔸
Fair enough. Let me phrase it this way: suppose you were blinded to the location of people in time. Do you agree that infinite anonymity would hold?
Lawrence
I will try to make the question more specific and then answer it. Suppose you are given two sequences x=(x_1,x_2,…) and y=(y_1,y_2,…) and that you are told that x_t is not necessarily the utility of generation t, but that it could be the utility of some other generation. Should your judgements then be invariant under infinite permutations? Well, it depends. Suppose I know that x_t and y_t is the utility of the same generation – but not necessarily of generation t. Then I would still say that x is better than y if x_t>y_t for every t. Infinite anonymity in its strongest form (the one you called intergenerational equity) does not allow you to make such judgements. (See my response to your second question below.) In this case I would agree to the strongest form of relative anonymity however. If I do not know that x_t and y_t give the utility of the same generation, then I would agree to infinite anonymity. So the answer is that sure, as you change the structure of the problem, different invariance conditions will become appropriate.
Ben_West🔸
Thank you for the clarification and references – it took me a few days to read and understand those papers. I don't think there are any strong ways in which we disagree. Prima facie, prioritizing the lives of older (or younger) people seems wrong, so statements like "I know that x_t and y_t is the utility of the same generation" don't seem like they should influence your value judgments. However, lots of bizarre things occur if we act that way, so in reflective equilibrium we may wish to prioritize the lives of older people.
Lawrence
Wait a minute. Why should knowing that x_t and y_t are the utility of the same generation (in two different social states) not influence value judgements? There is certainly not anything unethical about that, and this is true also in a finite context. Let us say that society consists of three agents. Say that you are not necessarily a utilitarian and that you are given a choice between x=(1,3,4) and y=(0,2,3). You could say that x is better than y since all three members of society prefers state x to state y. But this assumes that you know that x_t and y_t give the utility of the same agent in the two states. If you did not know this, then things would be quite different. Do you see what I mean?
Ben_West🔸
No, you would know that there is a permutation of x which Pareto dominates y. This is enough for you to say that x>y. I understand and accept your point though that people are not in practice selfless, and so if people wonder not "will someone be better off" but "will I specifically be better off" then (obviously) you can't have anonymity.
Lawrence
Things would not be all that different with three agents. Sorry. But let me ask you: when you apply Suppes' grading principle to infer that e.g. x=(1,3,4) is better than y=(2,0,3) since there is a permutation x' of x with x'>y, would you not say that you are relying on the idea that everyone is better off to conclude that x' is better than y? I agree of course that criteria that depend on which state a specific person prefers are bad, and they cannot give us anonymity.
Ben_West🔸
Thanks Lawrence, this is a good point. I agree that the immediate justification for the principle is "everyone is better off", but as you correctly point out that implies knowing "identifying" information. It is hard for me to justify this on consequentialist grounds though. Do you know of any justifications? Probably most consequentialist would just say that it increases total and average utility and leave it at that.
Lawrence
I am not sure what you mean by consequentialist grounds. Feel free to expand if you can. I am actually writing something on the topic that we have been discussing. If you are interested I can send it to you when it is submittable. (This may take several months.)
Ben_West🔸
Good question; now that I try to explain it I think my definition of "consequentialist" was poorly defined. I have changed my mind and agree with you – the argument for finite anonymity is weaker than I thought. Good to know! I would be interested to hear your insights on these difficult problems, if you feel like sharing.
Ben_West🔸
By the way, one version of what you might be saying is: "both infinite anonymity and the overtaking criterion seem like reasonable conditions. But it turns out that they conflict, and the overtaking criterion seems more reasonable, so we should drop infinite anonymity." I would agree with that sentiment.
Lawrence
Forget overtaking. Infinite anonymity (in its strongest form – the one you called intergenerational equity) is incompatible with the following requirement: if everyone is better off in state x=(x_1,x_2,..) than in state y=(y_1,y_2,..), then x is better than y. See e.g. the paper by Fleurbaey and Michel (2003).

Some kind of nitpicky comments:

3.2: Note that the definition of intergenerational equity in Zame's paper is what you call finite intergenerational equity (and his definition of an ethical preference relation involves the same difference), so his results are actually more general than what you have here. Also, I don't think that "almost always we can't tell which of two populations is better" is an accurate plain-English translation of "{X,Y: neither X<Y nor X>Y} has outer measure one", because we don't know anything about the inner measure. In f... (read more)

Ben_West🔸
Thanks!
3.2: good catch – I knew I was gonna mess those up for some paper. I'm not sure how to talk about the measurability result though; any thoughts on how to translate it?
4.3: basically, yeah. It's easier for me to think about it just as a truncation though
4.5: yes you're right – updated
4.7: yes, that's what I mean. Introducing quantifiers seems to make things a lot more complicated though
AlexMennen
Unfortunately, I can't think of a nice ordinary-language way of talking about such nonmeasurability results.

The universe may very well be infinite, and hence contain an infinite amount of happiness and sadness. This causes several problems for altruists

This topic came up on the 80k blog a while ago and I found it utterly ridiculous then and I find it utterly ridiculous now. The possibility of an infinite amount of happiness outside our light-cone (!) does not pose problems for altruists except insofar as they write philosophy textbooks and have to spend a paragraph explaining that, if mathematically necessary, we only count up utilities in some suitably loca... (read more)

Ben_West🔸
Thanks for the feedback. Couple thoughts:
1. I actually agree with you that most people shouldn't be worried about this (hence my disclaimer that this is not for a general audience). But that doesn't mean no one should care about it.
2. Whether we are concerned about an infinite amount of time or an infinite amount of space doesn't really seem relevant to me at a mathematical level, hence why I grouped them together.
3. As per (1), it might not be a good use of your time to worry about this. But if it is, I would encourage you to read the paper of Nick Bostrom's that I linked above, since I think "just look in a local region" is too flippant. E.g. there may be an infinite number of Everett branches we should care about, even if we restrict our attention to earth.
pappubahry
Hopefully this is my last comment in this thread, since I don't think there's much more I have to say after this.
1. I don't really mind if people are working on these problems, but it's a looooong way from effective altruism.
2. Taking into account life-forms outside our observable universe for our moral theories is just absurd. Modelling our actions as affecting an infinite number of our descendants feels a lot more reasonable to me. (I don't know if it's useful to do this, but it doesn't seem obviously stupid.)
3. Many-worlds is even further away from effective altruism. (And quantum probabilities sum to 1 anyway, so there's a natural way to weight all the branches if you want to start shooting people if and only if a photon travels through a particular slit and interacts with a detector, ....)
Lila
I think the relevance of this post is that it tentatively endorses some type of time-discounting (and also space-discounting?) in utilitarianism. This could be relevant to considerations of the far future, which many EAs think is very important. Though presumably we could make the asymptotic part of the function as far away as we like, so we shouldn't run into any asymptotic issues?
AGB 🔸
"No-one responds to the drowning child by saying, "well there might be an infinite number of sentient life-forms out there, so it doesn't matter if the child drowns or I damage my suit". It is just not a consideration." "It is not an issue for altruists otherwise -- everyone saves the drowning child." I don't understand what you are saying here. Are you claiming that because 'everyone' does do X or because 'noone' does not do X (putting those in quotation marks because I presume you don't literally mean what you wrote, rather you mean the 'vast majority of people would/would not do X'), X must be morally correct? That strikes me as...problematic.
pappubahry
Letting the child drown in the hope that a) there's an infinite number of life-forms outside our observable universe, and b) that the correct moral theory does not simply require counting utilities (or whatever) in some local region strikes me as far more problematic. More generally, letting the child drown is a reductio of whatever moral system led to that conclusion.
Pablo
Population ethics (including infinite ethics) is replete with impossibility theorems showing that no moral theory can satisfy all of our considered intuitions. (See this paper for an overview.) So you cannot simply point to a counterintuitive implication and claim that it disproves the theory from which it follows. If that procedure was followed consistently, it would disprove all moral theories.
pappubahry
I consider this a reason to not strictly adhere to any single moral theory.
Pablo
This statement is ambiguous. It either means that you adhere to a hybrid theory made up of parts of different moral theories, or that you don't adhere to a moral theory at all. If you adhere to a hybrid moral theory, this theory is itself subject to the impossibility theorems, so it, too, will have counterintuitive implications. If you adhere to no theory at all, then nothing is right or wrong; a fortiori, not rescuing the child isn't wrong, and a theory's implying that not rescuing the child isn't wrong cannot therefore be a reason for rejecting this theory.
pappubahry
OK -- I mean the hybrid theory -- but I see two possibilities (I don't think it's worth my time reading up on this subject enough to make sure what I mean matches exactly the terminology of the paper(s) you refer to):
* In my hybridisation, I've already sacrificed some intuitive principles (improving total welfare versus respecting individual rights, say), by weighing up competing intuitions.
* Whatever counter-intuitive implications my mish-mash, sometimes fuzzily defined hybrid theory has, they have been pushed into the realm of "what philosophers can write papers on", rather than what is actually important. The repugnant conclusion falls under this category.
Whichever way it works out, I stick resolutely to saving the drowning child.
AGB 🔸
Can you (or anyone else who feels similarly) clarify the sense in which you consider the repugnant conclusion 'not actually important', but the drowning child example 'important'? Both are hypotheticals, both are trying to highlight contradictions in our intuitions about the world, both require you to either (a) put up with the fact that your theory is self-contradictory or (b) accept something that most people would consider unusual/counter-intuitive.
pappubahry
Because children die of preventable diseases, but no-one creates arbitrarily large populations of people with just-better-than-nothing well-being.
Pablo
I'm sorry, but I don't understand this reply. Suppose you can in fact create arbitrarily large populations of people with lives barely worth living. Some moral theories would then imply that this is what you should do. If you find this implication repugnant, you should also find it repugnant that a theory would have that implication if you found yourself in that position, even if as a matter of fact you don't. As an analogy, consider Kant's theory, which implies that a man who is hiding a Jewish family should tell the truth when Nazi officials question him about it. It would be strange to defend Kant's theory by alleging that, in fact, no actual person ever found himself in that situation. What matters is that the situation is possible, not whether the situation is actual. But maybe I'm misunderstanding what you meant by "not actually important"?
RyanCarey
Well, you can argue that the hypothetical situation is sufficiently exotic that you don't expect your intuitions to be reliable there. It's actually pretty reasonable to me to say that the shallow pond example is simple, realistic and important, compared to the repugnant conclusion, which is abstract, unusual, unreliable and hence useless.
pappubahry
I reject the implication inside the curly brackets that I added. I don't care what would happen to my moral theory if creating these large populations becomes possible; in the unlikely event that I'm still around when it becomes relevant, I'm happy to leave it to future-me to patch up my moral theory in a way that future-me deems appropriate. I guess I could attach some sort of plausibility score to moral thought experiments. Rescuing a drowning child gets a score near 1, since rescue situations really do happen and it's just a matter of detail about how much it costs the rescuer. As applied to donating to charity, the score might have to be lowered a little to account for how donating to charity isn't an exact match for the child in the pond. The Nazi officials case... seems pretty plausible to me? Like didn't that actually happen? Something of a more intermediate case between the drowning child and creating large populations would be the idea of murdering someone to harvest their organs. This is feasible today, but irrelevant since no-one is altruistically murdering people for organs. I think it's reasonable for someone previously a pure utilitarian to respond with, "Alright, my earlier utilitarianism fails in this case, but it works in lots of other places, so I'll continue to use it elsewhere, without claiming that it's a complete moral theory." (And if they want to analyse it really closely and work out the boundaries of when killing one person to save others is moral and when not, then that's also a reasonable response.) A thought experiment involving the creation of large populations gets a plausibility score near zero.
Pablo
I find your position unclear. On the one hand, you suggest that thought experiments involving situations that aren't actual don't constitute a problem for a theory (first quote above). On the other hand, you imply that they do constitute a problem, which is addressed by restricting the scope of the theory so that it doesn't apply to such situations (second quote above). Could you clarify?
pappubahry
Maybe I've misinterpreted 'repugnant' here? I thought it basically meant "bad", but Google tells me that a second definition is "in conflict or incompatible with", and now that I know this, I'm guessing that it's the latter definition that you are using for 'repugnant'. But I'm finding it difficult to make sense of it all (it carries a really strong negative connotation for me, and I'm not sure if it's supposed to in this context -- there might be nuances that I'm missing), so I'll try to describe my position using other words.
If my moral theory, when applied to some highly unrealistic thought experiment (which doesn't have some clear analog to something more realistic), results in a conclusion that I really don't like, then:
* I accept that my moral theory is not a complete and correct theory; and
* this is not something that bothers me at all.
If the thought experiment ever becomes relevant, I'll worry about how to patch up the theory then. In the meantime, I'll carry on trying to live by my moral theory.
Pablo
Thank you for the clarification. I think I understand your position now. Why doesn't it bother you at all that a theory has counterintuitive implications in counterfactual scenarios? Shouldn't this lower your confidence in the theory? After all, our justification for believing a moral theory seems to turn on (1) the theory's simplicity and (2) the degree to which it fits our intuitions. When you learn that your theory has counterintuitive implications, this causes you to either restrict the scope of the theory, and thus make it more complex, or recognize that it doesn't fit the data as well as you thought before. In either case, it seems you should update by believing the theory to a lesser degree.
pappubahry
I think my disagreement is mostly on (1) -- I expect that a correct moral theory would be horrendously complicated. I certainly can't reduce my moral theory to some simple set of principles: there are many realistic circumstances where my principles clash (individual rights versus greater good, say, or plenty of legal battles where it's not clear what a moral decision would be), and I don't know of any simple rules to decide what principles I deem more important in which situations. Certainly there are many realistic problems which I think could go either way. But I agree that all other things equal, simplicity is a good feature to have, and enough simplicity might sometimes outweigh intuition. Perhaps, once future-me carefully consider enormous aggregative ethics problems, I will have an insight that allows a drastically simplified moral theory. The new theory would solve the repugnant conclusion (whatever I think 'repugnant' means in this future!). Applied to present-me's day-to-day problems, such a simplified theory will likely give slightly different answers to what I think today: maybe the uncertainty I have today about certain court cases would be solved by one of the principles that future-me thinks of. But I don't think the answers will change a lot. I think my current moral theory basically gives appropriate answers (sometimes uncertain ones) to my problems today. There's wiggle-room in places, but there are also some really solid intuitions that I don't expect future-me to sacrifice. Rescuing the drowning child (at least when I live in a world without the ability to create large numbers of sentient beings!) would be one of these.
[anonymous]
I think it quite obvious that if one does not observe a given theory they are not thereby disarmed from criticism of such a theory, similarly, a rejection of moralism is not equivalent with your imputed upshot that "nothing is right or wrong" (although we can imagine cases in which that could be so). In the case of the former, critiquing a theory adhering to but contradicting intuitionistic premises is a straightforward instance of immanent critique. In the case of the latter, quite famously, neither Bernard Williams nor Raymond Geuss had any truck with moralism, yet clearly were not 'relativists'.
Gregory Lewis🔸
I sympathize with this. It seems likely that the accessible population of our actions is finite, so I'm not sure one need to necessarily worried about what happens in the infinite case. I'm unworried if my impact on earth across its future is significantly positive, yet the answer of whether I've made the (possibly infinite) universe better is undefined. However, one frustration to this tactic is that infinitarian concerns can 'slip in' whenever afforded a non-zero credence. So although given our best physics it is overwhelmingly likely the morally relevant domain of our actions will be constrained by a lightcone only finitely extended in the later-than direction (because of heat death, proton decay, etc.), we should assign some non-zero credence our best physics will be mistaken: perhaps life-permitting conditions could continue indefinitely, or we could wring out life asymptotically faster than the second law, etc. These 'infinite outcomes' swamp the expected value calculation, and so infinitarian worries loom large.
RyanCarey
Putting to one side my bias towards aggregative consequentialism, someone has to say that to anyone except a radical consequentialist, the classic 'hope physics is broken' example does make you seem crazy and consequentialism seem wrong! :p
Larks
Or perhaps uncertainty to the size of the universe might lead to similar worries, if we merely know it is finite, but do not have a bound.
Pablo
The text immediately following the passage you quoted reads: This implies that the quantity of happiness in the universe stays the same after you save the drowning child. So if your reason for saving the child is to make the world a better place, you should be troubled by this implication.
pappubahry
That is precisely the argument that I maintain is only a problem for people who want to write philosophy textbooks, and even then one that should only take a paragraph to tidy up. It is not an issue for altruists otherwise -- everyone saves the drowning child.

In the Basu-Mitra result, when you use the term "Pareto", do you mean strong or weak?

I found the section on possibility results confusing.

In this sentence you appear to use X and Y to refer to properties: "Basically, we can show that if < were a “reasonable” preference relation that had property X then it must also have property Y. (of course, we cannot show that < is reasonable.)"

But here you appear to use X and Y to refer to utility vectors: "For example, say that X<Y if both X and Y are finite and the total utility of... (read more)

Ben_West🔸
Updated, thanks!
[anonymous]

Sorry, did you mean to save this in your drafts?

[This comment is no longer endorsed by its author]