stijnbruers

Comments

Reducing existential risks or wild animal suffering?

Thanks for the paper! Concerning the moral patients and mice: they indeed lack the capability to determine their reference values (critical levels) and to express their utility functions (perhaps we can derive those from their revealed preferences). So those mice do not actually have a preference for a critical level or for a population-ethical theory: they don't have a preference for total utilitarianism or negative utilitarianism or whatever. That could mean that we can choose a critical level for them, and hence the population-ethical implications, and those mice cannot complain about our choices if they are indifferent. If we strongly want total utilitarianism, and hence a zero critical level, fine: then we can say that those mice also have a zero critical level. But if we want to avoid the sadistic repugnant conclusion in the example with the mice, that's fine too: then we can set the critical levels of those mice higher, such that we choose the situation where those quadrillions of mice don't exist. Even the mice who do exist cannot complain about our choice for the non-existence of those extra quadrillion mice, because they are indifferent to that choice.
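A minimal sketch of the mechanism described above, with made-up numbers: a situation is ranked by the sum of relative utilities, i.e. each individual's welfare minus the critical level chosen for them. The populations, welfare values and critical levels are purely illustrative assumptions.

```python
# Rank situations by total relative utility (welfare minus critical level).
def total_relative_utility(people):
    """people: list of (welfare, critical_level) pairs."""
    return sum(w - c for w, c in people)

# Situation 1: ten very happy humans, no extra mice.
s1 = total_relative_utility([(80.0, 0.0)] * 10)

# Situation 2: the humans are made miserable so that a huge number of
# mice with barely positive welfare can exist.
humans_miserable = [(-50.0, 0.0)] * 10
mice = [(0.01, 0.0)] * 1_000_000          # zero critical level (total view)
s2 = total_relative_utility(humans_miserable + mice)

# With zero critical levels, the mice tip the balance toward situation 2:
assert s2 > s1

# Choosing a critical level above the mice's tiny welfare reverses the
# verdict, and the indifferent mice cannot complain about that choice:
mice_high_c = [(0.01, 0.02)] * 1_000_000
s2_high = total_relative_utility(humans_miserable + mice_high_c)
assert s2_high < s1
```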

Reducing existential risks or wild animal suffering?

Perhaps there is more of importance than mere welfare. Concerning the repugnant sadistic conclusion I can say two things. First, I am not willing to put myself and all my friends in extreme misery merely for the extra existence of quadrillions of people who have nothing but a small positive experience of tasting an apple. Second, if I were one of those extra people living for a minute and tasting an apple, knowing that my existence involved the extreme suffering of billions of people who could otherwise have been very happy, I would rather not exist. That means that even if my welfare from briefly tasting the apple (a nice juicy Pink Lady) is positive, I still have a preference for the other situation where I don't exist, so my preference (relative utility) in the situation where I exist is negative. So in the second situation, where the extra people exist, whether I'm one of the suffering people or one of the extra, apple-eating people, in both cases I have a negative preference for that situation. Or stated differently: in the first situation, where only the billion happy people exist, no-one can complain (the non-existing people are not able to complain about their non-existence and their forgone happiness of tasting an apple). In the second situation, where those billion people are in extreme misery, they could complain. The axiom that we should minimize the sum of complaints is as reasonable as the axiom that we should maximize the sum of welfare.
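The complaints axiom in the last sentence can be sketched as follows, with hypothetical, scaled-down numbers: only people who exist in the chosen situation and have a negative preference (relative utility) for it can complain; non-existent people cannot.

```python
# 'Minimize the sum of complaints': a complaint is the magnitude of a
# negative relative utility of someone who exists in the chosen situation.
def total_complaints(relative_utilities):
    """relative_utilities: one value per *existing* person."""
    return sum(-u for u in relative_utilities if u < 0)

# First situation: the happy people exist; the potential extra people
# don't exist and so cannot complain about their forgone apple.
happy_world = [10.0] * 1_000

# Second situation: the same people are in extreme misery, and the extra
# apple-tasting people also prefer not to exist at that price (slightly
# negative preference), so both groups can complain.
misery_world = [-100.0] * 1_000 + [-0.1] * 100_000

assert total_complaints(happy_world) == 0
assert total_complaints(misery_world) > total_complaints(happy_world)
```

On these numbers the first situation trivially minimizes the sum of complaints, mirroring the argument above.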

Reducing existential risks or wild animal suffering?

I don't see why the A-Z comparison is unreliable, based on your example. Why would the intuitions behind the repugnant conclusion be less reliable than the intuitions behind our choice of axioms? And we're not merely talking about the repugnant conclusion, but about the sadistic repugnant conclusion, which is intuitively even more repugnant. So suppose we have to choose between two situations. In the first, there is only one future human generation after us (say a few billion people), all with very long and extremely happy lives. In the second, there are quadrillions of future human generations, each with billions of people, but they only live for one minute, in which they can experience the joy of taking a bite from an apple. Except for the first of those future generations, who will suffer extremely for many years. So in order to have many future generations, the first of those future generations will have to live lives of extreme misery, and all the other future lives are nothing more than tasting an apple. Can the joy of quadrillions of people tasting an apple trump the extreme misery of billions of people over many years?

Reducing existential risks or wild animal suffering?

I very much agree with the points you make. About choice dependence: I'll leave that up to each person to decide for themselves. For example, if everyone strongly believes that the critical levels should be choice-set independent, then fine: they can choose choice-set-independent critical levels for themselves. But the critical levels indeed also reflect moral preferences, and can incorporate moral uncertainty. So for example someone with a strong credence in total utilitarianism might lower his or her critical level and make it choice-set independent.

About the extreme preferences: I suggest people can choose a normalization procedure, such as variance normalization (cf. Owen Cotton-Barratt, http://users.ox.ac.uk/~ball1714/Variance%20normalisation.pdf; see also https://stijnbruers.wordpress.com/2018/06/06/why-i-became-a-utilitarian/).
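A sketch of what variance normalization does, with illustrative numbers (see the linked note for the actual proposal): each person's utilities over the options are rescaled to mean zero and unit variance, so extreme preferences cannot dominate.

```python
import statistics

# Rescale one agent's option-utilities to mean 0 and (population) variance 1.
def variance_normalize(utilities):
    mean = statistics.mean(utilities)
    sd = statistics.pstdev(utilities)
    return [(u - mean) / sd for u in utilities]

# An agent with extreme stakes and a modest agent end up with the same
# normalized utilities over the two options, so neither dominates:
extreme = variance_normalize([1000.0, -1000.0])
modest = variance_normalize([1.0, -1.0])
assert extreme == modest == [1.0, -1.0]
```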

"It's worth noting that the resulting theory won't avoid the sadistic repugnant conclusion unless every agent has very very strong moral preferences to avoid it. But I think you're OK with that. I get the impression that you're willing to accept it in increasingly strong forms, as the proportion of agents who are willing to accept it increases." Indeed!

Reducing existential risks or wild animal suffering?

But the critical level c is variable, and can depend on the choice set. So suppose the choice set consists of two situations. In the first, I exist and have a positive welfare (or utility) w>0. In the second, I don't exist and there is another person with a negative utility u<0; his relative utility u' will then also be negative. For any positive welfare w I can pick a critical level c>0 with c<w-u', such that my relative utility w-c>u', which means it would be better if I exist. So you turned it around: instead of saying "for any critical level c there is a welfare w...", we should say: "for any welfare w there is a critical level c..."
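A quick numeric check of the inequality above, with hypothetical values for w and u': any critical level c strictly between 0 and w - u' makes my relative utility w - c exceed u'.

```python
# Hypothetical values: my welfare if I exist, and the other person's
# relative utility in the situation where I don't exist.
w = 5.0
u_prime = -3.0

# Pick any c with 0 < c < w - u', e.g. the midpoint of that interval:
c = (w - u_prime) / 2
assert 0 < c < w - u_prime

# Then my relative utility exceeds u', so the situation where I exist
# is the better one in this two-situation choice set:
assert w - c > u_prime
```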

Reducing existential risks or wild animal suffering?

"If you think the idea of people with negative utility being created to prevent your happy existence is even more counterintuitive than people having negative welfare to produce your happy existence, it would seem your view would demand that you set a critical value of 0 for yourself." No, my view demands that we should not set the critical level too high. A strictly positive critical level that is low enough not to result in the choice of that counter-intuitive situation is still possible.

"A situation where you don't exist but uncounted trillions of others are made maximally happy is going to be better in utilitarian terms (normal, critical-level, variable, whatever), regardless of your critical level (or theirs, for that matter)." That can be true, but still I prefer my non-existence in that case, so something must be negative. I call that thing relative utility. My relative utility is not about overall betterness, but about my own preference. A can be better than B in utilitarian terms, but still I could prefer B over A.

Reducing existential risks or wild animal suffering?

“If individuals are allowed to select their own critical levels to respect their autonomy and preferences in any meaningful sense, that seems to imply respecting those people who value their existence and so would set a low critical level; then you get an approximately total view with regards to those sorts of creatures, and so a future populated with such beings can still be astronomically great.” Indeed: if everyone in the future (except me) would be a total utilitarian, willing to bite the bullet and accept the repugnant sadistic conclusion, setting a very low critical level for themselves, I would accept their choices and we end up with a variable critical level utilitarianism that is very very close to total utilitarianism (it is not exactly total utilitarianism, because I would be the only one with a higher critical level). So the question is: how many people in the future are willing to accept the repugnant sadistic conclusion?

“The treatment of zero levels seems inconsistent: if it is contradictory to set a critical level below the level one would prefer to exist, it seems likewise nonsensical to set it above that level.” Utility measures a preference for a certain situation, but this is independent from other possible situations. However, the critical level and hence the relative utility also takes into account other possible situations. For example: I have a happy life with a positive utility. But if one could choose another situation where I did not exist and everyone else was maximally happy and satisfied, I would prefer (if that would still be an option) that second situation, even if I don’t exist in that situation. That means my relative utility could be negative, if that second situation was eligible. So in a sense, in a particular choice set (i.e. when the second situation is available), I prefer my non-existence. Preferring my non-existence, even if my utility is positive, means I choose a critical level that is higher than my utility.

“You suggest that people set their critical levels based on their personal preferences about their own lives, but then you make claims about their choices based on your intuitions about global properties like the Repugnant Conclusion, with no link between the two.” I do not make claims about their choices based on my intuitions. All I can say is that if people really want to avoid the repugnant sadistic conclusion, they can do so by setting a high critical level. But to be altruistic, I have to accept the choices of everyone else. So if you all choose a critical level of zero, I will accept that, even if that means accepting the repugnant sadistic conclusion, which is very counter-intuitive to me.

“The article makes much about avoiding repugnant sadistic conclusion, but the view you seem to endorse at the end would support creating arbitrary numbers of lives consisting of nothing but intense suffering to prevent the existence of happy people with no suffering who set their critical level to an even higher level than the actual one.” This objection to fixed critical level utilitarianism can be easily avoided with variable critical level utilitarianism. Suppose there is someone with a positive utility (a very happy person), who sets his critical level so high that a situation should be chosen where he does not exist, and where extra people with negative utilities exist. Why would he set such a high critical level? He cannot want that. This is even more counter-intuitive than the repugnant sadistic conclusion. With fixed critical level utilitarianism, such a counter-intuitive conclusion can occur because everyone would have to accept the high critical level. But variable critical level utilitarianism can easily avoid it by taking lower critical levels.

Reducing existential risks or wild animal suffering?

I honestly don't see yet how setting a high critical level to avoid the repugnant sadistic conclusion would automatically result in counter-intuitive problems with the lexicality of a quasi-negative utilitarianism. Why would striking a compromise be less preferable than going all the way to a sadistic conclusion? (For me, your example and calculations are still unclear: what is the choice set, and what is the distribution of utilities in each possible situation?)

With rigidity I indeed mean having strong requirements on critical levels. Allowing to choose critical levels dependent on the choice set is an example that introduces much more flexibility. But again, I'll leave it up to everyone to decide for themselves how rigidly they prefer to choose their own critical levels. If you find the choice set dependence of critical levels and relative utilities undesirable, you are allowed to pick your critical level independently from the choice set. That's fine, but we should accept the freedom of others not to do so.

Reducing existential risks or wild animal suffering?

I guess your argument fails because it still contains too much rigidity. For example: the choice of critical level can depend on the choice set, i.e. the set of all situations that we can choose. I have added a section to my original blog post, which I copy here. <<Suppose we can choose a situation S1 in which an extra person i exists with a positive utility U(i,S1)>0. However, suppose another situation S2 is available for us (i.e. we can choose situation S2), in which that person i will not exist, but everyone else is maximally happy, with maximum positive utilities. Although person i in situation S1 will have a positive utility, that person can still prefer the situation where he or she does not exist and everyone else is maximally happy. It is as if that person is a bit altruistic and prefers his or her non-existence in order to improve the well-being of others. That means his or her critical level C(i,S1) can be higher than the utility U(i,S1), such that his or her relative utility becomes negative in situation S1. In that case, it is better to choose situation S2 and not let the extra person be born. If instead of situation S2, another situation S2’ becomes available, where the extra person does not exist and everyone else has the same utility levels as in situation S1, then the extra person in situation S1 could prefer situation S1 above S2’, which means that his or her new critical level C(i,S1)’ remains lower than the utility U(i,S1). In other words: the choice of the critical level can depend on the possible situations that are eligible or available to the people who must make the choice about who will exist. If situations S1 and S2 are available, the chosen critical level will be C(i,S1), but if situations S1 and S2’ are available, the critical level can change into another value C(i,S1)’. Each person is free to decide whether or not his or her own critical level depends on the choice set.>> So suppose we can choose between two situations. In situation A, one person has utility 0 and another person has utility 30.
In situation Bq, the first person has utility -10 and, instead of a second person, there are now a huge number of q persons with very low but still positive utilities (i.e. low levels of k). If the extra people think that preferring Bq is sadistic/repugnant, they can choose higher critical levels such that, in this choice set between A and Bq, situation A should be chosen. If instead of situation A we can choose situations B or C, the critical levels may change again. In the end, what this means is something like: let’s present to all (potential) people the choice set of all possible (electable) situations that we can choose. We let them choose their preferred situation, and then let them determine their own critical levels so as to obtain that preferred situation given that choice set.
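The A-versus-Bq choice above can be sketched numerically. The values of q and k are hypothetical assumptions (the comment leaves them open); what matters is only that q is large and k is small but positive.

```python
# Total relative utility: sum of (welfare - critical level) per person.
def total_relative_utility(people):
    return sum(w - c for w, c in people)

q, k = 100_000, 0.001     # assumed: many extra people, tiny welfare k each

# Situation A: one person at utility 0, another at utility 30.
A = [(0.0, 0.0), (30.0, 0.0)]

# Situation Bq with zero critical levels: the first person suffers (-10)
# and q extra people exist at welfare k.
Bq_zero = [(-10.0, 0.0)] + [(k, 0.0)] * q

# Same situation, but the extra people set their critical level to 2k
# because they find preferring Bq sadistic/repugnant.
Bq_high = [(-10.0, 0.0)] + [(k, 2 * k)] * q

# With zero critical levels Bq beats A (the sadistic repugnant conclusion);
# with the raised critical levels, A should be chosen instead.
assert total_relative_utility(Bq_zero) > total_relative_utility(A)
assert total_relative_utility(Bq_high) < total_relative_utility(A)
```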

Reducing existential risks or wild animal suffering?

I would say the utility of a person in a situation S measures how strongly that person prefers the given situation, independently of the other possible situations that we could have chosen. But in the end the thing that matters is someone’s relative utility, which can be written as the utility minus a personal critical level. This indeed reframes the discussion into one about where the zero point of utility should lie. In particular, when it comes to interpersonal comparisons of utility or well-being, the utilities are only defined up to a positive affine transformation, i.e. up to multiplication by a positive scalar and addition of a constant term. The possible addition of a constant term is what sets the zero-point utility level. I have written more about it here: https://stijnbruers.wordpress.com/2018/07/03/on-the-interpersonal-comparability-of-well-being/
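The affine-transformation point can be illustrated with hypothetical numbers: the scalar rescales the unit of utility, while the constant term shifts the zero point, which is exactly where the critical level enters.

```python
# A positive affine transformation u -> a*u + b with a > 0.
def affine(u, a, b):
    return a * u + b

utilities = [-2.0, 0.0, 3.0]
rescaled = [affine(u, a=2.0, b=5.0) for u in utilities]   # [1.0, 5.0, 11.0]

# The ranking of situations is preserved by the transformation...
assert sorted(utilities) == utilities and sorted(rescaled) == rescaled

# ...but which situations count as 'positive' (above the zero point)
# depends on b, i.e. on where the zero point / critical level is placed:
assert utilities[0] < 0 and rescaled[0] > 0
```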
