
This article argues that reducing wild animal suffering may be more important than reducing existential risks. The argument is largely based on my newly developed population ethical theory of variable critical level utilitarianism.

 

What are the most important focus areas if you want to do the most good in the world? Focus on the current generation or the far future? Focus on human welfare or animal welfare? These are the fundamental cause prioritization questions of effective altruism. Look for the biggest problems that are the most neglected and are the easiest to solve. If we do this exercise, two focus areas become immensely important: reducing existential risks and reducing wild animal suffering. But which of those two deserves our top priority?

 

X-risks

An existential risk (X-risk) is a catastrophic disaster from nature (e.g. an asteroid impact, a supervirus pandemic or a supervolcano eruption), technologies (e.g. artificial superintelligence, synthetic biology, nanotechnology or nuclear weapons) or human activities (e.g. runaway global warming or environmental degradation), that can end all of civilization or intelligent life on earth.

If we manage to avoid existential risks, there can be flourishing human or intelligent life for many generations in the future, able to colonize other planets and multiply by the billions. The number of sentient beings with long happy flourishing lives in the far future can be immense: a hundred thousand billion billion billion (10^32) humans, including ten million billion (10^16) humans on Earth, according to some estimates. In a world where an existential risk occurs, all those potentially happy people will never be born.

 

WAS

Wild animal suffering (WAS) is the problem created by starvation, predation, competition, injuries, diseases and parasites that we see in nature. There are a lot of wild animals alive today: e.g. 10^13 – 10^15 fish and 10^17 – 10^19 insects, according to some estimates. It is possible that many of those animals have lives not worth living: that they have more or stronger negative than positive experiences and hence an overall negative well-being. Most animals follow an r-selection reproductive strategy: they have a lot of offspring (the population has a high rate of reproduction, hence the name ‘r-selection’), and only a few of them survive long enough to reproduce themselves. The lives of most of those animals are very short and therefore probably miserable. We are not likely to see most of those animals, because they die and are eaten quickly. When we see a happy bird singing, ten of its siblings may have died within a few days of hatching. When the vast majority of newborns die, we can say that nature is a failed state, unable to take care of the well-being of its inhabitants.

Due to the numbers (billions of billions), the suffering of wild animals may be a bigger problem than all human suffering from violence, accidents and diseases (a few billion humans per year), and all human caused suffering of domesticated animals (a few hundred billion per year).

 

Population ethics

What is worse: all the suffering, today and in the future, of wild animals who have miserable lives? Or the non-existence of a huge number of people in the far future who could have had beautiful lives? To solve this question, we need to answer one of the most fundamental questions in ethics: what is the best population ethical theory? Population ethics is the branch of moral philosophy that deals with choices that influence who will exist and how many individuals will exist.

A promising population ethical theory is variable critical level utilitarianism. Each sentient being has a utility function that measures how strongly that individual prefers a situation. That utility can be a function of happiness and all other things valued by that individual. If your utility is positive in a certain situation, you have a positive preference for that situation. The more you prefer a situation, the higher your utility in that situation. If a person does not exist, that person has a utility of zero.

The simplest population ethical theory is total utilitarianism, which says that we should choose the situation that has the highest total sum of everyone’s utilities. However, this theory has a very counter-intuitive implication, called a sadistic repugnant conclusion (a combination of the sadistic conclusion and the repugnant conclusion in population ethics). Suppose you can choose between two situations. In the first situation, a million people exist and have maximally happy lives, with maximum utilities. In the second situation, those million people have very miserable lives, with extremely negative levels of utility. But in that situation, there also exist new people with utilities slightly above zero, i.e. lives barely worth living. If we take the sum of everyone’s utilities in that second situation, and if the number of those extra people is high enough, the total sum becomes bigger than the total of utilities in the first situation. According to total utilitarianism, the second situation is better, even if the already existing people have maximally miserable lives and the new people have lives barely worth living, whereas in the first situation everyone is maximally satisfied, and no-one is miserable.
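
To make the arithmetic concrete, here is a minimal numeric sketch of that comparison; the utility values of +100, -100 and +0.01 and the population sizes are illustrative assumptions, not figures from the article.

```python
# A minimal numeric sketch of the sadistic repugnant conclusion under total
# utilitarianism. All utility values and population sizes are illustrative
# assumptions, not figures from the article.

# Situation 1: a million people with maximally happy lives (utility +100 each).
situation_1 = 1_000_000 * 100                       # total utility: 100,000,000

# Situation 2: the same million people are maximally miserable (utility -100
# each), plus a huge number of extra people whose lives are barely worth
# living (utility +0.01 each).
extra_people = 100_000_000_000                      # 10^11 extra people
situation_2 = 1_000_000 * -100 + extra_people * 0.01

print(situation_1, situation_2)                     # 100000000  900000000.0
# Once the number of extra people is large enough, total utilitarianism ranks
# situation 2 above situation 1, which is the counter-intuitive conclusion.
```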

To avoid this conclusion, we can change the utilitarian theory, for example by using a reference utility level as a critical level. Instead of adding utilities, we add relative utilities, where a person's relative utility is his or her utility minus the critical level. The critical level of a non-existing person is zero. This population ethical theory is critical level utilitarianism, and it can avoid the sadistic repugnant conclusion: if the critical level is higher than the small positive utilities of the new people in the second situation, the relative utilities of those extra people are all negative. The sum of all those relative utilities never becomes positive, which means the total relative utility of the first situation is always higher than that of the second situation, and so the first situation is preferred.
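
Continuing the same illustrative numbers, here is a sketch of how a positive critical level blocks the conclusion; the constant critical level of 1 is again just an assumption for the example.

```python
# The same illustrative numbers under critical level utilitarianism: each
# person counts for their utility minus a critical level (here a constant
# critical level of 1, chosen only for illustration).

critical_level = 1

# Situation 1: a million people at utility +100.
situation_1 = 1_000_000 * (100 - critical_level)        # 99,000,000

# Situation 2: the same million people at utility -100, plus extra people at
# utility +0.01, each now contributing 0.01 - 1 = -0.99.
extra_people = 100_000_000_000
situation_2 = (1_000_000 * (-100 - critical_level)
               + extra_people * (0.01 - critical_level))

print(situation_1 > situation_2)        # True, for any number of extra people
# Because the extra lives fall below the critical level, adding more of them
# only lowers the total, so situation 1 remains preferred.
```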

If all critical levels of all persons in all situations are the same, we have a constant or rigid critical level utilitarianism, but this theory still faces some problems. We can make the theory more flexible by allowing variable critical levels: not only can everyone determine his or her own utility in a specific situation, everyone can also choose his or her critical level. The preferred critical level can vary from person to person and from situation to situation.

A person's critical level always lies within a range, between his or her lowest preferred and highest preferred levels. The lowest preferred critical level is zero: if a person would choose a negative critical level, that person would accept a situation where he or she can have a negative utility, such as a life not worth living. Accepting a situation that one would not prefer, is basically a contradiction. The highest preferred critical level varies from person to person. Suppose we can decide to bring more people into existence. If they choose a very high critical level, their utilities fall below this critical level, and hence their relative utilities become negative. In other words: it is better that they do not exist. So if everyone chose a very high critical level, it would be better that no-one exists, even if people could have positive utilities (but negative relative utilities). This theory is a kind of naive negative utilitarianism, because everyone's relative utility becomes a negative number and we have to choose the situation that maximizes the total of those relative utilities. It is naive because the maximum lies at the situation where no-one exists (i.e. where all relative utilities are zero instead of negative). If people do not want that situation, they have chosen a critical level that is too high. If everyone chose their highest preferred critical level instead, we end up with a better kind of negative utilitarianism, one which avoids the conclusion that non-existence is always best. It is a quasi-negative utilitarianism, because the relative utilities are no longer always negative: they can sometimes be (slightly) positive, in order to allow the existence of extra persons.
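
As a rough numeric sketch of this quasi-negative view: the utilities of +30, +50, +80 and -10, the critical level of 5, and the small EPS standing in for "infinitesimally below" are all assumptions for illustration.

```python
# A sketch of quasi-negative utilitarianism: each person with positive utility
# u chooses a critical level just below u (u - EPS, with EPS a small stand-in
# for "infinitesimally below"). The utility numbers are assumptions.

EPS = 1e-9

def relative_utility(utility, critical_level):
    return utility - critical_level

# Three happy people who each pick their highest preferred critical level:
happy = [relative_utility(u, u - EPS) for u in (30, 50, 80)]    # each ~ +EPS

# One miserable person (utility -10) whose critical level is still positive (5):
miserable = relative_utility(-10, 5)                            # -15

print(sum(happy), miserable)
# Adding happy people barely raises the total, while adding a miserable person
# lowers it a lot: this is why the view prioritizes preventing negative lives
# over creating positive ones.
```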

 

X-risks versus WAS

Now we come to the crucial question: if variable critical level utilitarianism is the best population ethical theory, what does it say about our two problems of existential risks and wild animal suffering?

If everyone chose their lowest preferred critical level, we end up with total utilitarianism, and according to that theory, the potential existence of many happy people in the far future becomes dominant. Even if the probability of an existential risk is very small (say one in a million in the next century), reducing that probability is of the highest importance when so many future lives are at stake. However, we have seen that total utilitarianism implies a sadistic repugnant conclusion that many people will not accept, which lowers their credence in the theory.

If people want to move safely away from the sadistic repugnant conclusion and other problems of rigid critical level utilitarianism, they should choose a critical level infinitesimally close to (but still below) their highest preferred levels. If everyone does so, we end up with a quasi-negative utilitarianism. According to this theory, adding new people (or guaranteeing the existence of future people by eliminating existential risks) becomes only marginally important. The prime focus of this theory is avoiding the existence of people with negative levels of utility: adding people with positive utilities becomes barely important because their relative utilities are small. But adding people with negative utilities is always bad, because the critical levels of those people are always positive and hence their relative utilities are always negative and often big in size.

However, we should not avoid the existence of people with negative utilities at all costs. Simply decreasing the number of future people (avoiding their existence) in order to decrease the number of potential people with miserable lives is not a valid solution according to quasi-negative utilitarianism. Suppose there will be one sentient being in the future who will have a negative utility, i.e. a life not worth living, and the only way to avoid that negative utility is that no-one in the future exists. However, the other potential future people strongly prefer their own existence: they all have very positive utilities. In order to allow for their existence, they could lower their critical levels such that a future with all those happy future beings and the one miserable individual is still preferred. This means that according to quasi-negative utilitarianism, the potential existence of one miserable person in the future does not imply that we should prefer a world where no-one will live in the future. But what if a lot of future individuals (say a majority) have lives not worth living? The few happy potential people would have to decrease their own critical levels below zero in order to allow their existence. In other words: if the number of future miserable lives is too high, a future without any sentient beings would be preferred according to quasi-negative utilitarianism.

If everyone chooses a high critical level such that we end up with a quasi-negative utilitarianism, we should give more priority to eliminating wild animal suffering than eliminating existential risks, because lives with negative utilities are probably most common in wild animals and adding lives with positive well-being is only minimally important. In an extreme case where most future lives would be unavoidably very miserable (i.e. if the only way to avoid this misery is to avoid the existence of those future people), avoiding an existential risk could even be bad, because it would guarantee the continued existence of this huge misery. Estimating the distribution of utilities in future human and animal generations becomes crucial. But even if with current technologies most future lives would be miserable, it can still be possible to avoid that future misery by using new technologies. Hence, developing new methods to avoid wild animal suffering becomes a priority.

 

Expected value calculations

If total utilitarianism is true (i.e. if everyone chooses a critical level equal to zero), and if existential risks are eliminated, the resulting increase in total relative utility (of all current and far-future people) is very big, because the number of future people is so large. If quasi-negative utilitarianism is true (i.e. if everyone chooses their maximum preferred critical level), and if wild animal suffering is eliminated, the resulting increase in total relative utility of all current and near-future[1] wild animals is big, but perhaps smaller than the increase in total relative utility by eliminating existential risks according to total utilitarianism, because the number of current and near-future wild animals is smaller than the number of potential far-future people with happy lives. This implies that eliminating existential risks is more valuable, given the truth of total utilitarianism, than eliminating wild animal suffering, given the truth of quasi-negative utilitarianism.

However, total utilitarianism seems a less plausible population ethical theory than quasi-negative utilitarianism, because it faces the sadistic repugnant conclusion. This implausibility means it is less likely that everyone chooses a critical level of zero. Eliminating existential risks would be most valuable if total utilitarianism were true, but its expected value becomes lower because of the low probability that total utilitarianism is true. The expected value of eliminating wild animal suffering could then become higher than the expected value of eliminating existential risks.
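
As an illustration of this expected-value reasoning, here is a toy comparison; the credences and value figures are placeholders chosen for the example, not estimates from the article.

```python
# A toy expected-value comparison under moral uncertainty. The credences and
# value figures are placeholders chosen for the example, not estimates from
# the article.

credence_total_util = 0.001      # chance that (nearly) everyone picks a zero critical level
credence_quasi_negative = 0.999  # chance that everyone picks the highest preferred level

value_xrisk_if_total_util = 10**9    # value of eliminating X-risks given total utilitarianism
value_was_if_quasi_negative = 10**7  # value of eliminating WAS given quasi-negative utilitarianism

ev_xrisk = credence_total_util * value_xrisk_if_total_util      # 1,000,000
ev_was = credence_quasi_negative * value_was_if_quasi_negative  # ~9,990,000

print(ev_xrisk, ev_was)
# With these assumed numbers the expected value of eliminating wild animal
# suffering is higher, but a modestly larger credence in total utilitarianism
# would flip the comparison, as the next paragraph notes.
```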

But still, even if the fraction of future people who choose zero critical levels is very low, the huge number of future people indicates that guaranteeing their existence (i.e. eliminating existential risks) remains very important.

 

The interconnectedness of X-risks and WAS

There is another reason why reducing wild animal suffering might gain importance over reducing existential risks. If we reduce existential risks, more future generations of wild animals will be born. This increases the likelihood that more animals with negative utilities will be born. For example: colonizing other planets could be a strategy to reduce existential risks (e.g. blowing up planet Earth would not kill all humans if we could survive on other planets). But colonization of planets could mean introducing ecosystems and hence introducing wild animals, which increases the number of wild animals and increases the risk of more future wild animal suffering. If decreasing existential risks means that the number of future wild animals increases, and if this number becomes bigger and bigger, the non-existence of animals with negative utilities (i.e. the elimination of wild animal suffering) becomes more and more important.

On the other hand, if an existential risk kills all humans, but the non-human animals survive, and if humans could have been the only hope for wild animals in the far future by inventing new technologies that eliminate wild animal suffering, an existential risk might make it worse for the animals in the far future. That means eliminating existential risks might become more important when eliminating wild animal suffering becomes more important.

So we have to make a distinction between existential risks that would kill all humans and animals, and existential risks that would kill only those persons who could potentially help future wild animals. The second kind of existential risk is bad for wild animals, so eliminating this second kind of risk is important for eliminating wild animal suffering in the far future.

 

Victimhood

The difference between total utilitarianism (prioritizing the elimination of existential risks) and quasi-negative utilitarianism (prioritizing the elimination of wild animal suffering), can also be understood in terms of victimhood. If due to an existential risk a potential happy person would not exist in the future, that non-existing person cannot be considered as a victim. That non-existing person cannot complain against his or her non-existence. He or she does not have any experiences and hence is not aware of being a victim. He or she does not have any preferences in this state of non-existence. On the other hand, if a wild animal has a negative utility (i.e. a miserable life), that animal can be considered as a victim.

Of course, existential risks create victims: the final generation of existing people would be harmed and would not want the extinction. But the number of people in that last generation will be relatively small compared to the many generations of many wild animals who can suffer. So if the status of victimhood is especially bad, wild animal suffering is worse than existential risks, because the problem of wild animal suffering creates more victims.

 

Neglectedness

Both existential risk reduction and wild animal suffering reduction are important focus areas of effective altruism, but reducing wild animal suffering seems to be more neglected. Only a few organizations work on reducing wild animal suffering: Wild-Animal Suffering Research, Animal Ethics, Utility Farm and the Foundational Research Institute. On the other hand, there are many organizations working on existential risks both generally (e.g. the Centre for the Study of Existential Risk, the Future of Humanity Institute, the Future of Life Institute, the Global Catastrophic Risk Institute and 80000 Hours) and specifically (working on AI-safety, nuclear weapons, global warming, global pandemics,…). As wild animal suffering is more neglected, it has a lot of room for more funding. Based on the importance-tractability-neglectedness framework, wild animal suffering deserves a higher priority.

 

Summary

In the population ethical theory of variable critical level utilitarianism, there are two extreme critical levels that correspond to two dominant population ethical theories. If everyone chooses the lowest preferred critical level (equal to zero), we end up with total utilitarianism. If everyone chooses the highest preferred critical level, we end up with quasi-negative utilitarianism. According to total utilitarianism, we should give top priority to avoiding existential risks, such that the existence of many future happy people is guaranteed. According to quasi-negative utilitarianism, we should give top priority to avoiding wild animal suffering, such that the non-existence of animals with miserable lives (negative utilities) is guaranteed (but not always simply by decreasing or eliminating wild animal populations, and not necessarily at the cost of wiping out all life).

The value of eliminating existential risks when everyone chooses the lowest preferred critical level would probably be higher than the value of eliminating wild animal suffering when everyone chooses the highest preferred critical level. But total utilitarianism is less likely to be our preferred population ethical theory because it faces the sadistic repugnant conclusion. This means that the expected value of eliminating wild animal suffering could be bigger than the expected value of eliminating existential risks. These calculations become even more complex when we consider the interconnectedness of the problems of existential risks and wild animal suffering. For example, decreasing existential risks might increase the probability of the existence of more future wild animals with negative utilities. But eliminating some existential risks might also guarantee the existence of people who could help wild animals and potentially eliminate all future wild animal suffering with new technologies.

Finally, wild animal suffering deserves a higher priority because this focus area is more neglected than existential risks.

 



[1] We cannot simply add the relative utilities of far-future wild animals, because that would presume that existential risks are avoided.

Comments

I have several issues with the internal consistency of this argument:

  • If individuals are allowed to select their own critical levels to respect their autonomy and preferences in any meaningful sense, that seems to imply respecting those people who value their existence and so would set a low critical level; then you get an approximately total view with regards to those sorts of creatures, and so a future populated with such beings can still be astronomically great
  • The treatment of zero levels seems inconsistent: if it is contradictory to set a critical level below the level one would prefer to exist, it seems likewise nonsensical to set it above that level
  • You suggest that people set their critical levels based on their personal preferences about their own lives, but then you make claims about their choices based on your intuitions about global properties like the Repugnant Conclusion, with no link between the two
  • The article makes much of avoiding the sadistic repugnant conclusion, but the view you seem to endorse at the end would support creating arbitrary numbers of lives consisting of nothing but intense suffering to prevent the existence of happy people with no suffering who set their critical level to an even higher level than the actual one

On the first point, you suggest that individuals get to set their own critical levels based on their preferences about their own lives. E.g.

The lowest preferred critical level is zero: if a person would choose a negative critical level, that person would accept a situation where he or she can have a negative utility, such as a life not worth living. Accepting a situation that one would not prefer, is basically a contradiction.

So if my desires and attitudes are such that I set a critical level well below the maximum, then my life can add substantial global value. E.g. if A has utility +5 and sets critical value 0, B has utility +5 and chooses critical value 10, and C has utility -5 and critical value 10, then 3 lives like A will offset one life like C, and you can get most of the implications of the total view, and in particular an overwhelmingly high value of the future if the future is mostly populated with beings who favor existing and set low critical levels for themselves (which one could expect from people choosing features of their descendants or selection).
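
For concreteness, here is the arithmetic of that example, using relative utility = utility minus critical level as defined in the article.

```python
# The arithmetic of the example above, using relative utility = utility minus
# critical level (as defined in the article).

def rel(utility, critical_level):
    return utility - critical_level

A = rel(5, 0)      # +5
B = rel(5, 10)     # -5
C = rel(-5, 10)    # -15

print(3 * A + C)   # 0: three A-type lives exactly offset one C-type life
# A future mostly populated by beings who, like A, set low critical levels for
# themselves therefore behaves much like the total view.
```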

On the second point, returning to this quote:

The lowest preferred critical level is zero: if a person would choose a negative critical level, that person would accept a situation where he or she can have a negative utility, such as a life not worth living. Accepting a situation that one would not prefer, is basically a contradiction.

I would note that utility in the sense of preferences over choices, or a utility function, need not correspond to pleasure or pain. The article is unclear on the concept of utility it is using, but the above quote seems to require a preference base, i.e. zero utility is defined as the point at which the person would prefer to be alive rather than not. But then, if 0 is the level at which one would prefer to exist, isn't it equally contradictory to have a higher critical level and reject lives that you would prefer? Perhaps you are imagining someone who thinks 'given that I am alive I would rather live than die, but I dislike having come into existence in the first place, which death would not change.' But in this framework that would just be negative utility within the assessment of the overall life (and people without that attitude can be unbothered).

Regarding the third point, if each of us chooses our own critical level autonomously, I do not get to decree a level for others. But the article makes several arguments that seem to conflate individual and global choice by talking about everyone choosing a certain level, e.g.:

If people want to move safely away from the sadistic repugnant conclusion and other problems of rigid critical level utilitarianism, they should choose a critical level infinitesimally close to (but still below) their highest preferred levels.

But if I set a very high critical level for myself, that doesn't lower the critical levels of others, and so the repugnant conclusion can proceed just fine with the mildly good lives of those who choose low critical levels for themselves. Having the individuals choose for themselves based on prior prejudices about global population ethics also defeats the role of the individual choice as a way to derive the global conclusion. I don't need to be a total utilitarian in general to approve of my existence in cases in which I would prefer to exist.

Lastly, a standard objection to critical level views is that they treat lives below the critical level (but better than nothing by the person's own lights and containing happiness but not pain) as negative, and so will endorse creating lives of intense suffering by people who wish they had never existed to prevent the creation of multiple mildly good lives. With the variable critical level account all those cases would still go through using people who choose high critical levels (with the quasi-negative view, it would favor creating suicidal lives of torment to offset the creation of blissful beings a bit below the maximum). I don't see that addressed in the article.

[anonymous]:

The existence of critical level theories all but confirms the common claim that those who deny the Repugnant Conclusion underrate low quality lives. An inevitable symptom of this is the confused attempt to set a critical level that is positive.

When we think about the Z population, we try to conceive of a life that is only slightly positive using intuitive affect/aversion heuristics. In such a life, something as trivial as one additional bad day could make them net negative. Spread across the whole Z population, this makes the difference between Z being extremely good and extremely bad. But this difference, although massive in terms of total welfare, is small from the point of view of heuristic intuitions that focus on the quality of life of a single individual in the Z population.

For this reason, the A vs Z comparison is extremely unreliable, and we should expect our intuitions to go completely haywire when asked to make judgements about it. In such cases, it is best to return to obvious arguments and axioms such as that good lives are good and that more is better etc. Numerous persuasive propositions and axioms all imply the Repugnant Conclusion from numerous different directions.

Critical level theories are a symptom of a flawed and failed approach to ethics that relies on intuitions we have reason to believe will be unreliable and are contradicted by numerous highly plausible lines of argument.

[anonymous]:

I don't see why the A-Z comparison is unreliable, based on your example. Why would the intuitions behind the repugnant conclusion be less reliable than the intuitions behind our choice of axioms? And we're not merely talking about the repugnant conclusion, but about the sadistic repugnant conclusion, which is intuitively more repugnant. So suppose we have to choose between two situations. In the first situation, there is only one more future human generation after us (let's say a few billion people), all with very long and extremely happy lives. In the second situation, there are quadrillions of future human generations, with billions of people each, but they only live for 1 minute, in which they can experience the joy of taking a bite from an apple. Except for the first generation, who will suffer extremely for many years. So in order to have many future generations, the first of those future generations will have to live lives of extreme misery. And all the other future lives are nothing more than tasting an apple. Can the joy of quadrillions of people tasting an apple trump the extreme misery of billions of people for many years?

[anonymous]:

On the critical level theory, the lives of the people who come into the world experiencing the joy of an apple for 1 minute have negative value. This seems clearly wrong, which illustrates my point. You would have to say that the world was made worse by the existence of a being who lived for one minute, enjoyed their apple, then died (and there were no instrumental costs to their life). This is extremely peculiar, from a welfarist perspective. Welfarists should be positive about additional welfare! Also, do you think it is bad for me to enjoy a nice juicy Pink Lady now? If not, then why is it bad for someone to come into existence and only do that?

Methodologically, rather than noting that the sadistic repugnant conclusion is counterintuitive and then trying to conjure up theories that avoid it, I think it would make more sense to ask why the sadistic repugnant conclusion would be false. The Z lives are positive, so it is better for them to live than not. The value aggregates in a non-diminishing way - the first life adds as much value as the quadrillionth. This means that the Z population can have arbitrarily large value depending on its size, which means that it can outweigh lots of other things. In my view, it is completely wrongheaded to start by observing that a conclusion is counterintuitive and ignore the arguments for it when building alternatives. This is an approach that has led to meagre progress in population ethics over the last 30 years - can you name a theory developed in this fashion that now commands widespread assent in the field? The approach leads people to develop theories such as CLU, which commit you to holding that a life of positive welfare is negative, which is difficult to understand from a welfarist perspective.

[anonymous]:

Perhaps there is more of importance than merely welfare. Concerning the repugnant sadistic conclusion I can say two things. First, I am not willing to put myself and all my friends in extreme misery merely for the extra existence of quadrillions of people who have nothing but a small positive experience of tasting an apple. Second, when I would be one of those extra people living for a minute and tasting an apple, knowing that my existence involved the extreme suffering of billions of people who could otherwise have been very happy, I would rather not exist. That means even if my welfare of briefly tasting the apple (a nice juicy Pink Lady) is positive, I still have a preference for the other situation where I don't exist, so my preference (relative utility) in the situation where I exist is negative. So in the second situation where the extra people exist, if I'm one of the suffering people or one of the extra, apple-eating people, in both cases I have a negative preference for that situation. Or stated differently: in the first situation where only the billion happy people exist, no-one can complain (the non-existing people are not able to complain against their non-existence and against their forgone happiness of tasting an apple). In the second situation, where those billion people are in extreme misery, they could complain. The axiom that we should minimize the sum of complaints is as reasonable as the axiom that we should maximize the sum of welfare.

[anonymous]:

I have a paper about complaints-based theories that may be of interest - https://www.journals.uchicago.edu/doi/abs/10.1086/684707

One argument I advance there is that these theories appear not to be applicable to moral patients who lack rational agency. Suppose that mice have net positive lives. What would it mean to say of them that they have a preference for not putting millions in extreme misery for the sake of their small net positive welfare? If you say that we should nevertheless not put millions in extreme misery for the sake of quadrillions of mice, then it looks like you are appealing to something other than a complaints-based theory to justify your anti-aggregative conclusion. So, the complaints-based theory isn't doing any work in the argument.

[anonymous]:

Thanks for the paper! Concerning the moral patients and mice: they indeed lack a capability to determine their reference values (critical levels) and express their utility functions (perhaps we can derive them from their revealed preferences). So actually this means those mice do not have a preference for a critical level or for a population ethical theory. They don't have a preference for total utilitarianism or negative utilitarianism or whatever. That could mean that we can choose for them a critical level and hence the population ethical implications, and those mice cannot complain against our choices if they are indifferent. If we strongly want total utilitarianism and hence a zero critical level, fine, then we can say that those mice also have a zero critical level. But if we want to avoid the sadistic repugnant conclusion in the example with the mice, fine, then we can set the critical levels of those mice higher, such that we choose the situation where those quadrillions of mice don't exist. Even the mice who do exist cannot complain against our choice for the non-existence of those extra quadrillion mice, because they are indifferent about our choice.

[anonymous]:

“If individuals are allowed to select their own critical levels to respect their autonomy and preferences in any meaningful sense, that seems to imply respecting those people who value their existence and so would set a low critical level; then you get an approximately total view with regards to those sorts of creatures, and so a future populated with such beings can still be astronomically great.” Indeed: if everyone in the future (except me) would be a total utilitarian, willing to bite the bullet and accept the repugnant sadistic conclusion, setting a very low critical level for themselves, I would accept their choices and we end up with a variable critical level utilitarianism that is very very close to total utilitarianism (it is not exactly total utilitarianism, because I would be the only one with a higher critical level). So the question is: how many people in the future are willing to accept the repugnant sadistic conclusion?

“The treatment of zero levels seems inconsistent: if it is contradictory to set a critical level below the level one would prefer to exist, it seems likewise nonsensical to set it above that level.” Utility measures a preference for a certain situation, but this is independent from other possible situations. However, the critical level and hence the relative utility also takes into account other possible situations. For example: I have a happy life with a positive utility. But if one could choose another situation where I did not exist and everyone else was maximally happy and satisfied, I would prefer (if that would still be an option) that second situation, even if I don’t exist in that situation. That means my relative utility could be negative, if that second situation was eligible. So in a sense, in a particular choice set (i.e. when the second situation is available), I prefer my non-existence. Preferring my non-existence, even if my utility is positive, means I choose a critical level that is higher than my utility.

“You suggest that people set their critical levels based on their personal preferences about their own lives, but then you make claims about their choices based on your intuitions about global properties like the Repugnant Conclusion, with no link between the two.” I do not make claims about their choices based on my intuitions. All I can say is that if people really want to avoid the repugnant sadistic conclusion, they can do so by setting a high critical level. But to be altruistic, I have to accept the choices of everyone else. So if you all choose a critical level of zero, I will accept that, even if that means accepting the repugnant sadistic conclusion, which is very counter intuitive to me.

“The article makes much about avoiding repugnant sadistic conclusion, but the view you seem to endorse at the end would support creating arbitrary numbers of lives consisting of nothing but intense suffering to prevent the existence of happy people with no suffering who set their critical level to an even higher level than the actual one.” This objection to fixed critical level utilitarianism can be easily avoided with variable critical level utilitarianism. Suppose there is someone with a positive utility (a very happy person), who sets his critical level so high that a situation should be chosen where he does not exist, and where extra people with negative utilities exist. Why would he set such a high critical level? He cannot want that. This is even more counter-intuitive than the repugnant sadistic conclusion. With fixed critical level utilitarianism, such counter-intuitive conclusion can occur because everyone would have to accept the high critical level. But variable critical level utilitarianism can easily avoid it by taking lower critical levels.

“This objection to fixed critical level utilitarianism can be easily avoided with variable critical level utilitarianism. Suppose there is someone with a positive utility (a very happy person), who sets his critical level so high that a situation should be chosen where he does not exist, and where extra people with negative utilities exist. Why would he set such a high critical level? He cannot want that. This is even more counter-intuitive than the repugnant sadistic conclusion. With fixed critical level utilitarianism, such counter-intuitive conclusion can occur because everyone would have to accept the high critical level. But variable critical level utilitarianism can easily avoid it by taking lower critical levels.”

Such situations exist for any critical level above zero, since any critical level above zero means treating people with positive welfare as a bad thing, to be avoided even at the expense of some amount of negative welfare.

If you think the idea of people with negative utility being created to prevent your happy existence is even more counterintuitive than people having negative welfare to produce your happy existence, it would seem your view would demand that you set a critical value of 0 for yourself.

For example: I have a happy life with a positive utility. But if one could choose another situation where I did not exist and everyone else was maximally happy and satisfied, I would prefer (if that would still be an option) that second situation, even if I don’t exist in that situation.

A situation where you don't exist but uncounted trillions of others are made maximally happy is going to be better in utilitarian terms (normal, critical-level, variable, whatever), regardless of your critical level (or theirs, for that matter). A change in your personal critical level only changes the actions recommended by your variable CLU when it changes the rankings of actions in terms of relative utilities, which only happens when the actions were already within a distance on the scale of one life.

In other words, that's a result of the summing up of (relative) welfare, not a reason to misstate your valuation of your own existence.

[anonymous]:

"If you think the idea of people with negative utility being created to prevent your happy existence is even more counterintuitive than people having negative welfare to produce your happy existence, it would seem your view would demand that you set a critical value of 0 for yourself." No, my view demands that we should not set the critical level too high. A strictly positive critical level that is low enough such that it would not result in the choice for that counter-intuitive situation, is still posiible.

"A situation where you don't exist but uncounted trillions of others are made maximally happy is going to be better in utilitarian terms (normal, critical-level, variable, whatever), regardless of your critical level (or theirs, for that matter)." That can be true, but still I prefer my non-existence in that case, so something must be negative. I call that thing relative utility. My relative utility is not about overall betterness, but about my own preference. A can be better than B in utilitarian terms, but still I could prefer B over A.

A strictly positive critical level that is low enough such that it would not result in the choice for that counter-intuitive situation is still possible.

As a matter of mathematics this appears impossible. For any critical level c that you pick where c > 0, there is some level of positive welfare w where c > w > 0, with relative utility u = w - c, so u < 0.

There will then be some expected quantity of people with negative utility and negative relative utility (relative utility between u and 0) whose creation variable CLU would prefer to your existence with c and w. You can use gambles (with arbitrarily divisible probabilities) or aggregation across similar people to get arbitrarily close to zero. So either c <= 0, or CLU will recommend the creation of negative utility and negative relative utility people to prevent your existence for some positive welfare levels.
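
A numeric instance of this argument; the specific values of c, w and the others' welfare and critical levels are illustrative assumptions.

```python
# A numeric instance of this objection. All numbers are illustrative.

c, w = 1.0, 0.5        # your critical level and welfare, with c > w > 0
u = w - c              # your relative utility: -0.5

# Ten people, each with negative welfare (-0.01) and a positive critical level
# (0.03), so each has negative utility AND negative relative utility (-0.04):
others_total = 10 * (-0.01 - 0.03)      # total relative utility: -0.4

print(others_total > u)   # True: variable CLU prefers creating them to your existence
```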

[anonymous]:

But the critical level c is variable and can depend on the choice set. So suppose the choice set consists of two situations. In the first, I exist and I have a positive welfare (or utility) w > 0. In the second, I don't exist and there is another person with a negative utility u < 0; his relative utility u' will then also be negative. For any positive welfare w I can pick a critical level c > 0 with c < w - u', such that my relative utility w - c > u', which means it would be better if I exist. So you turned it around: instead of saying "for any critical level c there is a welfare w...", we should say "for any welfare w there is a critical level c...".
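
A numeric sketch of this choice-set-dependent reply, with assumed values for w and u'.

```python
# A sketch of the reply above: if the critical level may depend on the choice
# set, I can pick c strictly between 0 and w - u', so that my relative utility
# w - c beats the alternative u'. Numbers are assumed for illustration.

w = 0.5         # my welfare if I exist
u_alt = -0.04   # relative utility u' of the person who would exist instead

c = (w - u_alt) / 2          # any c with 0 < c < w - u_alt works; this picks 0.27
my_relative_utility = w - c  # 0.23

print(my_relative_utility > u_alt)   # True: the situation in which I exist is preferred
```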

What exactly do you mean by utility here? The quasi-negative utilitarian framework seems to correspond to a shift of everyone's personal utility, such that the shifted utility for each person is 0 whenever this person's life is neither worth living nor not worth living.

It seems to me that a reasonable notion of utility would have this property anyway (but I might just use the word differently than other people; please tell me if there is some widely used definition contradicting this!). This reframes the discussion into one about where the zero point of utility functions should lie, which seems easier to grasp. In particular, from this point of view quasi-negative utilitarianism still gives rise to some form of the sadistic repugnant conclusion.

On a broader point, I suspect that the repugnance of repugnant conclusions usually stems from confusion/disagreement about what "a life worth living" means. However, as in your article, entertaining this conclusion still seems useful in order to sharpen our intuition about what lives are actually worth living.

[anonymous]:

I would say the utility of a person in a situation S measures how strongly a person prefers that given situation, independently from other possible situations that we could have chosen. But in the end the thing that matters is someone’s relative utility, which can be written as the utility minus a personal critical level. This indeed reframes the discussion into one about where the zero point of utility should lie. In particular, when it comes to interpersonal comparisons of utility or well-being, the utilities are only defined up to an affine transformation, i.e. up to multiplication with a scalar and addition with a constant term. This possible addition of a term basically sets the zero point utility level. I have written more about it here: https://stijnbruers.wordpress.com/2018/07/03/on-the-interpersonal-comparability-of-well-being/

Nice post! I enjoyed reading this but I must admit that I'm a bit sceptical.

I find your variable critical level utilitarianism troubling. Having a variable critical level seems OK in principle, but I find it quite bizarre that moral patients can choose what their critical value is i.e. they can choose how morally valuable their life is. How morally good or bad a life is doesn't seem to be a matter of choice and preferences. That's not to say people can't disagree about where the critical level should be, but I don't see why this disagreement should reflect a difference in individual's own critical levels -- plausibly these disagreements are about other people's as well. In particular, you'll have a very hard time convincing anyone who takes morality to be mind-independent to accept this view. I would find the view much more plausible if the critical level were determined for each person by some other means.

I'd be interested to hear what kind of constraints you'd suggest on choosing levels. If you don't allow any, then I am free to choose a low negative critical level and live a very painful life, and this could be morally good. But that's more absurd than the sadistic repugnant conclusion, so you need some constraints. You seem to want to allow people the autonomy to choose their own critical level but also require that everyone chooses a level that is infinitesimally less than their welfare level in order to avoid the sadistic repugnant conclusion -- there's a tension here that needs to be resolved. But also, I don't see how you can use the need to avoid the sadistic repugnant conclusion as a constraint for choosing critical levels without being really ad hoc.

I think you'd be better arguing for quasi-negative utilitarianism directly or in some other way: you might claim that all positive welfare is only of infinitesimal moral value but that (at least some) suffering is of non-infinitesimal moral disvalue. It's really difficult to get this to work though, because you're introducing value lexicality, i.e. some suffering is infinitely worse than any amount of happiness. This implies that you would prefer to relieve a tiny amount of non-infinitesimal suffering over experiencing any finite amount of happiness. And plausibly you'd prefer to avoid a tiny but non-infinitesimal chance of a tiny amount of non-infinitesimal suffering over a guaranteed experience of any finite amount of happiness. This seems more troubling than the sadistic repugnant conclusion to me. I think you can sweeten the pill though by setting the bar of non-infinitesimal suffering quite high e.g. being eaten alive. This would allow trade-offs between most suffering and happiness as usual (allowing the sadistic repugnant conclusion concerning happiness and the 'lesser' forms of suffering) but still granting lexical superiority to extreme suffering. This strikes me as the most plausible view in this region of population ethical theories, I'd be interested to hear what you think.

Even if you get a plausible version of quasi-negative utilitarianism (QNU) that favours WAS over x-risk, I don't think the conclusion you want will follow easily when moral uncertainty is taken into account. How do you propose to decide what to do under normative uncertainty? Even if you find quasi-negative utilitarianism (QNU) more plausible than classical utilitarianism (CU), it doesn't follow that we should prioritise WAS unless you take something like the 'my favourite theory' approach to normative uncertainty, which is deeply unsatisfying. The most plausible approaches to normative uncertainty (e.g. 'maximise expected choice-worthiness') take both credences in the relevant theories and the value the theories assign to outcomes into account. If the expected value of working on x-risk according to CU is many times greater than the expected value of working on WAS according to QNU (which is plausible), then all else being equal, you need your credence in QNU to be many times greater than your credence in CU. We could easily be looking at a factor of 1000 here, which would require something like a credence < 0.1 in CU, but that's surely way too low, despite the sadistic repugnant conclusion.

A response you might make is that the expected value of preventing x-risk according to CU is actually not that high (or maybe even negative), due to increased chances of s-risks, given that we don't go extinct. But if this is the case, we're probably better off focusing on those s-risks rather than WAS, since they'd have to be really really big to bring x-risk mitigation down to WAS level on CU. It's possible that working on WAS today is a good way to gain information and improve our chances of good s-risk mitigation in the future, especially since we don't know very much about large s-risks and don't have experience mitigating them. But I think it would be suspiciously convenient if working on WAS now turned out to be the best thing for future s-risk mitigation (even on subjective expected value terms given our current evidence). I imagine we'd be better off working on large scale s-risks directly.

[anonymous]:

Thanks for your comments

“In particular, you'll have a very hard time convincing anyone who takes morality to be mind-independent to accept this view. I would find the view much more plausible if the critical level were determined for each person by some other means.” But choosing a mind-independent critical level seems difficult. By what other means could we determine a critical level? And why should that critical level be the same for everyone and the same in all possible situations? If we can’t find an objective rule to select a universal and constant critical level, picking a critical level introduces an arbitrariness. This arbitrariness can be avoided by letting everyone choose for themselves their own critical levels. If I choose 5 as my critical level, and you choose 10 for your critical level, these choices are in a sense also arbitrary (e.g. why 5 and not 4?) but at least they respect our autonomy. Furthermore: I argued elsewhere that there is no predetermined universal critical level: https://stijnbruers.wordpress.com/2018/07/03/on-the-interpersonal-comparability-of-well-being/

“If you don't allow any, then I am free to choose a low negative critical level and live a very painful life, and this could be morally good. But that's more absurd than the sadistic repugnant conclusion, so you need some constraints.” I don’t think you are free to choose a negative critical level, because that would mean you would be ok to have a negative utility, and by definition that is something you cannot want. If your brain doesn’t like pain, you are not free to choose that from now on you like pain. And if your brain doesn’t want to be altered such that it likes pain, you are not free to choose to alter your brain. Neither are you free to invert your utility function, for example.

“You seem to want to allow people the autonomy to choose their own critical level but also require that everyone chooses a level that is infinitesimally less than their welfare level in order to avoid the sadistic repugnant conclusion.” That requirement is merely a logical requirement. If people want to avoid the sadistic repugnant conclusion, they will have to choose a high critical level (e.g. the maximum preferred level, to be safe). But there may be some total utilitarians who are willing to bite the bullet and accept the sadistic repugnant conclusion. I wonder how many total utilitarians there are.

“But also, I don't see how you can use the need to avoid the sadistic repugnant conclusion as a constraint for choosing critical levels without being really ad hoc.” What is ad hoc about it? If people want to avoid this sadistic conclusion, that doesn’t seem to be ad hoc to me. And if in order to avoid that conclusion they choose a maximum preferred critical level, that doesn’t seem ad hoc either.

“you might claim that all positive welfare is only of infinitesimal moral value but that (at least some) suffering is of non-infinitesimal moral disvalue.” As you mention, that also generates some counter-intuitive implications. The variable critical level utilitarianism (including the quasi-negative utilitarianism) can avoid those counter-intuitive implications that result from such lexicalities with infinitesimals. For example, suppose we can bring two people into existence. The first one will have a negative utility of -10, and suppose that person chooses 5 as his critical level. So his relative utility will be -15. The second person will have a utility +30. In order to allow his existence, that person can select a critical value infinitesimally below 15 (which is his maximally preferred critical level). Bringing those two people into existence will become infinitesimally good. And the second person will have a relative utility of 15, which is not infinitesimal (hence no lexicality issues here).
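
Worked through numerically, with a small EPS standing in for the infinitesimal:

```python
# The two-person example above, worked through with a small EPS standing in
# for "infinitesimally below".

EPS = 1e-9

u1, c1 = -10, 5      # first person: utility -10, chosen critical level 5
r1 = u1 - c1         # relative utility: -15

u2 = 30              # second person: utility +30
c2 = 15 - EPS        # critical level infinitesimally below 15
r2 = u2 - c2         # relative utility: just above +15 (not infinitesimal)

print(r1 + r2)       # ~ 1e-09: bringing both into existence is infinitesimally good
```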

“If the expected value of working on x-risk according to CU is many times greater than the expected value of working on WAS according to QNU (which is plausible), then all else being equal, you need your credence in QNU to be many times greater than your credence in CU. We could easily be looking at a factor of 1000 here, which would require something like a credence < 0.1 in CU, but that's surely way too low, despite the sadistic repugnant conclusion.” I agree with this line of reasoning, and the ‘maximise expected choice-worthiness’ idea is reasonable. Personally, I consider this sadistic repugnant conclusion to be so extremely counter-intuitive that I give total utilitarianism a very very very low credence. But if say a majority of people are willing to bite the bullet and are really total utilitarians, my credence in this theory can strongly increase. In the end I am a variable critical level utilitarian, so people can decide for themselves their critical levels and hence their preferred population ethical theory. If more than say 0.1% of people are total utilitarians (i.e. choose 0 as their critical level), reducing X-risks becomes dominant.

“I imagine we'd be better off working on large scale s-risks directly.” I agree with the concerns about s-risk and the level of priority of s-risk reduction, but I consider a continued wild animal suffering for millions of years as the most concrete example that we have so far about an s-risk.

Thanks for the reply!

I agree that it's difficult to see how to pick a non-zero critical level non-arbitrarily -- that's one of the reasons I think it should be zero. I also agree that, given critical level utilitarianism, it's plausible that the critical level can vary across people (and across the same person at different times). But I do think that whatever the critical level for a person in some situation is, it should be independent of other people's well-being and critical levels. Imagine two scenarios consisting of the same group of people: in each, you have the exact same life/experiences and level of well-being, say, 5; you're causally isolated from everyone else; the other people have different levels of well-being and different critical levels in each scenario such that in the first scenario, the aggregate of their moral value (sum of well-being minus critical level for each person) is 1, and in the second this quantity is 7. If I've understood you correctly, in the first case, you should set your critical level to 6 - a, and in the second you should set it to 12 - a, where a is infinitesimal, so that the total moral value in each case is a, so that you avoid the sadistic repugnant conclusion. Why have a different level in each case? You aren't affected by anyone else -- if you were, you would be in a different situation/live a different life, so could maybe justify a different critical level. But I don't see how you can justify that here.
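
Worked through numerically (EPS stands in for the infinitesimal a):

```python
# The two scenarios above worked through, with EPS standing in for the
# infinitesimal a. My well-being is 5 in both; only the others differ.

EPS = 1e-9
my_wellbeing = 5

others_scenario_1 = 1    # aggregate moral value of everyone else in scenario 1
others_scenario_2 = 7    # aggregate moral value of everyone else in scenario 2

c1 = my_wellbeing + others_scenario_1 - EPS     # 6 - a
c2 = my_wellbeing + others_scenario_2 - EPS     # 12 - a

total_1 = others_scenario_1 + (my_wellbeing - c1)   # a
total_2 = others_scenario_2 + (my_wellbeing - c2)   # a

print(c1, c2, total_1, total_2)
# My own life is identical in both scenarios, yet the critical level I am
# supposed to choose differs (6 - a versus 12 - a): that is the worry.
```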

This relates to my point on it seeming ad hoc. You're selecting your critical level to be the number such that when you aggregate moral value, you get an infinitesimal so that you avoid the sadistic repugnant conclusion, without other justification for setting the critical level at that level. That strikes me as ad hoc.

I think you introduce another element of arbitrariness too. Why set your critical level to 12 - a, when the others could set theirs to something else such that you need only set yours to 10 - a? There are multiple different critical levels you could set yours to, if others change theirs too, that give you the result you want. Why pick one solution over any other?

Finally, I don't think you really avoid the problems facing lexical value theories, at least not without entailing the sadistic repugnant conclusion. This is a bit technical. I've edited it to make it as clear as I can, but I think I need to stop now; I hope it makes sense. The main idea is to highlight a trade-off you have to make between avoiding the repugnant conclusion and avoiding the counter-intuitive implications of lexical value theories.

Let's go with your example: 1 person at well-being -10, critical level 5; 1 person at well-being 30, so they set their critical level to 15 - a, so that the overall moral value is a. Now suppose:

(A) We can improve the first person's well-being to 0 and leave the second person at 30, or

(B) We can improve the second person's well-being to 300,000 and leave the first person at -10.

Assume the first person keeps their critical level at 5 in each case. If I've understood you correctly, in the first case, the second person should set their critical level to 25 - b, so that the total moral value is an infinitesimal, b; and in the second case, they should set it to 299,985 - c, so that again, the total moral value is an infinitesimal, c. If b > c or b = c, we get the problems facing lexical theories. So let's say we choose b and c such that c > b. But if we also consider:

(C) We can improve the second person's well-being to 31 and leave the first person at -10

We choose critical level 16 - d. I assume you want b > d, because I assume you want to say that (C) is worse than (A). So if x(n) is the infinitesimal used when we can increase the second person's well-being to n, we have x(300,000) > b > x(31). At some point, we'll have m such that x(m+1) > b > x(m) (assuming some continuity, which I think is very plausible), but for simplicity, let's say there's an m such that x(m) = b. For concreteness, let's say m = 50, so that we're indifferent between increasing the second person's well-being to 50 and increasing the first person's to 0.
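To spell out the pattern behind (A), (B) and (C): with the first person fixed at well-being -10 and critical level 5, a second person at well-being n has to pick critical level n - 15 - x(n) so that the total moral value comes out as the chosen infinitesimal x(n). A small symbolic check, under the same aggregation assumption as before:

```python
# Symbolic check of the pattern above; x_n stands in for the infinitesimal x(n).
import sympy as sp

n, x_n = sp.symbols('n x_n')

first_person = -10 - 5               # relative utility of the first person: -15
second_person = n - (n - 15 - x_n)   # relative utility of the second person: 15 + x(n)

print(sp.simplify(first_person + second_person))  # x_n: the total is always the chosen infinitesimal
```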

Now for a positive integer q, consider:

(Bq) We have q people at positive well-being level k, and the first person at well-being level -10.

Repeating the above procedure (for fixed q, letting k vary), there's a well-being level k(q) such that we're indifferent between (A) and (Bq). We can do this for each q. Then let's say k(2) = 20, k(4) = 10, k(10) = 4, k(20) = 2, k(40) = 1 and so on... (This just gives the same ordering as totalism in these cases; I just chose factors of 40 in that sequence to make the arithmetic nice.) This means we're indifferent between (A) and 40 people at well-being 1 with one person at -10, so we'd rather have 41 people at 1 and one person at -10 than (A). Increasing the number of people beyond 41 allows us to get the same result with well-being levels even lower than 1 -- so this is just the sadistic repugnant conclusion.

You can make it less bad by discounting positive well-being, but then you'll inherit the problems facing lexical theories. Say you discount so that as q (the number of people) tends to infinity, the well-being level at which you're indifferent with (A) tends to some positive number -- say 10. Then 300,000 people at level 10 and one person at level -10 is worse than (A). But that means you face the same problem as lexical theories, because you've traded vast amounts of positive well-being for a relatively small reduction in negative well-being. The lower you let this limit be, the closer you get to the sadistic repugnant conclusion; the higher you let it be, the more your theory looks like lexical negative utilitarianism. You might try to get round this by appealing to something like vagueness/indeterminacy or incommensurability, but these approaches also have counter-intuitive results.
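A rough numeric illustration of the slide described above, assuming the indifference schedule k(q) = 40/q quoted in the example (which reproduces totalism's ordering in these cases); all numbers are illustrative:

```python
# Option (A): one person at 0 and one at 30.  Option (Bq): q people at level k plus
# one person at -10.  With k(q) = 40/q, each (Bq) below has the same total as (A).
def total(option):
    return sum(option)

A = [0, 30]
for q, k in [(2, 20), (4, 10), (10, 4), (20, 2), (40, 1)]:
    Bq = [k] * q + [-10]
    assert total(Bq) == total(A)      # indifferent to (A) at every step

B_repugnant = [1] * 41 + [-10]        # 41 people at well-being 1, one person at -10
print(total(B_repugnant) > total(A))  # True: preferred to (A) -- the sadistic repugnant conclusion
```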

Your theory is an interesting way to avoid the repugnant conclusions, and in some sense it strikes a nice balance between totalism and lexical negative utilitarianism, but it also inherits the weaknesses of at least one of them. And I must admit, I find the complete subjectiveness of the critical levels bizarre and very hard to stomach. Why not just drop the messy and counter-intuitive subjectively set variable critical level utilitarianism and prefer quasi-negative utilitarianism based on lexical value? As we've both noted, that view is problematic, but I don't think it's more problematic than what you're proposing, and I don't think its problems are absolutely devastating.

[anonymous] (5y)

I guess your argument fails because it still contains too much rigidity. For example, the choice of critical level can depend on the choice set: the set of all situations that we can choose between. I have added a section to my original blog post, which I copy here.

<<Suppose an extra person i exists in situation S1 with a positive utility U(i,S1) > 0. However, suppose another situation S2 is available to us (i.e. we can choose situation S2), in which that person i will not exist, but everyone else is maximally happy, with maximum positive utilities. Although person i in situation S1 will have a positive utility, that person can still prefer the situation where he or she does not exist and everyone else is maximally happy. It is as if that person is a bit altruistic and prefers his or her non-existence in order to improve the well-being of others. That means his or her critical level C(i,S1) can be higher than the utility U(i,S1), such that his or her relative utility becomes negative in situation S1. In that case, it is better to choose situation S2 and not let the extra person be born. If instead of situation S2 another situation S2’ becomes available, where the extra person does not exist and everyone else has the same utility levels as in situation S1, then the extra person in situation S1 could prefer situation S1 above S2’, which means that his or her new critical level C(i,S1)’ remains lower than the utility U(i,S1). In other words: the choice of the critical level can depend on the possible situations that are eligible or available to the people who must make the choice about who will exist. If situations S1 and S2 are available, the chosen critical level will be C(i,S1), but if situations S1 and S2’ are available, the critical level can change into another value C(i,S1)’. Each person is free to decide whether or not his or her own critical level depends on the choice set.>>

So suppose we can choose between two situations. In situation A, one person has utility 0 and another person has utility 30. In situation Bq, the first person has utility -10 and, instead of a second person, there is now a huge number of q persons, with very low but still positive utilities (i.e. low levels of k). If the extra people think that preferring Bq is sadistic/repugnant, they can choose higher critical levels such that, in this choice set between A and Bq, situation A should be chosen. If instead of situation A we can choose situations B or C, the critical levels may change again. In the end, what this means is something like: let's present to all (potential) people the choice set of all possible (electable) situations that we can choose between. We then let them choose their preferred situation, and let them determine their own critical levels to obtain that preferred situation given that choice set.
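A hedged sketch of this structure (not the exact procedure, just an illustration): the relative utility is U(i,S) minus a critical level that is allowed to depend on which situations are in the choice set. All names and numbers below are purely illustrative.

```python
# Hedged sketch of choice-set-dependent critical levels; illustrative only,
# not a definitive implementation of the procedure described above.
def relative_utility(utility, critical_level):
    return utility - critical_level

U_i_S1 = 10  # person i's (positive) utility in situation S1

# Choice set {S1, S2}: S2 makes everyone else maximally happy, so person i may set a
# critical level above their own utility, making S1 come out negative for them.
C_i_S1_given_S2 = 15

# Choice set {S1, S2'}: S2' leaves everyone else as in S1, so person i may keep a
# critical level below their utility, so that S1 stays positive.
C_i_S1_given_S2prime = 5

print(relative_utility(U_i_S1, C_i_S1_given_S2))       # -5: better not to add person i
print(relative_utility(U_i_S1, C_i_S1_given_S2prime))  #  5: better to add person i
```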

I'm not entirely sure what you mean by 'rigidity', but if it's something like 'having strong requirements on critical levels', then I don't think my argument is very rigid at all. I'm allowing agents to choose a wide range of critical levels. The point, though, is that given the well-being of all agents and the critical levels of all agents except one, there is a unique critical level that the last agent has to choose if they want to avoid the sadistic repugnant conclusion (or something very similar). At any point in my argument, feel free to let agents choose a different critical level to the one I have suggested, but note that doing so leaves you open to the sadistic repugnant conclusion. That is, I have suggested the critical levels that agents would choose, given the same choice set and given that they have preferences to avoid the sadistic repugnant conclusion.

Sure, if k is very low, you can claim that A is better than Bq, even if q is really really big. But, keeping q fixed, there's a k (e.g. 10^10^10) such that Bq is better than A (feel free to deny this, but then your theory is lexical). Then at some point (assuming something like continuity), there's a k such that A and Bq are equally good. Call this k'. If k' is very low, then you get the sadistic repugnant conclusion. If k' is very high, you face the same problems as lexical theories. If k' is not too high or low, you strike a compromise that makes the conclusions of each less bad, but you face both of them, so it's not clear this is preferable. I should note that I thought of and wrote up my argument fairly quickly and quite late last night, so it could be wrong and is worth checking carefully, but I don't see how what you've said so far refutes it.

My earlier points relate to the strangeness of the choice set dependence of relative utility. We agree that well-being should be choice set independent. But by letting the critical level be choice set dependent, you make relative utility choice set dependent. I guess you're OK with that, but I find that undesirable.

[anonymous] (5y)

I honestly don't see yet how setting a high critical level to avoid the sadistic repugnant conclusion would automatically result in the counter-intuitive lexicality problems of a quasi-negative utilitarianism. Why would striking a compromise be less preferable than going all the way to a sadistic conclusion? (For me your example and calculations are still unclear: what is the choice set? What is the distribution of utilities in each possible situation?)

With rigidity I indeed mean having strong requirements on critical levels. Allowing critical levels to be chosen dependent on the choice set is an example that introduces much more flexibility. But again, I'll leave it up to everyone to decide for themselves how rigidly they prefer to choose their own critical levels. If you find the choice-set dependence of critical levels and relative utilities undesirable, you are allowed to pick your critical level independently of the choice set. That's fine, but we should accept the freedom of others not to do so.

I'm making a fresh comment to make some different points. I think our earlier thread has reached the limit of productive discussion.

I think your theory is best seen as a metanormative theory for aggregating both well-being of existing agents and the moral preferences of existing agents. There are two distinct types of value that we should consider:

prudential value: how good a state of affairs is for an agent (e.g. their level of well-being, according to utilitarianism; their priority-weighted well-being, according to prioritarianism).

moral value: how good a state of affairs is, morally speaking (e.g. the sum of total well-being, according to totalism; or the sum of total priority-weighted well-being, according to prioritarianism).

The aim of a population axiology is to determine the moral value of a state of affairs in terms of the prudential value of the agents who exist in that state of affairs. Each agent can have a preference order on population axiologies, expressing their moral preferences.

We could see your theory as looking at the prudential value of all the agents in a state of affairs (their level of well-being) and their moral preferences (how good they think the state of affairs is compared to other states of affairs in the choice set). The moral preferences, at least in part, determine the critical level (because you take into account moral intuitions, e.g. that the sadistic repugnant conclusion is very bad, when setting critical levels). So the critical level of an agent (on your view) expresses the moral preferences of that agent. You then aggregate the well-being and moral preferences of agents to determine overall moral value -- you're aggregating not just well-being, but also moral preferences, which is why I think this is best seen as a metanormative theory.

Because the critical level is used to express moral preferences (as opposed to purely discounting well-being), I think it's misleading and the source of a lot of confusion to call this a critical level theory -- it can incorporate critical level theories if agents have moral preferences for critical level theories -- but the theory is, or should be, much more general. In particular, in determining the moral preferences of agents, one could (and, I think, should) take normative uncertainty into account, so that the 'critical level' of an agent represents their moral preferences after moral uncertainty. Aggregating these moral preferences means that your theory is actually a two-level metanormative theory: it can (and should) take standard normative uncertainty into account in determining the moral preferences of each agent, and then aggregates moral preferences across agents.

Hopefully, you agree with this characterisation of your view. I think there are now some things you need to say about determining the moral preferences of agents and how they should be aggregated. If I understand you correctly, each agent in a state of affairs looks at some choice set of states of affairs (states of affairs that could obtain in the future, given certain choices?) and comes up with a number representing how good or bad the state of affairs that they are in is. In particular, this number could be negative or positive. I think it's best just to aggregate moral preferences directly, rather than pretending to use critical levels that we subtract from levels of well-being, and then aggregate 'relative utility', but that's not an important point.

I think the choice-set dependence of moral preferences is not ideal, but I imagine you'll disagree with me here. In any case, I think a similar theory could be specified that doesn't rely on this choice-set dependence, though I imagine it might be harder to avoid the conclusions you aim to avoid, given choice-set independence. I haven't thought about this much.

You might want to think more about whether summing up moral preferences is the best way to aggregate them. This form of aggregation seems vulnerable to extreme preferences that could dominate lots of mild preferences. I haven't thought much about this and don't know of any literature on this directly, but I imagine voting theory is very relevant here. In particular, the theory I've described looks just like a score voting method. Perhaps you could place bounds on scores/moral preferences somehow to avoid the dominance of very strong preferences, but it's not immediately clear to me how this could be done justifiably.
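A hedged sketch of the score-voting analogy: each agent assigns a numeric "moral preference" score to each candidate state of affairs, scores are optionally clipped to a bounded range so one extreme preference cannot swamp everyone else, and the state with the highest total wins. All names and numbers are illustrative, not a proposal for where the bounds should actually be set.

```python
# Illustrative score-voting aggregation with optional bounds on scores.
def aggregate(scores_by_agent, bound=None):
    """scores_by_agent: list of dicts mapping state -> score for one agent."""
    totals = {}
    for scores in scores_by_agent:
        for state, score in scores.items():
            if bound is not None:
                score = max(-bound, min(bound, score))  # clip extreme scores
            totals[state] = totals.get(state, 0) + score
    return max(totals, key=totals.get)

agents = [
    {"A": 0, "B": 4},
    {"A": 0, "B": 4},
    {"A": 100, "B": 0},   # one agent with an extreme preference for A
]
print(aggregate(agents))            # 'A': the extreme preference dominates
print(aggregate(agents, bound=5))   # 'B': with bounded scores the mild majority wins
```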

It's worth noting that the resulting theory won't avoid the sadistic repugnant conclusion unless every agent has very very strong moral preferences to avoid it. But I think you're OK with that. I get the impression that you're willing to accept it in increasingly strong forms, as the proportion of agents who are willing to accept it increases.

[anonymous] (5y)

I very much agree with the points you make. About choice-set dependence: I'll leave that up to every person to decide for themselves. For example, if everyone strongly believes that the critical levels should be choice-set independent, then fine, they can choose independent critical levels for themselves. But the critical levels indeed also reflect moral preferences, and can include moral uncertainty. So, for example, someone with a strong credence in total utilitarianism might lower his or her critical level and make it choice-set independent.

About the extreme preferences: I suggest people can choose a normalization procedure, such as variance normalization (cf. Owen Cotton-Barratt: http://users.ox.ac.uk/~ball1714/Variance%20normalisation.pdf, and here: https://stijnbruers.wordpress.com/2018/06/06/why-i-became-a-utilitarian/).
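A minimal sketch of what variance normalization could look like here, assuming the simple scheme of rescaling each agent's scores over the options to zero mean and unit variance before summing; the numbers are purely illustrative.

```python
# Illustrative variance normalisation of per-agent moral preference scores.
import statistics

def variance_normalise(scores):
    """Rescale one agent's scores over the options to mean 0 and standard deviation 1."""
    mean = statistics.mean(scores.values())
    sd = statistics.pstdev(scores.values())
    return {option: (s - mean) / sd for option, s in scores.items()}

agents = [
    {"A": 0, "B": 4},
    {"A": 0, "B": 4},
    {"A": 100, "B": 0},   # extreme preference for A
]
totals = {"A": 0.0, "B": 0.0}
for scores in agents:
    for option, s in variance_normalise(scores).items():
        totals[option] += s
print(totals)  # after normalisation each agent counts equally, so B comes out ahead
```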

"It's worth noting that the resulting theory won't avoid the sadistic repugnant conclusion unless every agent has very very strong moral preferences to avoid it. But I think you're OK with that. I get the impression that you're willing to accept it in increasingly strong forms, as the proportion of agents who are willing to accept it increases." Indeed!

Great -- I'm glad you agree!

I do have some reservations about (variance) normalisation, but it seems like a reasonable approach to consider. I haven't thought about this loads though, so this opinion is not super robust.

Just to tie it back to the original question: whether we prioritise x-risk or WAS will, obviously, depend on the agents who exist. Because x-risk mitigation is plausibly much more valuable on totalism than WAS mitigation is on other plausible views, I think you need almost everyone to have a very, very low (in my opinion, unjustifiably low) credence in totalism for your conclusion to go through. In the actual world, I think x-risk still wins. As I suggested before, it could be the case that the value of x-risk mitigation is not that high, or even negative due to s-risks (this might be your best line of argument for your conclusion), but that suggests prioritising large-scale s-risks. You rightly pointed out that millions of years of WAS is the most concrete example of an s-risk we currently have. It seems plausible that other, larger s-risks could arise in the future (e.g. large-scale sentient simulations), which, though admittedly speculative, could be really big in scale. I tend to think general foundational research aiming at improving the trajectory of the future is more valuable to do today than WAS mitigation. What I mean by 'general foundational research' is not entirely clear, but thinking about and clarifying that, for instance, seems more important than WAS mitigation.

[anonymous] (5y)

I'm curious as to how this is meant to work for moral patients that do not have rational agency and have no autonomy, such as sentient severely disabled beings or very stupid animals (newborn babies, cod, mice, etc.). How can we respect the autonomy of such patient non-agents in the way you suggest? If you say that they all get assigned a fixed positive critical level, then you would be subject to the problems of fixed critical level theory, which you have said to be devastating.
