
The aim of this post is to introduce the concept of conditional interests. What I focus on is the following claim, its justification and its implications, including for EA priorities:

Only Actual Interests: We accomplish no good by creating and then satisfying an interest, all else equal, because interests give us reasons for their satisfaction, not for their existence or satisfaction over their nonexistence.

This could be described as an asymmetric "interest-affecting view", and the procreation asymmetry follows, because individuals who wouldn't otherwise exist have no interests to satisfy. I think such a view accords best with our intuitions about personal tradeoffs.

It therefore (in theory) allows individuals to make personal tradeoffs between experiences of pleasure and suffering as normally understood, unlike strong negative hedonistic utilitarianism, but it also doesn't give reasons to individuals who have no interest in wireheading, Nozick's experience machine or psychoactive drugs to subject themselves to these. If you don't have an interest in (further) pleasure for its own sake at a given moment, you are not mistaken about this, despite the claims of classical utilitarians. As such, while strong negative utilitarianism may, to some, be counterintuitive if it seems to override interests (see responses to this objection by Brian Tomasik and Simon Knutsson), classical utilitarianism effectively does the same, because it can prioritize the creation and satisfaction of new interests (in the same individual or others) over the satisfaction of actual interests. So, in my view, if negative hedonistic utilitarianism (in any form) is wrong because it overrides individual interests, so is classical utilitarianism. However, I think we can do better than both strong negative hedonistic utilitarianism and classical utilitarianism, which is the point of this post.

Furthermore, by seeing value in the creation of new interest holders just to satisfy their interests and sometimes prioritizing this over interests that would exist anyway (in a narrow or wide sense), classical utilitarianism (and any other theory prescribing this) treats interest holders as mere receptacles or vessels for value in a way that Only Actual Interests prevents. There have been different statements of this objection, but I think this is the clearest one.

The claim Only Actual Interests is basically from Johann Frick’s paper and thesis, which defend the procreation asymmetry. I recently wrote a post on the forum that referred to his work, but this post considers essentially his approach, its justifications and its implications. Christoph Fehige's antifrustrationism, developed earlier, is also basically the same, but is concerned specifically with preferences.

In the first section, I give definitions and in the second, I state some claims. In the third section, I list a few basic implications. In the fourth section, I describe the relationship to Buddhist axiology or tranquilism. In the fifth section, I defend the claim, primarily through examples with our common sense understanding of interests. In the sixth section, I consider some other more abstract theoretical implications. In the seventh section, I describe implications for EA priorities, starting from 80,000 Hours' cause analyses; the main conclusion is that existential risks should receive less priority. In the last section, I include some thoughts about the possibility of prioritizing (an individual's) current interests over (their) future ones.


Definitions

Outcome: The entire actual history of all that is ontological (universe, multiverse, possibly things beyond the physical), past, present and actual future.

Interest, interest holder: An interest is a value held by some holder (the interest holder) that can be more or less satisfied (according to some total order), such that, for the interest holder, it is better that it be more satisfied than less, all else equal.

Note: a value could a priori be an interest with itself or the universe as its holder. We might say the value of pleasure itself has an interest in further pleasure, or the universe has an interest in further pleasure, although I think this is wrong; see the discussion following Only Actually Conscious Interests in the Claims section.

Actual interest: An interest is actual in a given outcome if it is held in that outcome.

Conscious interest: An interest is conscious if its satisfaction or unsatisfaction can be experienced consciously by the holder in some outcome.

Actually conscious interest: An interest is actually conscious in a given outcome if its satisfaction or unsatisfaction is experienced consciously by the holder in that outcome.

Actually conscious interests are actual interests and conscious interests.

Experiential interest: An interest is experiential if its degree of satisfaction is determined solely by the conscious experiences of its holder.

Pleasure: A conscious experience is pleasurable if the experience comes with a conscious interest in itself over its absence for its own sake, and this experience is experienced by the holder of this interest. Pleasure is the conscious experience and the conscious interest of the holder.

Suffering: A conscious experience involves suffering if the experience comes with a conscious interest in its absence over the experience itself, and this experience is experienced by the holder of this interest. Suffering is the conscious experience and the conscious interest of the holder.


Claims

This is just a set of claims of interest for this post. I am not actually making all of these claims here.

Experientialism: The only interests that matter are the holders’ interests in their own conscious experiences.

Only Conscious Interests: The only interests that matter are conscious interests.

Hedonism: Experientialism and Only Conscious Interests are true, and specifically, pleasure and suffering are the only kinds of interests that matter.

Hedonism is one of the main claims of hedonistic utilitarianism, including classical utilitarianism.

Negative Hedonism: Experientialism (or Hedonism) and Only Conscious Interests are true, and specifically, suffering is the only kind of interest that matters.

Negative Hedonism is one of the main claims of (strong) negative hedonistic utilitarianism.

Now, the main claim of this post, restated:

Only Actual Interests: Interests provide reasons for their further satisfaction, but not their existence or satisfaction over their nonexistence.

In particular, an interest is neither satisfied nor unsatisfied in an outcome if it does not occur in that outcome, and this outcome is not worse than one in which the interest occurs, all else equal. (I don't say that only actual interests matter, since that's either confusing or inaccurate.)

You could call this an "interest-affecting view", and this could be interpreted in a narrow or a wide way. Under a narrow view, we wouldn't compare the degree of satisfaction of different interests in different outcomes, only the degree of satisfaction of the same interests common to different outcomes. I'm not sure if such a view can be made both transitive and independent of irrelevant alternatives, although we might also reject these requirements in the first place.

Under a wide view, we might say that it's better for interest X to exist and be satisfied to degree x than for interest Y to exist and be satisfied to degree y, especially if X and Y are interests of the same kind (e.g. both are interests in pleasure, both are interests in not suffering, both are interests in gaining knowledge), and x > y, so that X would be satisfied to a greater degree than Y.

See the nonidentity problem for some discussion of person-affecting views, in which the interests at stake are the wellbeing for two different people, exactly one of whom will be born. Should we prefer for a person with a better life to be born than a person with a worse life, even if they would be different people? If yes, we should reject a purely narrow person-affecting view.

We might also want to restrict further to interests that are held presently or in the actual future, excluding past interests.

Only Actually Conscious Interests: Interests provide reasons for their further satisfaction when they are consciously experienced and through their conscious experience by their holders, but they don’t provide reasons for their existence or satisfaction over their nonexistence.

In particular, an interest is neither satisfied nor unsatisfied in an outcome if the interest (or its satisfaction/unsatisfaction) is not experienced, and this outcome is not worse than one in which the interest (or its satisfaction/unsatisfaction) is experienced, all else equal. As such, the holders of the interests may as well be the conscious experiences themselves. Or, the universe as a whole may be the holder of conscious interests, but the interests are (in practice, not a priori) localized: in locations where there are no conscious experiences, there are no conscious interests to be satisfied. To illustrate, what you normally understand to be my conscious interests for my own conscious states are interests that are directed at conscious states in the parts of space that will be occupied by what can vaguely be defined as my body. If "I" didn't experience this interest, neither would the universe as a whole.


Some basic implications

In this section, I list a few simple implications of Only Actual Interests.

1. That I could induce a craving in someone and then satisfy it is not a reason for me to actually do so.

2. That someone would have their interests satisfied or be happy or have a good life is not a reason to bring them into existence, although there may be other reasons. That they will almost certainly have some unsatisfied interests is a reason to not bring them into existence, but the reasons to do so could be stronger in practice. This is the procreation asymmetry. In particular, you would never be required to sacrifice your own wellbeing (even to the point of negative hedonic wellbeing) to bring new individuals into existence, all else equal, although, again, all else is rarely equal.

3. If someone has no interest in psychoactive substances (or, specifically, the resulting experiences), that they might enjoy them is not on its own a reason to try to convince them to take them.


Buddhist axiology or tranquilism

If we combine Only Actually Conscious Interests with Experientialism or even Hedonism, we don’t actually get Negative Hedonism (one of the main claims of negative hedonistic utilitarianism). Pleasure and suffering can both matter, and I am not claiming a conscious interest in further pleasure is necessarily an instance of suffering as I’ve defined these terms, but that the absence of this conscious interest is never in itself bad, while its unsatisfaction is. So, if someone has an unsatisfied interest in further pleasure, this is worse than not having this interest at all, even if they are happy overall, and it's also worse than not increasing their pleasure to satisfy this interest.

This does lead to an asymmetry between pleasure and suffering, but not one on which pleasure doesn't count at all:

Asymmetry between pleasure and suffering: In the absence of an interest in further pleasure, there's no reason to increase pleasure, but suffering by its very definition implies an interest in its absence, so there is a reason to prevent it.

This is effectively Buddhist axiology or tranquilism, framed slightly differently.

In particular, if you’re a utilitarian who also accepts Only Actually Conscious Interests and Experientialism (or Hedonism), you’re basically a negative preference utilitarian who cares only about the conscious satisfaction/unsatisfaction of preferences about conscious experiences. This can include the conscious preference for more pleasure.

Negative preference utilitarians see the unsatisfaction of a preference as worse than its nonexistence, and the complete satisfaction of a preference no better than its nonexistence. If preferences are interests, with Only Actual Interests, they could potentially provide reasons for their satisfaction, but not their existence.


Why Only Actual Interests?

Hedonistic consequentialists defend something like Only Conscious Interests, or something at least as strong. If we can convince them of Only Actual Interests, then they should accept Only Actually Conscious Interests. Or, we can convince them directly of Only Actually Conscious Interests.

We might try to shift the burden of proof: If, according to Only Conscious Interests, an interest matters only if it can be experienced consciously, why should it matter (i.e. detract from an outcome) if it is not actually experienced at all?

However, rather than just shifting the burden of proof, we can defend Only Actual Interests by analogy and generally (not just to those who accept Hedonism), based on the examples Frick gives (the first two), and one more of my own:

1. That you've made a promise to someone is a reason to keep the promise, but the fact that you could keep a promise is not in itself a reason to make it in the first place. Promises provide reasons to be kept, but not reasons to be made.

2. That you have the gear and other means necessary to climb Mount Everest successfully doesn't give you a reason to actually do it; you must already (or either way, expect to) have an interest in doing it.

3. That I could induce someone to want and then buy a product (e.g. through marketing) is not a reason for me to actually do so.

I find it pretty intuitive that this is how interests should work. The claims Frick defends are slightly more general than mine in this post, using "normative standard" instead of "interest" and "bearers" instead of "holders". The rejection of the Transfer Thesis for each interest F is basically equivalent to a claim similar to Only Actual Interests:

No Transfer: Interests provide reasons for their further satisfaction, but neither an interest nor its satisfaction provides reasons for the existence of that interest's holder over its nonexistence.

Transfer Thesis: If there is reason to increase the extent to which F is instantiated amongst existing potential bearers, there is also reason to increase the extent to which F is instantiated by creating new bearers of F.

People value a lot of things, but it doesn't seem like these things justify the existence of people themselves. Is a world worse for not having value X if no one is around to miss it? If not, why would adding people just to achieve X do any good? Take X to be any value from that list, "Abundance, achievement, adventure, affiliation, altruism, apatheia, art, asceticism, austerity, autarky, authority, autonomy, beauty, benevolence, bodily integrity, challenge, collective property, commemoration, communism, community, compassion, competence, competition, competitiveness, complexity, comradery, conscientiousness, consciousness, contentment, cooperation, courage ..."



Other theoretical implications

1. The point of a hedonium shockwave, if any, would be to eliminate otherwise unsatisfied interests, not to create happiness. The prevention of future interests by destruction could be a good thing, generally. However, both are wildly speculative, and there are good consequentialist and nonconsequentialist reasons to not pursue either, e.g. for consequentialist ones, moral cooperation and trade, given how much opposition there would be to both. Value may also be more complex than Hedonism allows.

2. It avoids the Repugnant Conclusion, which is a consequence of classical utilitarianism. The Repugnant Conclusion comes from Parfit's Mere Addition argument. Start with a population A in which everyone has a very good life worth living; form A+ by adding extra people whose lives are worth living but worse than those in A; form Divided B from A+ by equalizing welfare between the two groups at a high enough level; and form B by merging the two groups of Divided B into one. The following relations are claimed to hold:

A ≤ A+ ≤ Divided B = B,

and so by transitivity and the independence of irrelevant alternatives, A ≤ B.

The first step from A to A+ (Mere Addition) supposedly follows because the extra lives are worth living, so adding them can't make the situation worse; the step from A+ to Divided B (Non-Anti-Egalitarianism) supposedly follows if we ensure the average welfare is high enough and we aren't so anti-egalitarian that we think a total loss to the best off individuals can only be made up for by a much larger total gain for the worst off individuals; and the last step actually doesn't do anything, since Divided B and B are identical populations (the division is only illustrative).

Then, doing the same but adding a very large population of lives barely worth living in A+, the welfare in the resulting B would be very flat and close to 0. So, a very large population of lives barely worth living would be better than a small population of very good lives.
For some defenses of the Repugnant Conclusion, see "In defense of repugnance" by Michael Huemer.

The Repugnant Conclusion is avoided in two possibly different ways:

a. It denies the premise that there could be positive existences.

b. Assuming there are positive existences (using a different measure of value than one based on conditional interest satisfaction), adding a population of lives barely worth living is in fact bad, so Mere Addition (the step from A to A+) is false.

To further defend this last point, under a wide view, the step from A to A+ of adding a population of lives barely worth living is equivalent to also making everyone in A swap places with an equal number of the extra individuals in A+. That is, starting from A, we make everyone in A as badly off as the extra individuals in A+ would be, and add extra individuals with the same wellbeing as the originals in A, and even more at the lower level. From the point of view of the original individuals in A, this could make A+ worse, and adding the extra individuals would not compensate, because they have no interests in being brought into existence.

3. By denying the possibility of positive existences, it avoids Arrhenius's major impossibility theorems like this one (I think this is probably the strongest statement, since it only assumes ordinal, but interpersonally comparable, welfare). Alternatively, if we do interpret positive existence in hedonistic terms or in terms of preferences for continued life, then it violates Non-Sadism and implies the Sadistic Conclusion: it can be worse to add a population of positive existences than one of negative existences (usually with a much larger population of positive existences). In response, the Sadistic Conclusion might not be so bad, at least compared to the Repugnant Conclusion, and even more plausibly so since we've already accepted the procreation asymmetry and rejected Mere Addition. (Aside: Arrhenius was a negative utilitarian of some kind in the past; I don't know if he still is.)

4. Whether death is good or bad in itself (ignoring effects on others, which we should not ignore) depends on the nature of interests we count and how we count them. If we accept Only Actually Conscious Interests, then death would be good in itself (again, ignoring effects on others). If we prioritize an individual's current interests over their future ones, then their interests in continuing to live would be given greater weight, see the last section for some thoughts on this.


Implications for EA priorities

In this section, I describe how we might rerank the different cause areas in 80,000 Hours' list here.

1. We should reject Bostrom's astronomical waste argument and give less priority to preventing extinction. That does not mean we have no reasons to care about the (far) future or prevent extinction, but the fact that future humans who would not otherwise exist would be happy (rather than not exist, or fewer of them exist) is not a reason for intervention. This significantly reduces the value of working to prevent existential risks, although they may still be very important, if we think our continued existence would be sufficiently helpful in expectation to, say, wild animals (if they would also continue to exist after our extinction), or aliens. If you don't think extinction is much worse than almost everyone dying, you can see how 80,000 Hours' tool reranks the cause areas. Assuming you answer the previous questions in a way to not cause reranking (although you may very well disagree with the underlying assumptions), answering "(C) Not more than twice as bad" to question 4 reranks the list as follows:

Re-ranked list:
1. Global priorities research - 26
2. Promoting effective altruism - 25 ⇩ (-1 point)
3. Risks posed by artificial intelligence - 23.5 ⇩ (-3.5 points)
4. Factory farming - 23
5. Health in poor countries - 21
6. Reducing tobacco use in the developing world - 20
7. Nuclear security - 20 ⇩ (-3 points)
8. Land use reform - 20
9. Biosecurity - 20 ⇩ (-3 points)
10. Climate change (extreme risks) - 18 ⇩ (-2 points)
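
As a rough sketch of the arithmetic involved (my own illustration, not 80,000 Hours' actual model; the baseline scores are simply back-calculated from the re-ranked scores plus the listed point changes), the reranking amounts to subtracting the point changes and re-sorting:

```python
# A sketch of the reranking arithmetic, not 80,000 Hours' actual model.
# Baseline scores are back-calculated from the re-ranked list above
# (re-ranked score plus the listed point change).
baseline_scores = {
    "Risks posed by artificial intelligence": 27.0,
    "Global priorities research": 26.0,
    "Promoting effective altruism": 26.0,
    "Factory farming": 23.0,
    "Nuclear security": 23.0,
    "Biosecurity": 23.0,
    "Health in poor countries": 21.0,
    "Reducing tobacco use in the developing world": 20.0,
    "Land use reform": 20.0,
    "Climate change (extreme risks)": 20.0,
}

# Point changes from answering "(C) Not more than twice as bad" to question 4,
# as listed above; causes not listed here are unchanged.
point_changes = {
    "Promoting effective altruism": -1.0,
    "Risks posed by artificial intelligence": -3.5,
    "Nuclear security": -3.0,
    "Biosecurity": -3.0,
    "Climate change (extreme risks)": -2.0,
}

reranked = {cause: score + point_changes.get(cause, 0.0)
            for cause, score in baseline_scores.items()}

for rank, (cause, score) in enumerate(
        sorted(reranked.items(), key=lambda item: -item[1]), start=1):
    print(f"{rank}. {cause} - {score:g}")
# Ties (here, several causes at 20 points) may print in a different order than
# 80,000 Hours' list, which breaks ties in its own way.
```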

Question 4 is

Question 4: Here’s two scenarios:
A nuclear war kills 90% of the human population, but we rebuild and civilization eventually recovers.
A nuclear war kills 100% of the human population and no people live in the future.
How much worse is the second scenario?

If you want to avoid reranking before question 4, you should answer 1. (A), 2. (A) and 3. (B).

Note that AI risk remains above both Factory farming and Health in poor countries. The far future can still indeed be overwhelmingly important, and we may expect AI to shape it even if we don't go extinct. Furthermore, if we do go extinct, that is a lot of early deaths. However, they didn't provide options which go further than "(C) Not more than twice as bad", and your other answers to other questions can influence the rankings. Even if we ignore future generations, it might be better for everyone to go extinct than for 90% of the population to die out, because the surviving 10% may have very bad lives in such an outcome.

Furthermore, if the probability of extinction is around 1% or less (80,000 Hours' best guess seems to be 1-15% in the next 50 years, according to question 3., with answer (B)), then the non-existential risk causes should go up in priority, since there's a greater chance that the work we do for those causes isn't wasted. E.g. ending factory farming and then us going extinct immediately after isn't much better for factory farmed animals than us just going extinct, because factory farming will end anyway if we do go extinct (although we're likely to achieve considerable progress and prevent a lot of suffering up until extinction if we do work on factory farming).
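
As a toy illustration of this point (my own made-up numbers, not 80,000 Hours' estimates), you can think of the value of work on a non-existential cause as the value it accrues before any extinction, plus long-run value that is only realized if we survive:

```python
# Toy expected-value model with made-up numbers (not 80,000 Hours' estimates):
# work on a non-existential cause delivers some value even if we go extinct soon
# (e.g. suffering prevented before extinction), plus long-run value only if we survive.
def expected_value(p_extinction, value_before_extinction, long_run_value):
    return value_before_extinction + (1.0 - p_extinction) * long_run_value

# 80,000 Hours' rough range for extinction risk over the next 50 years (1-15%):
for p in (0.01, 0.15):
    ev = expected_value(p, value_before_extinction=20.0, long_run_value=80.0)
    print(f"P(extinction) = {p:.0%}: expected value = {ev:g}")
# With 1% risk, almost none of the long-run value is wasted in expectation; with
# 15% risk, noticeably more is, so lower extinction risk makes non-existential
# causes look relatively more valuable.
```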


2. There's also a question of the degree to which death is bad. If death seems less bad, then this could further reduce the priority to existential risks. This might also have an effect on the value of some, but not all global health and poverty interventions. If death is bad, I think its badness is unlikely to be roughly proportional to the number of years of life lost, since existing interests are likely to change for many people as they age, but GiveWell doesn't explicitly use such a measure anymore, anyway (see here and here), and I don't know to what degree analysts rely on such an intuition. With Experientialism or Hedonism, death in itself is not bad, but the process of dying and the impacts on loved ones are of course often very bad, perhaps especially an unexpected early death (but if early and later deaths are equally bad, then postponing death doesn't look very good in hedonistic terms). Overall, I don't think global health and poverty as a cause area would necessarily look worse since many of the best interventions do not derive most of their value from life extension. Existential risk cause areas would probably look worse if we thought before that the badness of extinction came primarily from early deaths (and astronomical waste).


3. Global health and poverty interventions might decrease the rate of population growth, and this might be in itself good. Family planning interventions and education in developing countries might look better than otherwise, specifically, for this reason.


4. We should reject the logic of the larder. That is, if animals bred and used for human purposes would have good lives, this is not a reason to breed and use them in the first place, and the fact that they will almost certainly have unsatisfied interests is a reason to not do so. There could be other reasons for their breeding and use, but they need to be even stronger.


Prioritizing current interests over future ones?

I've been wondering lately if there's a plausible consequentialist theory (or more generally, theory of value) which assigns more value to the immediate satisfaction of an individual's current interests over the satisfaction of their future ones in such a way as to be compatible with common sense notions of non-paternalism and consent. In this way, could violating an individual's interests now to better satisfy their own future interests be usually bad in itself? I think this would still be compatible with our understanding of impartiality. We could just use some kind of discounting of interests within individuals, but I'm not sure if this quite does it.

However, if we don't think people's personal identities persist over time (which seems likely to me), this wouldn't mean much. If we don't think they persist, nothing can be paternalistic, since there would be no personal tradeoffs, only interpersonal tradeoffs.

If we also give greater weight to current interests than to future interests generally, not just within individuals, our theory could also look much more deontological in practice, but it would be harder to call this impartial, since it gives less weight to the interests of future people. It might also be difficult to ground from an impartial perspective, because of the relativity of simultaneity. There's no such physical obstacle for personal tradeoffs because we can use an individual's own frame of reference.

If we're not careful, there might be issues with dynamic consistency: the decisions that look best now could systematically look worse in the future, and you would regret them, even with perfect certainty ahead of time.

Comments
suffering by its very definition implies an interest in its absence, so there is a reason to prevent it.

If a mind exists and suffers, we'd think it better had it not existed (by virtue of its interest in not suffering). And if a mind exists and experiences joy, we'd think it worse had it not existed (by virtue of its interest in experiencing joy). Prima facie this seems exactly symmetrical, at least as far as the principles laid out here are concerned.

Depending on exactly how you make your view precise, I'd think that we'd either end up not caring at all about whether new minds exist (since if they didn't exist there'd be no relevant interests), or balancing the strength of those interests in some way to end up with a "zero" point where we are indifferent (since minds come with interests in both directions concerning their own existence). I don't yet see how you end up with the asymmetric view here.

(Note: My judgments between outcomes here should be qualified with "ignoring other reasons", specifically reasons that don't come from interests or their satisfaction for the existence of those interests or interest holders over their nonexistence.)

Ok, I think I first have to make my claim stronger (actually capturing the first part of its first statement in the intro):

Only Actual Interests: Interests provide reasons for their further satisfaction, but neither an interest nor its satisfaction provides reasons for the existence of that interest over its nonexistence.

It follows from this that a mind with no interests at all is no worse than a mind with interests, regardless of how satisfied its interests might have been. In particular, a joyless mind with no interest in joy is no worse than one with joy. A mind with no interests isn't much of a mind at all, so I would say that this effectively means it's no worse for the mind to not exist.

It would also follow that nonexistence of the mind is not worse, from the universal rejection of the Transfer Thesis (I was mistaken about its equivalence to Only Actual Interests). In my language:

No Transfer: Interests provide reasons for their further satisfaction, but neither an interest nor its satisfaction provides reasons for the existence of that interest's holder over its nonexistence.


On suffering,

Only Actual Interests at least says it's no worse for a mind to not have an interest in the absence of suffering, and hence to not suffer, than it is to suffer, because Suffering implies an interest in its absence, by my definition. Similarly, No Transfer would imply it's not worse for the mind to not exist.

There are a few ways to complete the argument that come to mind:

1. If a mind has a constant interest in not suffering which is satisfied to the degree it is not suffering, then not suffering at all would fully satisfy this interest, and not existing at all would be no worse, according to No Transfer.

2. If not, to start, we should assume that if a mind is suffering, then it would be better if it were suffering less but still suffering (or if another mind existed in its place and was suffering less), because, e.g., its interest in not suffering would be more satisfied or its interest in not suffering would not be as strong. In particular, its interest in not suffering through its given experience would be completely unsatisfied in both cases, but stronger in the case of worse suffering.

Then, denote by X an outcome in which the mind is suffering, and by Y the outcome in which the mind is not suffering (or does not exist). If we can use the independence of irrelevant alternatives (IIA), transitivity, completeness, and claim the existence of a hypothetical outcome Z in which the mind (or a replacement) would be suffering less, then we would get Y ≻ X. To start, by our choice of Z, we have Z ≻ X. Since the mind in Z is still suffering, its interest in not suffering exists there but is unsatisfied, whereas in Y it does not exist at all, so Y is not worse than Z, and we would get Y ⪰ Z by completeness; then by transitivity and IIA, Y ≻ X, so it would be better for the mind to not suffer (or not exist). Unfortunately, if there is a minimum amount of suffering in a suffering experience (over all hypothetical outcomes), this argument wouldn't apply to it.
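
Schematically, the same inference, using the labels X, Y and Z from above (just a restatement of the argument, not an extra premise):

$$Z \succ X \quad \text{(assumption 2: suffering less is better)}$$

$$Y \succeq Z \quad \text{(the absent interest is not worse than the unsatisfied one, by Only Actual Interests, via completeness)}$$

$$\therefore \; Y \succ X \quad \text{(transitivity and IIA)}$$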

Only Actual Interests: Interests provide reasons for their further satisfaction, but neither an interest nor its satisfaction provides reasons for the existence of that interest over its nonexistence.
It follows from this that a mind with no interests at all is no worse than a mind with interests, regardless of how satisfied its interests might have been. In particular, a joyless mind with no interest in joy is no worse than one with joy. A mind with no interests isn't much of a mind at all, so I would say that this effectively means it's no worse for the mind to not exist.

If you make this argument that "it's no worse for the joyful mind to not exist," you can make an exactly symmetrical argument that "it's not better for the suffering mind to not exist." If there was a suffering mind they'd have an interest in not existing, and if there was a joyful mind they'd have an interest in existing.

In either case, if there is no mind then we have no reason to care about whether the mind exists, and if there is a mind then we have a reason to act---in one case we prefer the mind exist, and in the other case we prefer the mind not exist.

To carry your argument you need an extra principle along the lines of "the existence of unfulfilled interests is bad." Of course that's what's doing all the work of the asymmetry---if unfulfilled interests are bad and fulfilled interests are not good, then existence is bad. But this has nothing to do with actual interests, it's coming from very explicitly setting the zero point at the maximally fulfilled interest.

If you make this argument that "it's no worse for the joyful mind to not exist," you can make an exactly symmetrical argument that "it's not better for the suffering mind to not exist."

If I make these claims without argument, yes, but I am giving arguments for the first and against the second, based on a more general claim which is intuitively asymmetric and a few intuitive assumptions about the ordering of outcomes, which together imply "the existence of unfulfilled interests is bad", but not on their own.

The negation of "neither an interest nor its satisfaction provides reasons for the existence of that interest over its nonexistence" would mean pulling interests up by their bootstraps (for at least one specific interest):

"an interest or its satisfaction provides reasons for the existence of that interest over its nonexistence". I think this is far less plausible, see my section "Why Only Actual Interests".

The symmetric claim also seems less plausible:

"neither an interest nor its unsatisfaction provides reasons for the nonexistence of that interest over its existence"

For example, the fact that you would fail to keep a promise is indeed a reason not to make it in the first place. Or, the fact that you would not climb Mount Everest successfully is a reason to not try to do so in the first place.

Fehige defends the asymmetry between preference satisfaction and frustration on rationality grounds. This is my take:

Let's consider a given preference from the point of view of a given outcome after choosing it, in which the preference either exists or does not:

1. The preference exists:

a. If there's an outcome in which the preference exists and is more satisfied, and all else is equal, it would have been irrational to have chosen this one (over it, and at all).

b. If there's an outcome in which the preference exists and is less satisfied, and all else is equal, it would have been irrational to have chosen the other outcome (over this one, and at all).

c. If there's an outcome in which the preference does not exist, and all else is equal, the preference itself does not tell us if either would be irrational to have chosen.

2. The preference doesn't exist:

a. If there's an outcome in which the preference exists, regardless of its degree of satisfaction, and all else equal, the preference itself does not tell us if either would have been irrational to have chosen.

So, all else equal besides the existence or degree of satisfaction of the given preference, it's always rational to choose an outcome in which the preference does not exist, but it's irrational to choose an outcome in which the preference exists but is less satisfied than in another outcome.

(I made the same argument here, but this is a cleaner statement.)

Michael wrote this:

Asymmetry between pleasure and suffering: In the absence of an interest in further pleasure, there's no reason to increase pleasure, but suffering by its very definition implies an interest in its absence, so there is a reason to prevent it.

And you write:

If a mind exists and suffers, we'd think it better had it not existed (by virtue of its interest in not suffering). And if a mind exists and experiences joy, we'd think it worse had it not existed (by virtue of its interest in experiencing joy)

A question here is whether "interests to not suffer" are analogous to "interests in experiencing joy." I believe that Michael's point is that, while we cannot imagine suffering without some kind of interest to have it stop (at least in the moment itself), we can imagine a mind that does not care for further joy.

So maybe we could sum up the claim that there's an asymmetry in this way: More suffering is always worse; more happiness isn't always better.

Let's say I see a cute squirrel and it makes me happy. Is it bad that I'm not in virtual reality experiencing the greatest joys imaginable? Maybe it would be bad, if I was the sort of person who had the life goal to experience as much pleasure as possible. But what if I just enjoy going for walks in the real world and occasionally encountering a squirrel? For whom, and why exactly, is it bad that I'm "only" glad and excited to see the squirrel, as opposed to being blissed out of my mind in virtual reality?

I believe that Michael's point is that, while we cannot imagine suffering without some kind of interest to have it stop (at least in the moment itself), we can imagine a mind that does not care for further joy.

The relevant comparison, I think, is between (1) someone who experiences suffering and wants this suffering to stop and (2) someone who experiences happiness and wants this happiness not to stop. It seems that you and Michael think that one can plausibly deny only (2), but I just don't see why that is so, especially if one focuses on comparisons where the positive and negative experiences are of the same intensity. Like Paul, I think the two scenarios are symmetrical.

[EDIT: I hadn't seen Paul's reply when I first posted my comment.]

(2) someone who experiences happiness and wants this happiness not to stop.

Some sleeping pills can give you a positive feeling of intense comfort. And yet people fall asleep on them rather than fighting their tiredness in order to enjoy the feeling a bit longer. I suppose you can point out an analogous-seeming case with depressed people who lack the willpower to improve anything about their low mood. But in the case of depression, there's clearly something broken about the system. Depression does not feel reflectively stable to depressed people (at least absent unfortunate beliefs like "I'm bad and I deserve this"). In the case of me going to sleep rather than staying up, I can be totally reflectively comfortable with going to sleep. Is this example confounded? It seems to me that in order to decide that it's confounded, you have to import some additional intuition(s) which I simply don't share with you. (The same might be true for arguing that it's not confounded, but I always point out that people with different foundational intuitions may not necessarily end up in ethical agreement.)



Interesting example. I have never taken such pills, but if they simply intensify the ordinary experience of sleepiness, I'd say that the reason I (as a CU) don't try to stay awake is that I can't dissociate the pleasantness of falling asleep from actually falling asleep: if I were to try to stay awake, I would also cease to have a pleasant experience. (If anyone knows of an effective dissociative technique, please send it over to Harri Besceli, who once famously remarked that "falling asleep is the highlight of my day.")

More generally, I think cases of this sort have rough counterparts for negative experience, e.g. the act of scratching an itch, or of playing with a loose tooth, despite the concomitant pain induced by those activities. I think such cases are sufficiently marginal, and susceptible to alternative explanations, that they do not pose a serious problem to either (1) or (2).

I'd say that the reason I (as a CU) don't try to stay awake is that I can't dissociate the pleasantness of falling asleep from actually falling asleep

That makes sense. But do you think that the impulse to prolong the pleasant feeling (as opposed to just enjoying it and "laying back in the cockpit") is a component of the pleasure-feeling itself? To me, they seem distinct! I readily admit that we often want to do things to prolong pleasures or go out of our way to seek particularly rewarding pleasures. But I don't regard that as a pure feature of what pleasure feels like. Rather, it's the result of an interaction between what pleasure feels like and a bunch of other things that come in degrees, and can be on or off.

Let's say I found a technique to prolong the pleasure. Assuming it does take a small bit of effort to use it, it seems that whether I'm in fact going to use it depends on features such as which options I make salient to myself, whether I might develop fear of missing out, whether pleasure pursuit is part of my self-concept, the degree to which I might have cravings or the degree to which I have personality traits related to constantly optimizing things about my personal life, etc.

And it's not only "whether I'm in fact going to use the technique" that depends on those additional aspects of the situation. I'd argue that even "whether I feel like wanting to use the technique" depends on those additional, contingent factors!

If the additional factors are just right, I can simply lose myself in the positive feeling, "laying back in the cockpit." That's why the experience is a positive one, why it lets me lay back. Losing myself in the pleasant sensation means I'm not worrying about the future and whether the feeling will continue. If pleasure was intrinsically about wanting a sensation to continue, it would kind of suck because I'd have to start doing things to make that happen.

My brain doesn't like to have to do things.

(This could be a fundamental feature of personality where there are large interpersonal differences. I have heard that some people always feel a bit restless and as though they need to do stuff to accomplish something or make stuff better. I don't have that, my "settings" are different. This would explain why many people seem to have troubles understanding the intuitive appeal tranquilism has for some people.)

Anyway, the main point is that "laying back in the cockpit" is something one cannot do when suffering. (Or it's what experienced meditators can maybe do – and then it's not suffering anymore.) And the perspective where laying back in the cockpit is actually appealing for myself as a sentient being, rather than some kind of "failure of not being agenty enough," is what fuels my stance that suffering and happiness are very, very different from one another. The hedonist view that "more happiness is always better" means that, in order to be a good egoist, one needs to constantly be in the cockpit to maximize one's long-term pleasure. That's way too demanding for a theory that's supposed to help me do what is best for me.

Insofar as someone's hedonism is justified solely via introspection about the nature of conscious experience, I believe that it's getting something wrong. I'd say that hedonists of this specific type reify intuitions they have about pleasure (specifically, an interrelated cluster of intuitions about more pleasure always being better, that pleasure is better than non-consciousness, that pleasure involves wanting the experience to continue, etc.) as intrinsic components to pleasure. They treat their intuitions as the way things are while shrugging off the "contentment can be perfect" perspective as biased by idiosyncratic intuitions. However, both intuitions are secondary evaluative judgments we ascribe to these positive feelings. Different underlying stances produce different interpretations.

(And I feel like there's a sense in which the tranquilism perspective is simpler and more elegant. But at this point I'd already be happy if more people started to grant that hedonism is making just as much of a judgment call based on a different foundational intuition.)

Finally, I don't think all of ethics should be about the value of different experiences. When I think about "Lukas, the sentient being," then I care primarily about the "laying back in the cockpit" perspective. When I think about "Lukas, the person," then I care about my life goals. The perspectives cannot be summed into one thing because they are in conflict (except if one's life goals aren't perfectly selfish). If people have personal hedonism as one of their life goals, I care about them experiencing posthuman bliss out of my regard for the person's life goals, but not out of regard of this being the optimal altruistic action regardless of their life goals.

Anecdatally, I've taken medication for insomnia before and ended up trying to stay awake for longer because I was enjoying the sensation of sleepiness. Unfortunately fighting to stay awake was kind of unpleasant, and negated the enjoyment.

>>> I suppose you can point out an analogous-seeming case with depressed people who lack the willpower to improve anything about their low mood.

This reminds me of the 'Penfield mood organ' in Philip K. Dick's 'Do Androids Dream of Electric Sheep?'

>>> From the bedroom Iran's voice came. "I can't stand TV before breakfast." "Dial 888," Rick said as the set warmed. "The desire to watch TV, no matter what's on it." "I don't feel like dialing anything at all now," Iran said. "Then dial 3," he said. "I can't dial a setting that stimulates my cerebral cortex into wanting to dial! If I don't want to dial, I don't want to dial that most of all, because then I will want to dial, and wanting to dial is right now the most alien drive I can imagine; I just want to sit here on the bed and stare at the floor."


(though this description is of someone who stays in a bad mood because they don't have a desire to change it rather than lacking the willpower to)


A question here is whether "interests to not suffer" are analogous to "interests in experiencing joy." I believe that Michael's point is that, while we cannot imagine suffering without some kind of interest to have it stop (at least in the moment itself), we can imagine a mind that does not care for further joy.

I don't think that's the relevant analogy though. We should be comparing "Can we imagine suffering without an interest in not having suffered?" to "Can we imagine joy without an interest in having experienced joy?"

Let's say I see a cute squirrel and it makes me happy. Is it bad that I'm not in virtual reality experiencing the greatest joys imagineable?

I can imagine saying "no" here, but if I do then I'd also say it's not good that you are not in a virtual reality experiencing great suffering. If you were in a virtual reality experiencing great joy it would be against your interests to prevent that joy, and if you were in a virtual reality experiencing great suffering it would be in your interests to prevent that suffering.

You could say: the actually existing person has an interest in preventing future suffering, while they may have no interest in experiencing future joy. But now the asymmetry is just coming from the actual person's current interests in joy and suffering, so we didn't need to bring in all of this other machinery, we can just directly appeal to the claimed asymmetry in interests.

I think it is generally worth seeing population ethics scenarios (like the repugnant conclusion) as being intuition pumps of some principle or another. The core engine of the repugnant conclusion is (roughly) the counter-intuitive implications of how a lot of small things can outweigh a large thing. Thus a huge multitude of 'slightly better than not' lives can outweigh a few very blissful ones (or, turning the screws as Arrhenius does, for any number of blissful lives, there is some - vastly larger - number of 'slightly better than not' lives for which it would be worth making those blissful lives terrible.)

Yet denying lives can ever go better than neutral (counter-intuitive to most - my life isn't maximally good, but I think it is pretty great and better than nothing) may evade the repugnant conclusion, but doesn't avoid this core engine of 'lots of small things can outweigh a big thing'. Among a given (pre-existing, so possessing actual interests, not that this matters much) population, it can be worth torturing a few of these to avert sufficiently many pin-pricks/minor thwarted preferences to the rest.

I also think negative leaning views (especially with stronger 'you can't do better than nothing' ones as suggested here) generally fare worse with population ethics paradoxes, as we can construct examples which not just share the core engine driving things like the repugnant conclusion, but are amplified further by adding counter-intuitive aspects of the negative view in question.

E.g. (and owed to Carl Shulman): suppose A is a vast population (say Tree(9), whatever) of people who are much happier than we are now, and live lives of almost-perfect preference satisfaction, but for a single mild thwarted preference (say they have to wait in a queue bored for an hour before they get into heaven). Now suppose B is a vast (but vastly smaller, say merely 10^100) population living profoundly awful lives. The view outlined in the OP above seems to recommend B over A (as a lot of small thwarted preferences among those in A can trade off against the awful lives in B), and generally that any number of horrendous lives can be outweighed if you can abolish a slightly imperfect utopia of sufficient size, which seems to go (wildly!) wrong both in the determination and the direction (as A gets larger and larger, B becomes a better and better alternative).

The core engine of the repugnant conclusion is (roughly) the counter-intuitive implications of how a lot of small things can outweigh a large thing.

I disagree that this is the core engine. I know lots of people who find the repugnant conclusion untenable, while they readily bite the bullet in "dust specks vs. torture".

I think the part that's the most unacceptable about the repugnant conclusion is that you go from an initial paradise where all the people who exist are perfectly satisfied (in terms of both life goals and hedonics) to a state where there's suffering and preference dissatisfaction. A lot of people have the intuition that creating new happy people is not in itself important. That's what the repugnant conclusion runs against.

I think the part that's the most unacceptable about the repugnant conclusion is that you go from an initial paradise where all the people who exist are perfectly satisfied (in terms of both life goals and hedonics) to a state where there's suffering and preference dissatisfaction.

I hesitate to exegete intuitions, but I'm not convinced this is the story for most. Parfit's initial statement of the RP didn't stipulate the initial population were 'perfectly satisfied' but that they 'merely' had a "very high quality of life" (cf.). Moreover, I don't think most people find the RP much less unacceptable if the initial population merely enjoys very high quality of life versus perfect satisfaction.

I agree there's some sort of intuition that 'very good' should be qualitatively better than 'barely better than nothing', so one wants to resist being nickel-and-dimed into the latter (cf. critical level util, etc.). I also agree there's person-affecting intuitions (although there's natural moves like making the addition of A+ also increase the welfare of those originally in A, etc.)

Okay, I agree that going "from perfect to flawed" isn't the core of the intuition.

Moreover, I don't think most people find the RP much less unacceptable if the initial population merely enjoys very high quality of life versus perfect satisfaction.

This seems correct to me too.

I mostly wanted to point out that I'm pretty sure that it's a strawman that the repugnant conclusion primarily targets anti-aggregationist intuitions. I suspect that people would also find the conclusion strange if it involved smaller numbers. When a family decides how many kids they have and they estimate that the average quality of life per person in the family (esp. with a lot of weights on the parents themselves) will be highest if they have two children, most people would find it strange to go for five children if that did best in terms of total welfare.

For what it's worth, that example is a special case of the Sadistic Conclusion (perhaps the Very Sadistic Conclusion?), which I do mention towards the end of the section "Other theoretical implications". Given the impossibility theorems, like the one I cite there, claiming negative leaning views generally fare worse with population ethics paradoxes is a judgment call. I have the opposite judgment.

There's a more repugnant version of the Repugnant Conclusion called the Very Repugnant Conclusion, in which your population A would be worse than a population with just the very bad lives in B, plus a much larger number of lives barely worth living, but still worth living, because their total value can make up for the harms in B and the loss of the value in A. If we've rejected the claim that these lives barely worth living do make the outcome better (by accepting the asymmetry or the more general claims I make and from which it follows) or can compensate for the harm in these bad lives, then the judgment from the Very Repugnant Conclusion would look as bad.

Furthermore, if you're holding to the intuition that A doesn't get worse as more people are added, then you couldn't demonstrate the Sadistic Conclusion with your argument in the first place, so while the determination might clash with intuition (a valid response), it seems a bit question-begging to add that it goes wrong in "the direction (as A gets larger and larger, B becomes a better and better alternative)."

However, more importantly, this understanding of wellbeing conflicts with how we normally think about interests (or normative standards, according to Frick), as in Only Actual Interests and No Transfer (in my reply to Paul Christiano): if those lives never had any interest in pleasure and never experienced it, this would be no worse. Why should pleasure be treated so differently from other interests? So, the example would be the same as a large number of lives, each with a single mild thwarted preference (bad), and no other preferences (nothing to make up for the badness of the thwarted preference).

If you represent the value in lives as real numbers, you can reject either Independence/Separability (that what's better or worse should not depend on the existence and the wellbeing of individuals that are held equal) or Continuity to avoid this problem. How this works for Continuity is more obvious, but for Independence/Separability, see Aggregating Harms — Should We Kill to Avoid Headaches? by Erik Carlson and his example Moderate Trade-off Theory. Basically, you can maximize the following social welfare function, for some fixed k > 1, with the utilities sorted in increasing (nondecreasing) order, u_1 ≤ u_2 ≤ … ≤ u_n (and, with the views I outline here, all of these values would never be positive):

$$\sum_{i=1}^{n} \frac{u_i}{k^{i-1}}$$
Note that this doesn't actually avoid the Sadistic Conclusion if we do allow positive utilities, because adding positive utilities close to 0 can decrease the weight given to higher already existing positive utilities in such a way as to make the sum decrease. But it does avoid the version of the Sadistic Conclusion you give if we're considering adding a very large number of very positive lives vs a smaller number of negative (or very negative) lives to a population which has lives that are much better than the very positive ones we might add. If there is no population you're adding to, then a population of just negative lives is always worse than one with just positive lives.

(I'm not endorsing this function in particular.)
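
As a quick numerical sketch (assuming the rank-discounted form above, with made-up utilities), here's how adding positive utilities close to 0 can make the sum decrease:

```python
# Sketch of the rank-discounted sum above (not an endorsement of this particular
# function): utilities are sorted in nondecreasing order and the i-th (0-indexed)
# utility is weighted by 1/k^i, so worse-off positions get more weight.
def moderate_tradeoff_value(utilities, k=2.0):
    return sum(u / k**i for i, u in enumerate(sorted(utilities)))

existing = [10.0, 10.0]                 # two well-off lives
with_additions = existing + [0.1] * 5   # plus five lives barely worth living

print(moderate_tradeoff_value(existing))        # 15.0
print(moderate_tradeoff_value(with_additions))  # ~0.66
# The barely-positive additions occupy the highly weighted low ranks and push the
# well-off lives to heavily discounted ranks, so the total goes down even though
# only positive utilities were added.
```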

For what it's worth, that example is a special case of the Sadistic Conclusion

It isn't (at least not as Arrhenius defines it). Further, the view you are proposing (and which my example was addressed to) can never endorse a sadistic conclusion in any case. If lives can only range between more or less bad (i.e. fewer or more unsatisfied preferences, but the amount/proportion of satisfied preferences has no moral bearing), the theory is never in a position to recommend adding 'negative welfare' lives over 'positive welfare' ones, as it denies one can ever add 'positive welfare' lives.

Although we might commonsensically say people in A, or A+ in the repugnant conclusion (or 'A' in my example) have positive welfare, your view urges us that this is mistaken, and we should take them to be '-something relatively small' versus tormented lives which are '- a lot': it would still be better for those in any of the 'A cases' had they not come into existence at all.

Where we put the 'zero level' doesn't affect the engine of the repugnant conclusion I identify: if we can 'add up' lots of small positive increments (whether we are above or below the zero level), this can outweigh a smaller number of much larger negative shifts. In the (very/) repugnant conclusion, a vast multitude of 'slightly better than nothing' lives can outweigh very large negative shifts to a smaller population (either to slightly better than nothing, or, in the very repugnant case, to something much worse). In mine, avoiding a vast multitude of 'slightly worse than nothing' lives can be worth making a smaller group have 'much worse than nothing' lives.

As you say, you can drop separability, continuity (etc.) to avoid the conclusion of my example, but these are resources available for (say) a classical utilitarian to adopt to avoid the (very/) repugnant conclusion too (naturally, these options also bear substantial costs). In other words, I'm claiming that although this axiology avoids the (v/) repugnant conclusion, if it accepts continuity etc. it makes similarly counter-intuitive recommendations, and if it rejects them it faces parallel challenges to a theory which accepts positive utility lives which does the same.

Why I say it fares 'even worse' is that most intuit 'an hour of boredom and (say) a millennium of a wonderfully happy life' is much better, and not slightly worse, than nothing at all. Thus, although it seems costly (for reasons parallel to the repugnant conclusion) to accept that any number of tormented lives could be preferable to some vastly larger number of lives that (e.g.) pop into existence to briefly experience mild discomfort/preference dissatisfaction before ceasing to exist again, it seems even worse for the theory to be indifferent when each of these lives is instead a long one which, apart from this moment of brief preference dissatisfaction, experiences unalloyed joy/preference fulfilment, etc.

Ok.

Why I say it fares 'even worse' is that most intuit 'an hour of boredom and (say) a millennium of a wonderfully happy life' is much better, and not slightly worse, than nothing at all.

Most also intuit that the (Very) Repugnant Conclusion is wrong, and probably that people are not mere vessels or receptacles for value (which classical utilitarians don't avoid by giving up continuity or independence/separability). Why is the objection you raise stronger? There are various objections to all theories of population ethics; claiming some are worse than others is a personal judgment call, and you seem to be denying, without argument, the possibility that many will find the objections to other views even more compelling.

I claim we can do better than simply noting 'all theories have intuitive costs, so which poison you pick is a judgement call'. In particular, I'm claiming that the 'only thwarted preferences count' view poses extra intuitive costs: for any intuitive population ethics counter-example C we can confront a 'symmetric' theory with, we can dissect the underlying engine that drives the intuitive cost, find that it is orthogonal to the 'only thwarted preferences count' disagreement, and thus construct a parallel C* for the 'only thwarted preferences count' view which uses the same engine and is similarly counterintuitive, and often a C** which is even more counter-intuitive, as it turns the screws to exploit the facial counter-intuitiveness of the 'only thwarted preferences count' view. I.e.:

Alice: Only counting thwarted preferences looks counter-intuitive (e.g. we generally take very happy lives to be 'better than nothing', etc.), so classical utilitarianism looks better.

Bob: Fair enough, these things look counter-intuitive, but all theories have counter-intuitive implications. Classical utilitarianism leads to the very repugnant conclusion (C) in population ethics, after all, whilst mine does not.

Alice: Not so fast. Your view avoids the very repugnant conclusion, but if you share the same commitments re. continuity etc., these lead your view to imply the similarly repugnant conclusion (motivated by factors shared between our views) that any number n of tormented lives is preferable to some much larger number m of lives which each suffer some mild dissatisfaction (C*).

Furthermore, your view is indifferent to how (commonsensically) happy the m people are, so (for example) 10^100 tormented lives are better than TREE(9) lives which are perfectly blissful but for a 1 in TREE(3) chance [to emphasise, this chance is much smaller than P(0.0 ...[write a zero on every Planck length in the observable universe]...1)] of suffering an hour of boredom once in their life. (C**)

Bob can adapt his account to avoid this conclusion (e.g. dropping continuity), but Alice can adapt her account in a parallel fashion to avoid the very repugnant conclusion too. Similarly, 'value receptacle'-style critiques seem a red herring, as even if they are decisive for preference views over hedonic ones in general, they do not rule between 'only thwarted preferences count' and 'satisfied preferences count too' in particular.

I don't think the cases confronting asymmetric and symmetric views will necessarily turn out to be so ... symmetric (:P), since, to start, they each have different requirements to satisfy to earn the names asymmetric and symmetric, and how bad a conclusion looks can depend on whether we're dealing with negative utilities, positive utilities or both. To be called symmetric, a view should still satisfy Mere Addition, right?

Dropping continuity looks bad for everyone, in my view, so I won't argue further on that one.

However, what are the most plausible symmetric theories which avoid the Very Repugnant Conclusion and are still continuous? To be symmetric, they should still accept Mere Addition, right? Arrhenius has an impossibility theorem for the VRC. It seems to me the only plausible option is to give up General Non-Extreme Priority. Does such a symmetric theory exist, without also violating Non-Elitism (like Sider's Geometrism does)?

EDIT: I think I've thought of such a social welfare function. Apply Geometrism or Moderate Trade-off Theory to the negative utilities (or whatever an asymmetric view might have done to prioritize the worst off), and then add a term given by a function $f$ of the rest, where $f$ is continuous, strictly increasing and bounded above.
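As a sketch of the kind of function I have in mind (the specific pieces here, the geometric weights and the particular choice of $f$, are just hypothetical examples): index the negative utilities from worst to least bad and take, for some $0 < k < 1$,

$$W(u) = \sum_{i:\, u_i < 0} k^{\,i-1} u_i \;+\; f\!\Big(\sum_{i:\, u_i \ge 0} u_i\Big), \qquad \text{e.g. } f(x) = \frac{x}{1+x}.$$

Mere Addition still holds, since $f$ is increasing and adding a nonnegative life only increases its argument. Because $f$ is bounded above, added positive welfare can contribute at most a fixed amount, while even a single sufficiently tormented life contributes arbitrarily large disvalue, so the Very Repugnant Conclusion is blocked; whether this satisfies the other conditions above is what would need checking.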

Similarly, 'value receptacle'-style critiques seem a red herring, as even if they are decisive for preference views over hedonic ones in general, they do not rule between 'only thwarted preferences count' and 'satisfied preferences count too' in particular.

Why would value receptacle objections have more force in the preferences vs. hedonism debate than in the 'only thwarted preferences count' vs. 'satisfied preferences count too' debate?

If it's sometimes better to create new individuals than to help existing ones, then we are, at least in part, reduced to receptacles, because creating value by creating individuals instead of helping individuals puts value before individuals. It should matter that you have your preferences satisfied because you matter, but if we are value receptacles, it seems we're just saying that it matters that there are more satisfied preferences. You might object that I'm just saying that it matters that there are fewer unsatisfied preferences, but this is a consequence, not where I'm starting from; I start by rejecting the treatment of interest holders as value receptacles, through Only Actual Interests (and No Transfer).

Is it good to give someone a new preference just so that it can be satisfied, even at the cost of the preferences they would have had otherwise? How is convincing someone to really want a hotdog and then giving them one doing them a service if they had no desire for one in the first place (and it would satisfy no other interests of theirs)? Is it better for them even in the case where they don't sacrifice other interests? Rather than doing what people want or we think they would want anyway, we would make them want things and do those for them instead. If preference satisfaction always counts in itself, then we're paternalists. If it doesn't always count but sometimes does, then we should look for other reasons, which is exactly what Only Actual Interests claims.

Of course, there's the symmetric question: does preference thwarting (to whatever degree) always count against the existence of those preferences, and if it doesn't, should we look for other reasons, too? I don't find either answer implausible. For example, is a child worse off for having big but unrealistic dreams? I don't think so, necessarily, but we might be able to explain this by referring to their other interests: dreaming big promotes optimism and wellbeing and prevents boredom, preventing the thwarting of more important interests. When we imagine the child dreaming vs not dreaming, we have not made all else equal. Could the same be true of not quite fully satisfied interests? I don't rule out the possibility that the existence and satisfaction of some interests can promote the satisfaction of other interests. But if they don't get anything else out of their unsatisfied preferences, it's not implausible that having those preferences really is worse, as a rule, provided we have reasonable explanations for the cases where it isn't.

My very tentative view is that we're sufficiently clueless about the probability distribution of possible outcomes from "Risks posed by artificial intelligence" and other x-risks that the ratio between [the value one places on creating a happy person] and [the value one places on helping a person who is created without intervention] should have little influence on the prioritization of avoiding existential catastrophes.

I would guess that extinction would have more permanent and farther-reaching effects than the other outcomes in existential catastrophes, especially if the population were expected to grow otherwise, so with a symmetric view, extinction could look much worse than the rest of the distribution (comparing the distribution conditional on extinction with the distribution conditional on an existential catastrophe that doesn't cause extinction).
