Suffering-Focused Ethics (SFE) FAQ

by EdisonY · 36 min read · 16th Oct 2021 · 23 comments

Tags: Negative utilitarianism · Suffering-focused ethics · S-risk

This FAQ is meant to introduce suffering-focused ethics to an EA-aligned audience. Section 2 is the most important part of the FAQ, and will help the reader get a good grasp of the conceptual landscape of SFE. The FAQ is designed to allow the reader to skip around and head directly to questions that they are personally interested in, so please don’t feel the need to read the entire FAQ sequentially.

1. Common Misconceptions

1.1 Is SFE the same as negative utilitarianism (NU)?

No. SFE is a broader category of ethical views that includes NU; see 2.1 for details.

1.2 Does SFE assume that there is more suffering than happiness in most people’s lives?

No, SFE’s core claim is that reducing suffering is more morally important than increasing happiness. This normative claim does not hinge on the empirical quantity of suffering and happiness in most people’s lives. 

1.3 Are people who hold suffering-focused views much more concerned about mild suffering than, say, the average ethically concerned person?

This varies between versions of SFE. Most versions of SFE do not attach extreme moral importance to mild suffering. Threshold lexical SFE (see 2.8), for example, is compatible with the claim that mild suffering is morally symmetrical to mild happiness.

1.4 Isn’t SFE a fringe moral theory? 

People often hold that SFE is a fringe moral theory because they conflate SFE with negative utilitarianism (see 1.1). However, many more moderate views in the SFE family are arguably very widely held. See what we call the common sense SFE view here. It is also notable that even negative utilitarianism has had more supporters than it’s often given credit for. For more on philosophers and philosophical traditions that have supported SFE, see here.

1.5 Is prioritarianism a version of SFE?

No. Suppose no one in the world suffers, but some people are happier than others. Prioritarianism implies that we ought to prioritize those who are less happy, while SFE does not entail this implication. See 4.10 for more on the relationship between prioritarianism and SFE.

2. What views fall under Suffering-Focused Ethics?

2.1 What is Suffering-focused Ethics (SFE)?

SFE is an umbrella term for ethical views that attach primary or special moral importance to the prevention of suffering. Although a large variety of possible ethical views fall under the category of SFE, they all share at least one common claim, which we may call the happiness/suffering asymmetry thesis:

The happiness/suffering asymmetry thesis: Suffering has higher ethical priority than happiness.

Different SFE theories will have different views on the extent and nature of the asymmetry (see 2.6–2.11 of this FAQ for details).

2.2 What is suffering?

Like happiness, suffering is ambiguous between two senses. Understood in the descriptive sense, suffering refers to mental states that are typically associated with pain, the feeling of dissatisfaction, etc. Since this sense of suffering is purely descriptive, there is a further value-oriented question as to whether suffering is morally bad. Alternatively, we can understand suffering in the normative sense. In this latter sense, suffering is a type of disvalue. Suffering in this sense can be understood as the most general type of prudential disvalue (that which is bad for the subject). 

SFE theorists adopt the descriptive sense of ‘suffering’. Some SFE theorists claim that a given amount of suffering is worse than an equal amount of happiness. This claim is coherent only if one employs the descriptive sense of suffering. 

2.3 What account of suffering should SFE subscribe to?

Suffering can be characterized as ‘an overall bad feeling or state of consciousness'. However, this needs to be made more precise. Accounts of suffering can be divided into two camps. The first type of account views suffering as a non-reducible intrinsic quality of mental states.  We may call this type of view non-reductionism about suffering. This view arguably captures our ordinary understanding of ‘suffering’ quite well. Mayerfeld argues that this notion of suffering allows us to make sense of sentences like the following: ‘I would rather give up four hours of happiness of intensity x than endure one hour of suffering of the same intensity x.’

The second type of account, championed by Parfit, can be called reductionism about suffering. According to Parfit, suffering can be reduced to experiences (or states of consciousness) that are unwanted (that is, the subject wishes for her experience to end or change). Parfit presents the following case in support of this view: 

  • After taking certain kinds of drug, people claim that the quality of their sensations has not altered, but they no longer disprefer these sensations. We would regard such drugs as effective analgesics. 

Reductionism captures the intuition that, after taking these drugs, the sensations no longer constitute suffering. Further, the reductionist account of suffering is arguably more morally relevant, since it implies that the drug would improve the subject’s well-being, which matches the judgment that such drugs are effective analgesics. 

2.4 What theories of well-being are closely aligned with SFE?

A theory of well-being should tell us what makes someone’s life go best. One theory of well-being that is closely associated with SFE is tranquilism, according to which tranquility, or the absence of suffering, is the best possible mental state (non-instrumentally speaking). This view is closely associated with Buddhism as well as Epicurus. In contrast with standard hedonism, this view holds that a mental state free of suffering cannot be improved any further. Tranquilists hold that pleasure has no intrinsic value, but they do recognize that it can have important instrumental value, as pleasure can take our minds off suffering, allowing us to temporarily reach tranquility. 

Another theory of well-being that is closely associated with SFE is antifrustrationism, defended by Christoph Fehige. This view states that ‘we don't do any good by creating satisfied extra preferences. What matters about preferences is not that they have a satisfied existence, but that they don't have a frustrated existence.’ 

Notably, adopting either antifrustrationism or tranquilism allows us to avoid the repugnant conclusion, which is seen as ‘one of the cardinal challenges of modern ethics’. For more on this, please see Fehige (1998).

Although these two theories of well-being are often associated with SFE, one does not need to hold either view to accept SFE, as will become obvious in later sections.

2.5 How does antifrustrationism relate to preference-satisfaction theory?

Unlike antifrustrationism, preference satisfaction theory implies that we have moral reasons for creating new preferences and satisfying them. Due to this difference, antifrustrationism is arguably superior to preference satisfaction theory in at least one respect, as it avoids the latter’s counterintuitive implications in the drug addiction thought experiment:

  • Imagine being offered a drug, which gives us an incredibly strong desire to have another injection of the same drug each morning. Since we are offered ample supplies, we will always be able to fulfill our desire each morning. However, the drug has no effect other than causing the desire, so it will be neither pleasant nor painful, nor will the drug impact our lives in any other way. 

Standard preference satisfaction theory implies not only that we should take up the offer, it also states that our lives would have been greatly improved after taking it. This seems implausible. In contrast, antifrustrationism gives the correct verdict that taking the drug does not improve our lives.

2.6 What distinctions are there between SFE views?

As stated in 2.1, SFE includes a very diverse cast of ethical positions. Different SFE views disagree on a number of key dimensions, which are often orthogonal to each other. Please find explanations for the major distinctions between SFE views from section 2.7 to section 2.11. 

2.7 What is the difference between principled and practical SFE views?

An SFE theorist can either hold that suffering has special priority as a matter of fundamental moral principle or as a secondary moral rule (e.g. as a decision procedure for indirect utilitarians). Principled SFE views hold the former view, and have the asymmetry between happiness and suffering embedded either in its axiology or fundamental principles. A well-known example of principled SFE is negative utilitarianism.

Practical SFE views, on the other hand, are compatible with a vast range of ethical theories. To adopt a practical SFE view, one just needs to believe that suffering has a particularly high practical priority. For example, a classical utilitarian might hold this view if they believe that, empirically speaking, it is generally much easier to reduce large amounts of suffering than to produce large amounts of happiness (see Vinding, 2020, sections 1.2 and 1.3). One may also hold practical SFE views for decision-theoretic reasons: one may be more confident in the claim that suffering is bad than in the claim that happiness is good (see Vinding, 2020, section 1.5). Notably, a number of other popular ethical theories likely commit one to practical versions of SFE. Prioritarians and sufficientarians hold that it is particularly important to help those who are the least well-off (however, see 4.9 for an objection against SFE based on prioritarian intuitions). If the best way to help the least well-off is to reduce or prevent their suffering, as is likely the case, then prioritarians and sufficientarians are committed to practical versions of SFE. 

It is plausibly the case that most ethically-minded people already accept practical SFE, since they likely hold that reducing suffering is more morally urgent than increasing happiness, at least in the current world (consider 80,000 Hours’ list of priorities: most are at least indirectly aimed at preventing suffering, and few aim at increasing happiness).

2.8 What is the difference between lexical and weak SFE views?

The distinction between lexical and weak SFE views is a critical one. Lexical versions of SFE hold that suffering has lexical priority over happiness. Importantly, this means that lexical versions of SFE are compatible with the claim that happiness has some moral value. 

Lexical SFE can be further divided into several positions. Strong lexical SFE holds that no amount of happiness can outweigh any amount of suffering. Threshold lexical SFE holds that there is some magnitude/kind of suffering that no amount of happiness can outweigh. The threshold may, for example, be determined by what a person would ideally consent to (see chapter 4 of Vinding’s book Suffering-Focused Ethics). Some have also suggested that there is some magnitude/kind of extreme suffering that no amount of mild suffering can outweigh. As we will see in sections 4.4 to 4.9, lexical versions of SFE avoid most of the common objections against SFE. 

A common misconception is that lexical SFE views attach infinite disvalue to (some magnitude/kind of) suffering. This is not the case. For example, one can hold lexical SFE by believing that values other than suffering have diminishing marginal value, such that their moral value asymptotes to a finite bound, while suffering does not have diminishing marginal disvalue (or its disvalue diminishes more slowly). As we will see in 4.2 and 4.3, this version of lexical SFE avoids some of the most common objections against SFE. 
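To make the asymptotic idea concrete, here is a toy numerical sketch. This is my own illustration, not something from the SFE literature: the particular functions `value_of_happiness` and `disvalue_of_suffering`, and the bound `V_MAX`, are assumptions chosen purely for exposition.

```python
import math

# Toy model (illustrative assumptions only): happiness has diminishing
# marginal moral value that asymptotes to a finite bound V_MAX, while
# suffering's disvalue grows linearly without bound.

V_MAX = 100.0  # assumed finite cap on the total moral value of happiness

def value_of_happiness(h: float) -> float:
    """Bounded: approaches V_MAX as h grows, but never reaches it."""
    return V_MAX * (1 - math.exp(-h / 50.0))

def disvalue_of_suffering(s: float) -> float:
    """Unbounded: linear in the magnitude of suffering."""
    return s

# Any suffering with disvalue above V_MAX is lexically prior in effect:
# no amount of happiness, however large, can outweigh it.
for h in (10.0, 1e3, 1e6, 1e12):
    assert value_of_happiness(h) < disvalue_of_suffering(150.0)
```

Nothing hinges on the exact exponential: any bounded value function for happiness paired with an unbounded disvalue function for suffering produces this lexical structure without assigning infinite disvalue to any finite amount of suffering.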

On the other hand, weak SFE views hold that ‘there is an exchange rate between suffering and happiness or perhaps some nonlinear function which shows how much happiness would be required to outweigh any given amount of suffering’ (Ord, 2013).

2.9 What is the difference between monist and pluralist SFE views?

Monist versions of SFE hold that only suffering holds non-instrumental moral value. If you accept either antifrustrationism or tranquilism, and you hold welfarism, you are committed to monist SFE. Monist SFE trivially entails lexical and principled SFE. However, not all principled or lexical SFE views entail monism. For example, one can hold that both suffering and happiness have non-instrumental moral value, but that suffering has lexical priority over happiness (see 2.8). 

Pluralist versions of SFE, on the other hand, hold that things other than suffering (such as happiness) also have non-instrumental moral value. Pluralist versions of SFE may be regarded as less parsimonious, but they avoid many common objections against SFE (see section 4 of the FAQ).

2.10 What is the difference between consequentialist and deontic SFE views?

According to consequentialist versions of SFE, suffering has special moral significance because a high disvalue is attached to states of affairs that contain (certain types of) suffering. Since states of affairs that involve suffering are very bad, and we ought to maximize the value of the consequences of our actions, we should prioritize preventing suffering. Negative utilitarianism is a famous example of a consequentialist SFE view. 

Deontic versions of SFE, on the other hand, hold that there are asymmetries between suffering and happiness in our duties as agents. For example, one may hold that we are obligated to prevent suffering but not obligated to increase happiness. Deontic SFE can also be more specific about the asymmetry. For example, it can hold that there is an obligation to prevent suffering in other people, but not in oneself. A closely related deontic asymmetry is what is known as the procreation asymmetry, see 3.2 for details.

Unlike consequentialist SFE, deontic versions of SFE need not hold an axiology (or theory of well-being) that privileges suffering. To count as deontic SFE, a view just needs to hold, as a criterion of rightness, that we ought to adhere to certain moral rules which give special importance to suffering. Deontic SFE in the present sense need not be grounded in any specific deontological moral theory. In fact, negative utilitarianism implies deontic SFE, since the former holds that we are obligated to prevent suffering but not obligated to increase happiness. With that said, most deontic forms of SFE do not require maximization of value and are, thus, less demanding. 

2.11 What is the difference between interpersonal and intrapersonal versions of SFE?

Interpersonal versions of SFE claim that the moral asymmetry between suffering and happiness holds across different persons. Consider an action that creates some amount of happiness in person A but an equivalent amount of suffering in person B. Interpersonal versions of SFE would hold that such actions are wrong. Versions of SFE that combine lexicality and interpersonality will hold that creating a certain amount of suffering in someone is wrong no matter how much happiness is created in another person. 

Intrapersonal versions of SFE claim that there is still a moral asymmetry even if the suffering and happiness occur within the same person. Lexical versions of this imply that it is wrong for me to increase my happiness at the cost of my suffering.

Note that an SFE view can be both interpersonal and intrapersonal. 

2.12 Can you give some specific examples of SFE views, which combine these dimensions together?

One specific version of SFE that has already been mentioned is negative utilitarianism (NU). NU combines consequentialism and monism. It is also lexical, principled, and permits both intrapersonal and interpersonal tradeoffs. Notably, NU is implied if one combines either antifrustrationism or tranquilism with standard utilitarianism. NU likely has to bite the bullet on a number of common objections, but proponents of NU have argued that these implications are not as unintuitive as they initially appear. For more on NU, a helpful resource is the NU FAQ.

Another important type of SFE is practical, deontic, pluralist, and interpersonal. This results in arguably the least demanding form of SFE, in the sense that it is compatible with a wide range of ethical views. Call this type of view common sense SFE. Since it’s practical, one need not commit to suffering-focused axiologies or theories of well-being. Since it’s deontic, it doesn’t commit us to maximize consequences, and thus avoids radical implications (see 4.4 to 4.7). The pluralism component avoids the claim that nothing other than the prevention of suffering has moral value. Lastly, since it only prohibits certain forms of interpersonal trade-offs, it avoids radical implications in how one should live one’s own life. A rule consequentialist may adopt this type of SFE if she believes that adopting certain suffering-focused rules maximizes value. This version of SFE avoids most of the common objections against SFE. 

Lastly, I wish to highlight a form of SFE that is pluralist, principled, consequentialist, and threshold lexical. Call this standard lexical SFE. Such an SFE view adopts an axiology that contains multiple values, but holds that suffering has lexical priority over at least some other values. For example, it may hold that most (dis)values other than suffering have asymptotic moral returns (see 2.8). In practice, it may share virtually all the prescriptions from standard versions of consequentialism, except when potential outcomes contain extreme suffering. This version of SFE also avoids most of the common objections against SFE. When its implications differ from the implications of standard consequentialism, it seems to do better at tracking our intuitions (see 3.6). 
 

3. Arguments for SFE

3.1 Why should I believe in SFE?

Reasons for adopting SFE can be divided into three categories. First, we argue that SFE captures a number of common moral intuitions (see 3.2 and 3.4). Second, SFE arguably avoids a number of pressing objections against alternative moral theories like total utilitarianism (see 3.5 and 3.6). Lastly, if you are a classical utilitarian and accept certain empirical claims, then you are already committed to practical SFE (see 3.3).

3.2 Why should we think that there is a moral asymmetry between happiness and suffering?

It is intuitively clear to most of us that intense suffering has high intrinsic disvalue, but most of us are a lot less confident that pleasure, however intense, can have comparable intrinsic value. In Principia Ethica, G. E. Moore writes: 

[T]he consciousness of intense pain … can be maintained to be a great evil. … The case of pain thus seems to differ from that of pleasure: for the mere consciousness of pleasure, however intense, does not, by itself, appear to be a great good, even if it has some slight intrinsic value. In short, pain (if we understand by this expression, the consciousness of pain) appears to be a far worse evil than pleasure is a good.

The moral asymmetry is most intuitively compelling when it is interpersonal. Most of us judge that it is wrong to make a person suffer even if it would make another person happy, or trade the intense suffering of a single person for the mild enjoyment of a large crowd, however large the crowd is. 

Furthermore, these thought experiments would be much less compelling had they been reversed. It does not seem obviously wrong to reduce a person’s happiness to prevent someone’s suffering. Neither does it seem wrong to prevent intense pleasure for a single person in order to stop a large number of people’s mild suffering. This suggests that the intuitive force behind these thought experiments is driven by an asymmetry between suffering and happiness, rather than a moral prohibition against instrumentalization. 

3.3 Is reducing suffering easier in practice than increasing happiness?

As an empirical claim, it is plausible that reducing suffering is generally easier to achieve than increasing happiness. For example, GiveWell estimates that it costs around $1 to deworm a child. A comparable increase in happiness would likely have a much higher marginal cost. In general, we also have a much better idea of how to alleviate suffering than of how to cultivate happiness. As Ord acknowledges, this presents a strong reason for consequentialists to accept practical versions of SFE. For more on this, see section 1.3 of Vinding (2020).

3.4 Why should I believe that suffering has lexical priority?

Ask yourself this: would you endure the worst possible form of torture for one week in exchange for forty years of bliss? Many people say that no amount of blissful years can compensate for extreme forms of suffering. 

You might not trust your own judgment on this question, since most of us have not experienced extreme forms of suffering ourselves. In the absence of direct experience with extreme suffering, our most reliable source of information about its badness is arguably the testimonies of those who have directly experienced it. Many victims of extreme suffering report that no amount of happiness can compensate for what they went through. For more detailed arguments for suffering having lexical priority, see Chapter 4 of Vinding (2020).

Moreover, lexicality is a good way for consequentialists to incorporate deontological intuitions. For example, by adopting the view that extreme suffering has lexical priority over happiness, consequentialists can easily explain why it is not morally permissible to torture an individual for the entertainment of a large crowd.

3.5 What is the procreation asymmetry? How does it support SFE?

The procreation asymmetry is a widely shared intuition in population ethics. Jeff McMahan distinguishes between two versions of procreation asymmetry. Strong procreation asymmetry states that 1) there is strong moral reason to refrain from creating lives not worth living; however, 2) there is no moral reason to create lives worth living. McMahan himself endorses weak procreation asymmetry, according to which our reasons to create lives worth living are weaker than our reasons to refrain from creating lives not worth living. 

A life not worth living is standardly understood as a life that contains more suffering than happiness. Standard utilitarian views have a hard time accommodating either version of the procreation asymmetry. In contrast, it is easy for most versions of SFE to explain the procreation asymmetry. 

Monist versions of SFE such as negative utilitarianism explain strong procreation asymmetry by denying that we have moral reasons to pursue anything other than the prevention of suffering. Strong procreation asymmetry is also explained by deontic versions of SFE which state that we have an obligation to refrain from creating suffering but no obligation to create happiness (if you are worried that these versions of SFE also imply anti-natalism, please head to 4.6). On the other hand, the weak procreation asymmetry can be naturally explained by most pluralist versions of SFE, since they claim that suffering has higher moral disvalue than happiness has value. 

However, threshold lexical versions of SFE may have trouble accommodating the procreation asymmetry. Since such a view behaves like standard utilitarianism except when extreme suffering is involved, it cannot explain why our reasons not to create a life that is mildly bad are stronger than our reasons to create a life that is mildly good.

3.6 How does SFE avoid the repugnant conclusion?

The repugnant conclusion is a central problem in population ethics. Parfit shows that, given a set of plausible initial assumptions, we must conclude that ‘For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living’. As the name suggests, most people, including Parfit, find this conclusion unacceptable. 

One way to avoid the repugnant conclusion is to accept either antifrustrationism or tranquilism. These views reject the premise that there exists a contrast between very high quality lives and lives that are ‘barely worth living’. Even though average, critical-level, lexical and other non-standard versions of utilitarianism could avoid the repugnant conclusion, it is arguably the case that accepting either antifrustrationism or tranquilism as one’s theory of well-being is the only way to avoid the repugnant conclusion without rejecting other standard features of total utilitarianism. For more on this, please see Fehige (1998).

Another SFE-aligned strategy to avoid the repugnant conclusion is to say that the moral value of happiness diminishes asymptotically. For more on this strategy, see section 2.1.2 here. If one combines this with the plausible view that the moral disvalue of suffering does not diminish, or diminishes slower, then one is committed to lexical SFE. 


 

4. Objections and Responses

Since SFE is very diverse, it is often impossible to detail how each possible version of SFE will respond to each objection. Where appropriate, this FAQ will focus on objections against the three specific versions of SFE outlined in 2.12, and how each would respond to the objections.

4.1 Intrapersonal versions of SFE rule out trading suffering for happiness within a person’s life. Surely, this is absurd. After all, we seem to do this all the time.

Only strong lexical versions of intrapersonal SFE prohibit all trade-offs between suffering and happiness within the same person. However, it is true that all versions of intrapersonal SFE rule out at least some intrapersonal trade-offs between suffering and happiness, even when the magnitude of happiness involved is greater than that of the suffering. 

This may not be as counterintuitive as it first appears. Many have pointed out that cases where we seem to be trading suffering for happiness can be redescribed as cases where we are trading suffering for the prevention of more suffering. To use an example from Ord, people sometimes sprint for the bus to make it to the theatre on time. A person who holds intrapersonal SFE need not say that this is wrong, since missing the theatre show might cause more suffering than sprinting. 

More generally, all SFE views hold that positive experiences can have instrumental value. For more on why various conventional values are instrumentally important even for monist versions of SFE, see this article.

4.2 Don’t lexical versions of SFE imply abrupt discontinuities in value? 

An abrupt discontinuity exists when there is a pair of intuitively adjacent goods (or bads) such that no amount of one can outweigh the tiniest amount of the other. Such discontinuities are unintuitive since the goods (or bads) involved are intuitively adjacent in value ex hypothesi. Strong lexical SFE (as defined in 2.8) does claim that there is a discontinuity between happiness and suffering, but it begs the question against strong lexical SFE to call this an ‘abrupt discontinuity’, since its proponents would deny that suffering and happiness are intuitively adjacent in value. 

However, the abrupt discontinuity objection may seem more troubling for positions that posit lexicality between different types of suffering: for any two types of suffering, we might think that we can construct a sequence of intermediate bads in which adjacent elements do not differ greatly in badness. In fact, this does not present a problem for lexical views, since it has been formally shown that there are forms of lexical priority (such as the asymptotic lexical priority introduced in 2.8) that don’t entail abrupt breaks between adjacent elements. 

However, a related challenge for threshold lexical views may be more difficult to overcome. Threshold lexical views hold that there is some magnitude of suffering such that it cannot be outweighed by any amount of happiness. This implies that there must be some magnitude of suffering just below the threshold that can be outweighed by a finite amount of happiness. But this results in another kind of abrupt break, since it implies that the marginal difference in moral value of a smallest imaginable increase in suffering at the near-threshold point is so great that even an infinite amount of happiness can’t make up for it. 

4.3 Theories that incorporate lexical priority generally face a trilemma of being either irrelevant to decision making, absurdly demanding, or paradox-generating. Are lexical versions of SFE vulnerable to this?

Lexical versions of SFE state that there are some types of suffering such that it is never permissible to bring them about in order to create happiness (or reduce less intense kinds of suffering), regardless of the latter’s magnitude. This creates a trilemma when probability is involved. Suppose an action has two possible outcomes A and B. The action will always bring about outcome A, and it will bring about outcome B with probability P. Outcome A is an arbitrarily large amount of some lesser good (whether happiness or reduction in less intense suffering), while outcome B is a given amount of suffering that has lexical priority. There are three options worth considering for lexical SFE, each of which has unattractive consequences:

  1. The action is always wrong as long as P > 0. However, this is likely to result in an absurdly demanding theory, since nearly all contingent propositions have a nonzero probability on one’s evidence at any given time.
  2. The action is wrong only if P = 1. However, this renders the theory toothless, since almost nothing has a probability of 1 on one’s evidence at any given time.
  3. There is some threshold of probability, above which the action is always wrong. However, this generates paradoxes when the action is repeatable. Specifically, when either action is considered individually (whether or not one does the other action), the theory implies that it is permissible, but when both actions are considered in conjunction, the theory implies that it is impermissible (for more detail, please see Huemer, 2010).

The objection contends that, since all three options appear unattractive, lexical versions of SFE must be rejected. 

Although traditional forms of lexical priority may be vulnerable to this objection, the asymptotic version of lexical SFE is not (see 2.8). Asymptotic lexical SFE states that the moral value of happiness (or other goods) can only add up to a finite amount but the moral disvalue of suffering is potentially infinite. Since the moral value of outcome A is finite, there will always be some magnitude of suffering such that the expected cost from outcome B outweighs the moral value of outcome A, as long as P > 0. It is, therefore, not toothless. Since the magnitude of suffering necessary for lexical priority is inversely related to risk, the theory also avoids being absurdly demanding or paradox-generating. Since asymptotic lexical SFE is also the best candidate for avoiding the objection from 4.2, we have strong reasons to think that it is currently the most plausible version of lexical SFE.
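The inverse relationship between risk and the suffering threshold can be sketched numerically. This is again a toy illustration of my own, under assumed linear disvalue for suffering and an assumed finite cap `V_MAX` on the lesser good's value; none of these names or numbers come from the source.

```python
import math

V_MAX = 100.0  # assumed finite cap on the moral value of the lesser good

def value_of_lesser_good(x: float) -> float:
    # Bounded: asymptotes to V_MAX no matter how large x is.
    return V_MAX * (1 - math.exp(-x / 50.0))

def suffering_threshold(p: float) -> float:
    # Smallest linear disvalue of suffering whose expected cost at
    # probability p exceeds the cap on the lesser good's value.
    return V_MAX / p

# For ANY nonzero probability p there is a finite magnitude of suffering
# that makes the risky action wrong; smaller p demands larger suffering.
for p in (0.5, 0.01, 1e-6):
    s = suffering_threshold(p) * 1.01  # just above the threshold
    assert p * s > value_of_lesser_good(1e12)
```

Because the threshold scales as `V_MAX / p`, the verdict is never vacuous (so the view is not toothless), yet tiny probabilities only prohibit actions that risk correspondingly enormous suffering (so it avoids absurd demandingness).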

4.4 Doesn’t SFE imply that we should seek the (painless) destruction of the world?

In short, SFE views do not generally have this implication. Pluralist versions of SFE like common sense SFE (as introduced in 2.12) would hold that we should not seek the destruction of the world, since they hold that many things in the world have positive intrinsic value. A soft procreation asymmetry, as discussed here, would generally not entail that extinction is good, unless the future bad lives outweighed the future good ones.

Negative utilitarians who accept antifrustrationism would hold that we have strong reasons against destroying the world painlessly, since this would create a large number of thwarted desires. For more on NU’s response to this objection, see the NU FAQ.

Lexical versions of SFE may advocate for the painless destruction of the world if it is the best way to end extreme forms of suffering which have lexical priority, but it is highly unlikely that this is actually the case. 

It is also notable that a similar objection can be made against standard versions of utilitarianism, since they would be in favor of killing everyone and replacing them with a marginally happier population. For an in-depth essay on SFE’s responses to the world destruction argument, see here.

4.5 Doesn’t SFE imply that death is good all things considered?

No version of SFE holds that death is always good. However, almost all versions of SFE would hold that death is good under some conditions (as do standard versions of utilitarianism). Pluralist versions of SFE like common sense SFE would not hold that death is good in most circumstances, since they accept that many things in life have positive intrinsic value. 

Threshold lexical versions of SFE will hold that death is bad for most people, since most people don’t experience extreme levels of suffering that have lexical priority.

Lastly, since death thwarts all of one’s desires, negative utilitarians who accept antifrustrationism would hold that death is strongly negative in most cases. However, antifrustrationist negative utilitarians may face more difficulty when death is likely to prevent more thwarted preferences on balance. This may be the case for young infants and temporarily depressed individuals, who don’t have many preferences in the present, but will likely have many preferences (which would foreseeably be thwarted) in the future.

For more on SFE’s implications on death and murder, see here.

4.6 Does SFE imply anti-natalism? Isn’t anti-natalism unacceptably unintuitive?

Anti-natalism is the view that it is always morally impermissible to procreate. Only very specific versions of SFE imply anti-natalism. For reasons similar to those covered in 4.5, neither common sense SFE nor threshold lexical SFE imply anti-natalism. 

Negative utilitarians are more likely to accept anti-natalism, since the suffering created by bringing someone into existence likely outweighs the suffering that potential parents will endure by not having children.  

Whether anti-natalism is unacceptable is a controversial topic. David Benatar is a well-known defender of the view. See Benatar’s book Better Never to Have Been for details on his view.

4.7 Doesn’t SFE imply that a life filled with joy is no better than a life that contains neither joy nor suffering?

Few versions of SFE have this implication. Pluralist versions of SFE like common sense SFE and standard lexical SFE would reject this claim since they hold that there is positive intrinsic value in joy. 

Although monist versions of SFE do imply this conclusion, some theorists have argued that this implication is not as counterintuitive as it first appears. Consider a person whose life contains neither joy nor suffering: from her own perspective, she is perfectly content. This means that, from her perspective, a life filled with joy is not better than hers. To hold that her life is worse than a life filled with joy is to privilege our own perspective over hers. Unless we have good reasons to do so, this is epistemically arbitrary. For more on this, see section 2.4 from Vinding (2020).

4.8 Don’t antifrustrationism and tranquilism imply that a life with 99% happiness and 1% suffering is not worth living? Isn’t this absurd?

It should first be noted that you don’t have to hold antifrustrationism or tranquilism to accept SFE. Pluralist versions of SFE reject these views. 

Whether or not antifrustrationism and tranquilism entail this conclusion depends on what you mean by ‘a life not worth living’. They do entail it if it means a life that is not worth creating, at least if we ignore the possibility that the person will prevent more suffering in other people. According to at least some philosophers, this is the correct conclusion to reach (see 4.6).

On the other hand, if you mean a life that is not worth continuing for its own sake, then antifrustrationism does not entail this conclusion (see 4.5). While tranquilism would say that such a life is not worth continuing for its own sake, see here for arguments that this is not as counterintuitive as it might appear. 

4.9 Empirical evidence shows that most people are happy. Isn’t this evidence against many versions of SFE?

SFE does not make any claims about the absolute quantity of happiness and suffering in people’s lives. However, practical versions of SFE do hinge on empirical facts about the relative ease of reducing suffering and increasing happiness. On this question, empirical evidence is on the side of SFE: it is likely that one can be much more effective by trying to reduce suffering than by trying to increase happiness in the world.

4.10 Doesn’t SFE conflict with the intuition that we should choose making a badly off person happy over reducing an already-happy person’s suffering?

Ord asks us to consider a choice between bestowing happiness on some unfortunate person whose life has been truly wretched, and helping someone who has been extremely fortunate and is very happy avoid minor suffering. If you think you ought to make the former choice, Ord suggests that this indicates that you support prioritarianism, sufficientarianism, or egalitarianism, rather than SFE.

Cases like this involve a lot of confounding factors. As mentioned in 4.1, happiness has a lot of instrumental value even for monist versions of SFE. The fact that giving a wretched person happiness generally also reduces a lot of suffering for him may explain our intuitions in this case.

More generally, this argument is misleading because it implies a false dichotomy between prioritarianism/sufficientarianism/egalitarianism and SFE. In fact, those who hold an SFE view are free to embrace a prioritarian/sufficientarian/egalitarian axiology: they can hold that suffering is more important than happiness, and that the welfare of people who are worse off matters more than the welfare of people who are better off.

4.11 We do interpersonal trade-offs in policy-making all the time. Doesn’t interpersonal SFE rule out such trades?

While interpersonal trade-offs are common in policy-making, most instances of such trade-offs are not simply trading some people’s suffering for other people’s happiness. Most often, policies that are intuitively permissible cause some amount of suffering in some to prevent more suffering in others. Even in those cases, many of us have the intuition that those harmed by the policy ought to be compensated.

We should also note that most consequentialist versions of interpersonal SFE don’t completely rule out interpersonal trade-offs between happiness and suffering. What we call standard lexical SFE only rules out trading extreme suffering in some for happiness in others. This seems to be in line with ordinary moral intuitions. Weak lexical SFE also allows interpersonal trade-offs; it just requires that the happiness gained be greater than the suffering caused. 

4.12 Isn’t weak SFE incoherent?

For weak SFE to be coherent, there needs to be a non-arbitrary and non-evaluative way to compare the magnitude of suffering and happiness. At present, it is admittedly not clear what the best way to do this is. 

A reductionist about suffering can compare suffering and happiness by the degree to which they are preferred or dispreferred. However, this strategy fails for interpersonal comparisons. Another strategy is to say a unit of suffering/happiness is a just noticeable difference. However, it is not clear if this really works for interpersonal comparisons either, since just noticeable differences might be different across individuals. The last strategy, suggested by Mayerfeld (1999), is to say that we can simply intuitively determine which specific instance of suffering is of roughly the same magnitude as some specific instance of happiness. This strategy might still be unsatisfactory, since it doesn’t tell us what makes it the case that some specific suffering is of the same magnitude as some specific happiness. It might also be the case that our intuitions are simply informed by our own preferences, in which case this strategy would share the same problems as the preference-based one.

Nevertheless, this difficulty is not uniquely faced by weak SFE. Any view that claims symmetry between happiness and suffering also depends on there being a non-arbitrary and non-evaluative way to compare the magnitude of suffering and happiness. If such comparisons cannot be made, then ascribing symmetry to the two types of mental states is just as incoherent as ascribing weak asymmetry.



5. Other questions

5.1 I often find it very depressing to deeply think about suffering-focused ethics. Is there a way to think about suffering-focused topics so that it is less depressing and more motivated by positive feelings?

As Magnus Vinding has answered here:

Research suggests that these meditation practices [i.e. compassion and loving-kindness meditation] not only increase compassionate responses to suffering, but that they also help to increase life satisfaction and reduce depressive symptoms for the practitioner, as well as to foster better coping mechanisms and increased positive affect in the face of suffering.

Other helpful resources can be found here.

5.2 What does SFE say about the suffering of non-human animals?

Although SFE does not commit someone to a particular view about the moral urgency of the suffering of non-human animals, it is natural for an SFE theorist to think that the suffering of non-human animals has special moral importance, just as the suffering of humans does. For more on this topic, see:

Brian Tomasik’s articles on The Importance of Wild-Animal Suffering and The Importance of Insect Suffering.

5.3 What are s-risks?

S-risks are risks of possible adverse events that would contain an astronomical amount of suffering, resulting in outcomes much worse than extinction. We think that the probabilities of such events are not negligible, and thus that s-risks should be considered a top priority for effective altruists. Good introductory articles on s-risks include:

Max Daniel’s EAG 2017 talk

Tobias Baumann’s S-risks: An introduction

5.4 Are there major factual disagreements between suffering-focused EAs and non-suffering-focused EAs?

Since SFE is a type of position in ethics, it is not inherently wedded to any particular empirical claim. Regarding the beliefs of actual suffering-focused EAs, anecdotal reports suggest that their empirical beliefs overlap to a high degree with those of non-suffering-focused EAs. 

5.5 What would an ideal world look like for suffering-focused ethicists?

David Pearce famously champions the idea of suffering eradication, where biotechnology would be used to completely eradicate suffering in all sentient beings. 

We should note, however, that it might not be practically optimal to aim directly for such best-case scenarios. Instead, it may be best to focus on preventing the worst-case scenarios (see 5.3).

5.6 I want to look into SFE more, where can I find further readings on SFE?

An SFE reading list compiled by Richard Ngo can be found here. A more extensive bibliography compiled by Max Daniel can be found here.

5.7 I am not convinced yet, but I want to know how much credence I should place in SFE.

One important factor to consider is that we are generally biased against suffering-focused views. Examples of such biases include wishful thinking, bias toward optimism, existence bias, etc. For more detail on biases against SFE, see chapter 7 of Vinding (2020).

5.8 What are some open research questions in SFE? 

We recommend going to this page on open research questions compiled by the Center for Reducing Suffering. 

5.9 What EA-aligned Organisations are working on suffering-focused issues?

Most EA-aligned organisations are working on cause areas that at least indirectly contribute to the reduction of suffering. Notable EA-aligned institutions that are directly associated with SFE include the Center for Reducing Suffering and the Center on Long-term Risk.

5.10 Which philosophers have endorsed suffering-focused ethical views?

It is notable that more moderate SFE positions like common sense SFE are arguably endorsed by most ethicists: few ethicists think that, in practice, our reasons to create happiness are as strong as our reasons to prevent suffering. Mainstream philosophers who have expressed support for more demanding versions of SFE like negative utilitarianism include Gustaf Arrhenius, Krister Bykvist, J. W. N. Watkins, Clark Wolf, Thomas Metzinger, Ingemar Hedenius, and Joseph Mendola. For more detail on this topic, see here.

5.11 If suffering-focused ethics is correct, what should the EA community do differently?

Given SFE, we argue that effective altruists should prioritize reducing s-risks. For more on s-risks, see 5.3.

5.12 What priority should EA attach to SFE research?

In general, fundamental values research has high priority. As argued by 80,000 Hours, this type of research can help us understand which cause areas have the highest priority and help discover new cause areas.

The priority of SFE research depends on 1) our credence in SFE, and 2) the extent to which SFE’s practical implications differ from those of popular ethical theories (within EA). As I’ve argued above, most ethically minded people likely hold some version of SFE (with practical versions of SFE being most popular, see 2.6). However, most popular versions of SFE also have practical implications that are already widely accepted (many of 80,000 Hours’ priorities already aim at preventing suffering), making them less urgent to study. In contrast, stronger versions of SFE have practical implications that are far from widely accepted. More research is thus needed to understand these implications in order to account for them in decision-making. It is also important to understand what common ground these positions share with popular ethical theories (for moral cooperation). 

5.13 What does SFE say about extinction risks?

As the NU FAQ points out, there are strong practical reasons for suffering-focused EAs to cooperate with non-suffering-focused EAs to reduce future risks such as uncontrolled AIs. This is not least because there is substantial overlap between s-risks and x-risks.

In theory, however, many suffering-focused ethicists will disagree with many longtermists about the relative moral urgency of extinction risks. For example, when faced with the choice between ending factory farming and increasing the number of future happy people by 10 trillion, many suffering-focused ethicists would choose the former, since they believe that extreme suffering has moral priority. Arguably, this better fits common-sense intuitions. If presented with an option to create 10 trillion happy people and 70 billion sentient beings that must endure some of the worst forms of torture, few people would take this option.


Acknowledgments: I thank Teo Ajantaival for his extremely helpful comments on this FAQ, and the users of this forum who suggested changes to this document.


22 comments

I strongly encourage you/everyone to not call "practical SFE" SFE. It's much better (analytically) to distinguish the value of causing happiness and preventing suffering from empirical considerations. Under your definition, if (say) utilitarianism is true, then SFE is true given certain empirical circumstances but not others. This is an undesirable definition. Anything called SFE should contain a suffering-focused ranking of possible worlds (for an SF theory of the good) or ranking of possible actions (for an SF theory of the right), not merely a contingent decision procedure. Otherwise the fact that someone accepts SFE is nearly meaningless; it does not imply that they would be willing to sacrifice happiness to prevent suffering, that they should be particularly concerned with S-risks, etc.

Practical SFE views . . . are compatible with a vast range of ethical theories. To adopt a practical SFE view, one just needs to believe that suffering has a particularly high practical priority.

This makes SFE describe the options available to us, rather than how to choose between those options. That is not what an ethical theory does. We could come up with a different term to describe the practical importance of preventing suffering at the margin, but I don't think it would be very useful: given an ethical theory, we should compare different specific possibilities rather than saying "preventing suffering tends to be higher-leverage now, so let's just focus on that." That is, "practical SFE" (roughly defined as the thesis that the best currently-available actions in our universe generally decrease expected suffering much more than they increase expected happiness) has quite weak implications: it does not imply that the best thing we can do involves preventing suffering; to get that implication, we would need to have the truth of "practical SFE" be a feature of each agent (and the options available to them) rather than the universe.

Edit: there are multiple suffering-related ethical questions we could ask. One is "what ought we—humans in 2021 in our particular circumstances—to do?" Another is "what is good, and what is right?" The second question is more general (we can plug empirical facts into an answer to the second to get an answer to the first), more important, and more interesting, so I want an ethical theory to answer it.

I strongly agree with this. I've had lots of frustrating conversations with SFE-sympathetic people that slide back and forth between ethical and empirical claims about the world, and I think it's quite important to carefully distinguish between the two.

The whole "practical SFE" thing also seems to contradict this statement early in the OP:

1.2 Does SFE assume that there is more suffering than happiness in most people’s lives?

No, SFE’s core claim is that reducing suffering is more morally important than increasing happiness. This normative claim does not hinge on the empirical quantity of suffering and happiness in most people’s lives. 

This may be true for some diehard suffering-focused EAs, but in my practical experience many  people adduce arguments like this to explain why they are sympathetic to SFE. This is quite frustrating, since AFAICT (and perhaps the author would agree) these contingent facts have absolutely no bearing on whether e.g. total utilitarianism is true.

Yeah, "downside-focused" is probably a better term for this.

I think prioritarianism and sufficientarianism are particularly likely to prioritize suffering, though, and being able to talk about this is useful, but maybe we should just say they are more suffering-focused than classical utilitarianism, not that they are suffering-focused or practical SFE.

Strong upvote. Most people who identify with SFE I have encountered seem to subscribe to the practical interpretation. The core writings I have read (e.g. much of Gloor & Mannino's or Vinding's stuff) tend to make normative claims but mostly support them using interpretations of reality that do not at all match mine.  I would be very happy if we found a way to avoid confusing personal best guesses with metaphysical truth. 

Also, as a result of this deconfusion, I would expect there to  be very few to no decision-relevant cases of divergence between "practically SFE" people and others, if all of them subscribe to some form of longtermism or suspect that there's other life in the universe.

I didn't vote on your comment, but I think you made some pretty important claims (critical of SFE-related work) with little explanation or substantiation:

The core writings I have read (e.g. much of Gloor & Mannino's or Vinding's stuff) tend to make normative claims but mostly support them using interpretations of reality that do not at all match mine.  I would be very happy if we found a way to avoid confusing personal best guesses with metaphysical truth. 

What interpretations are you referring to? When are personal best guesses and metaphysical truth confused?

very few to no decision-relevant cases of divergence between "practically SFE" people and others

Do you mean between "practically SFE" people and people who are neither "practically SFE" nor SFE?

Also, as a result of this deconfusion, I would expect there to  be very few to no decision-relevant cases of divergence between "practically SFE" people and others, if all of them subscribe to some form of longtermism or suspect that there's other life in the universe.

What do you mean? People working specifically to prevent suffering could be called "practically SFE" using the definition here. This includes people working in animal welfare pretty generally, and many of these do not hold principled SFE views. I think there are at least a few people working on s-risks who don't hold principled SFE views, e.g. some people working at or collaborating with the Center on Long-Term Risk (s-risk-focused AI safety) or Sentience Institute (s-risk-focused moral circle expansion; I think Jacy is a classical utilitarian).

Why is suspecting that there's other life in the universe relevant? And do you mean the accessible/observable universe?

(I've edited this comment a bunch for wording and clarity.)

Thank you (and an anonymous contributor) very much for this!

you made some pretty important claims (critical of SFE-related work) with little explanation or substantiation

If that's what's causing downvotes in and of itself, I would want to caution people against it - that's how we end up in a bubble.

What interpretations are you referring to? When are personal best guesses and metaphysical truth confused?

E.g. in his book on SFE, Vinding regularly cites people's subjective accounts of reality in support of SFE at the normative level. He acknowledges that each individual has a limited dataset and biased cognition but instead of simply sharing his and the perspective of others, he immediately jumps to normative conclusions. I take issue with that, see below.

 Do you mean between "practically SFE" people and people who are neither "practically SFE" nor SFE?

Between "SFE(-ish) people" and "non-SFE people", indeed.

What do you mean [by "as a result of this deconfusion ..."]?

I mean that, if you assume a broadly longtermist stance, no matter your ethical theory, you should be most worried about humanity not continuing to exist, because life might exist elsewhere and we're still the most capable species known, so we might be able to help currently unknown moral patients (either far away from us in space or in time).

So in the end, you'll want to push humanity's development as robustly as possible to maximize the chances of future good/minimize the chances of future harm. It then seems a question of empirics, or rather epistemics, not ethics, which projects to give which amount of resources to. 

In practice, we practically never face decisions where we would be sufficiently certain about the possible results to have choices dominated by our ethics. We need collective authoring of decisions and, given moral uncertainty, this decentralized computation seems to hinge on a robust synthesis of points of view. I don't see a need to appeal to normative theories.

Does that make sense?

I mean that, if you assume a broadly longtermist stance, no matter your ethical theory, you should be most worried about humanity not continuing to exist because life might exist elsewhere and we're still the most capable species known, so we might be able to help currently unkown moral patients (either far away from us in space or in time).

So in the end, you'll want to push humanity's development as robustly as possible to maximize the chances of future good/minimize the chances of future harm. It then seems a question of empirics, or rather epistemics, not ethics, which projects to give which amount of resources to.


  1. Why do you think SFE(-ish) people would expect working to prevent extinction to be the best way to reduce s-risks, including better than s-risk prioritization research, s-risk capacity building, incidental s-risk-focused work, and agential s-risk-focused work?
  2. The argument for helping aliens depends on their distance from us over time. Under what kinds of credences do you think it dominates other forms of s-risk reduction? Do you think it's unreasonable to hold credences under which it doesn't dominate? Or that under most SFE(-ish) people's credences, it should dominate? Why?
  3. Why would they expect working to prevent extinction to prevent more harm/loss than it causes? It sounds like you're assuming away s-risks from conflicts and threats, or assuming that we'll prevent more of the harm from these (and other s-risks) than we'll cause as we advance technologically, expand and interact with others. Why should we expect this? Getting closer to aliens may increase s-risk overall. EDIT: Also, it assumes that our descendants won't be the ones spreading suffering.

E.g. in his book on SFE, Vinding regularly cites people's subjective accounts of reality in support of SFE at the normative level. He acknowledges that each individual has a limited dataset and biased cognition but instead of simply sharing his and the perspective of others, he immediately jumps to normative conclusions. I take issue with that, see below.


I have two interpretations of what you mean:

  1. What should he have done before getting to normative conclusions? Or do you mean he shouldn't discuss what normative conclusions (SFE views) would follow if we believed these accounts? Should he use more careful language? Give a more balanced discussion of SFE (including more criticism/objections, contrary accounts) rather than primarily defend and motivate it? I think it makes sense to discuss the normative conclusions that come from believing accounts that support SFE in a book on SFE (one at a time, in different combinations, not necessarily all of them simultaneously, for more robust conclusions).
  2. Since you say "see below" and the rest of your comment is about what SFE(-ish) people should do (reduce extinction risk to help aliens later), do you mean specifically that such a book shouldn't both motivate/defend SFE and make practical recommendations about what to do given SFE, so that these should be done separately?

Intrigued by which part of my comment it is that seems to be dividing reactions. Feel free to PM me with a low effort explanation. If you want to make it anonymous, drop it here.

It's good, I think, for some sort of public document like this to exist on the Forum.

That said, I'd prefer if it distinguished more carefully between the parts that aim to describe SFE, and those parts that aim to defend it. For example, section 2 seems mostly descriptive, but then goes on to say (to paraphrase) that the relationship between antifrustrationism and preference-satisfaction theory is that the former is better.

(As described elsewhere, I also really don't like the whole "practical SFE" thing, and think adding that sort of term to the discourse is unhelpful.)

(Disclaimer: I'm unsympathetic to SFE, which may make my comments here come across as a bit more adversarial than I'd really endorse.)

FYI/FWIW -- links are going to an external google doc (at least the first one) not to the internal parts of this post

Also, you don't need to insert the table of contents into the post itself because the EA Forum automatically generates a TOC in the sidebar.

The moral asymmetry is most intuitively compelling when it is interpersonal. Most of us judge that it is wrong to make a person suffer even if it would make another person happy, or trade the intense suffering of a single person for the mild enjoyment of a large crowd, however large the crowd is.

Furthermore, these thought experiments would be much less compelling had they been reversed. It does not seem obviously wrong to reduce a person’s happiness to prevent someone’s suffering. Neither does it seem wrong to prevent intense pleasure for a single person in order to stop a large number of people’s mild suffering. This suggests that the intuitive force behind these thought experiments is driven by an asymmetry between suffering and happiness, rather than a moral prohibition against instrumentalization.

These are important points that I think often get missed in discussions of SFE - thanks for including them!

You mention the Repugnant Conclusion (I'd prefer to call it the Mere Addition Paradox for neutrality, though I'm guilty of not always doing this) as something that SFE escapes. I think this depends on the formulation, though in my estimation the form of RC that SFE endorses is really not so problematic, as many non-SFE longtermists seem to agree. The Very Repugnant Conclusion (also not the most neutral name :)) also strikes me as far worse and worth more attention in population ethics discourse, much as SFE has its own counterintuitive implications that make me put some weight on other views.

Including Arrhenius and Bykvist as examples of supporting negative utilitarianism might be a bit misleading. In Knutsson's sources, they do claim to put more weight on suffering than happiness, but I think that when most people use the term "negative utilitarianism" they mean more than this, something like the set of views that hold at least some forms of suffering cannot be morally outweighed by any happiness / other purported goods. In the context of at least Arrhenius's other writings (I'm less familiar with Bykvist), as I understand them, he doesn't fall in that group. Though, Arrhenius did propose the VRC as an important population ethics problem, and that seems to afflict most non-NU views.

What is the SFE response to the following point, which is mostly made by Carl Shulman here? A pain/pleasure asymmetry would be really weird in the technological limit (Occam’s razor). It makes sense that evolution would develop downside-skewed nervous systems when you think about the kinds of events that can occur in the evolutionary environment (e.g. death, sex) and the delta “reproductive fitness points” they incur: the worst single things that can happen to you, as a coincidental fact about evolution, the evolutionary environment, and what kinds of "algorithms" are simple for your nervous system to develop, are way worse from evolution's perspective than the best single things that can happen to you. But our nervous systems aren’t much evidence of the technological possibilities of the far future.

My response is that my own SFE intuitions don't rely on comparing the worst things people can practically experience with the best things we can practically experience. I see an asymmetry even when comparing roughly equal intensities, difficult though it is to define that, or when the intensity of suffering seems smaller than the happiness. To me it really does seem morally far worse to give someone a headache for a day than to temporarily turn them into a P-zombie on their wedding day. "Far worse" doesn't quite express it - I think the difference is qualitative, i.e. it doesn't appear to be a problem for a person not to be experiencing more intensely happy versions of suffering-free states.

I think Shulman's argument does give a prima facie cause for suspicion of suffering-focused intuitions, and it's a reason for some optimism about the empirical distribution of happiness and suffering. (Whether that's really comforting depends on your thoughts on complexity of value.) But it's not overwhelming as a normative argument, and I think the "asymmetry is a priori weird" argument only works against forms of "weak NU" (all suffering is commensurate with happiness, just not at a 1:1 ratio).

Could you try to expand upon what you mean by “equal intensities”?

Basically the same thing other people mean when they use that term in discussions about the ethics of happiness and suffering. I introspect that different valenced experiences have different subjective strengths; without any (moral) value judgments, it seems not very controversial to say the experience of a stubbed toe is less intense than that of a depressive episode, and that of a tasty snack is less intense than that of a party with close friends. It seems intuitive to compare the intensities of happy and suffering experiences, at least approximately.

The details of these comparisons are controversial, to be sure. But I don't think it's a confused concept, and if we didn't have a notion of equal intensities, non-SFE views wouldn't have recourse to the criticism that SFE involves a strange asymmetry.

I feel like my question wasn't answered. For instance, Carl suggests using units such that when a painful experience and pleasurable experience are said to be of "equal intensity" then you are morally indifferent between the two experiences. This seems like a super useful way to define the units (the units can then be directly used in decision calculus). Using this kind of definition, you can then try to answer for yourself things like "do I think a day-long headache is more units of pain than a wedding day is units of pleasure?" or "do I think in the technological limit, creating 1 unit of pain will be easier than creating 1 unit of pleasure?"

What I meant by my original question was: do you have an alternative definition of what it means for pain/pleasure experiences to be of "equal intensity" that is analogous to this one?

> Carl suggests using units such that when a painful experience and pleasurable experience are said to be of "equal intensity" then you are morally indifferent between the two experiences

I think that's confusing and non-standard. If your definition of intensities is itself a normative judgment, how do you even define classical utilitarianism versus suffering-focused versions? (Edit: after re-reading Carl's post I see he proposes a way to define this in terms of energy. But my impression is still that the way I'm using "intensity," as non-normative, is pretty common and useful.)

> What I meant by my original question was: do you have an alternative definition of what it means for pain/pleasure experiences to be of "equal intensity" that is analogous to this one?

Analogous in what way? The point of my alternative definition is to provide a non-normative currency so that we can meaningfully ask what the normative ratios are (what David Althaus calls N-ratios here). So I guess I just reject the premise that an analogous definition would be useful.
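To make the distinction concrete, here is a rough sketch in notation (my own, not taken from Althaus's post). Suppose \(I(e)\) is a non-normative intensity measure over experiences, \(H\) the set of happy experiences, and \(S\) the set of suffering experiences in an outcome. An N-ratio \(N\) is then the weight a view attaches to suffering relative to equally intense happiness:

```latex
V \;=\; \sum_{h \in H} I(h) \;-\; N \sum_{s \in S} I(s)
```

Classical utilitarianism corresponds to \(N = 1\); weak NU to some finite \(N > 1\) (suffering is still commensurable with happiness, just weighted more heavily); lexical forms of SFE to the claim that no finite \(N\) suffices, because sufficiently intense suffering outweighs any amount of happiness. The point is that this question about \(N\) can only be posed at all if \(I\) is defined non-normatively; if "equal intensity" is itself defined by moral indifference, \(N = 1\) holds by stipulation.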

ETA: If it helps to interpret my original response, I think you can substitute (up to some unit conversion) energy for intensity. In other words, my SFE intuitions aren't derived from a comparison of suffering experiences that require a lot of energy with happy experiences that don't require much energy. I see an asymmetry even when the experiences seem to be energetically equivalent. I don't know enough neuroscience to say if my intuitions about energetic equivalence are accurate, but it seems to beg the question against SFE to assume that even the highest-energy happy experiences that humans currently experience involve less energy than a headache. (Not saying you're necessarily assuming that, but I don't see how Carl's argument would go through without something like that.)

I think I meant analogous in the sense that statements involving the defined term would translate clearly into statements about how to make decisions.

Thanks for this, I found section 4 in particular useful.

"A life worth living is standardly understood as a life that contains more suffering than happiness." Not quite!