
Summary

Among EAs, it has become standard, or even obligatory, to “buy” Longtermism. As I understand it, Longtermism rests primarily on a combination of two predictions: (1) that the future will contain an enormous number of moral subjects, and (2) that the gradual improvement of technology will make the lives of those subjects overwhelmingly “net-positive.” From these two predictions, we can infer that safeguarding humanity’s (potentially) infinite future will produce an unthinkable amount of welfare, for which we should be prepared to make significant sacrifices. However, I believe that EAs who do not subscribe to “total welfarist utilitarianism” are not obligated to find this future uniquely motivating. Furthermore, I believe that adopting a “negative utilitarian” position has significant consequences for funding allocation and career decisions, and that it deserves more serious consideration by the movement.

 

What is Negative Utilitarianism?

As I understand it, consequentialism is the practice of judging an action by examining its consequences. Utilitarianism is a subset of consequentialism, and refers to the practice of judging an action’s consequences by two metrics:

 

  1. Whether it maximizes “pleasure,” or positive mental states, and
  2. Whether it minimizes “pain,” or negative mental states.

 

A “total welfarist utilitarian” is someone who uses both of those metrics at once. A “negative utilitarian,” however, treats the two injunctions as separate and unequal in importance. Under this worldview, a given action should be evaluated first and foremost on whether it causes suffering; preventing suffering is a “strong” responsibility, while maximizing pleasure is only a secondary, “weak” responsibility.
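
To make this contrast concrete, here is a rough formalization of my own (the symbols are purely illustrative: $P$ for the total pleasure in an outcome, $S$ for the total suffering, and $k$ for a weighting factor; nothing in the negative-utilitarian literature is committed to these exact forms). A total welfarist ranks outcomes by a single net sum, while a negative utilitarian who treats the avoidance of suffering as merely a stronger duty weights $S$ far more heavily:

$$U_{\text{total}} = P - S \qquad\qquad U_{\text{negative}} = P - kS, \quad k \gg 1$$

On the fully “strong,” lexical reading, outcome $A$ is preferred to outcome $B$ just in case $S_A < S_B$, or $S_A = S_B$ and $P_A > P_B$: pleasure only ever breaks ties between equally painful outcomes.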

A central tenet of negative utilitarianism, for me, is the recognition that “pleasure” and “pain” are not perfect opposites. There is a wide variety of “pleasurable” states, which blur into one another and are often defined only in contrast to periods of discomfort. All of these states, negative, neutral, and positive alike, are immediately swept away by severe mental or physical pain, which is unmistakable and all-consuming. This may have to do with human biology, which is primarily concerned with survival. But whatever the reason, it seems both ethically and empirically more straightforward to minimize “pain” than to maximize “pleasure.”

One way to see the difference in kind between pleasure and pain is to notice that relieving pain is typically the business of expensive, sophisticated science: medicine, psychiatry, therapy. By contrast, it is philosophy and literature that deal with the delicate rhythms of positive mental states: how they come and go, what they should look like, and what they actually look like. It is not that happiness is illusory or insignificant, only that it is awkward to produce at scale; there are no emergency rooms for happiness.

Another way to see this is to ask yourself what trade-offs you would be willing to make in your own life. Many people will accept small amounts of “pain” in exchange for a future reward. However, there are certain kinds of suffering for which no amount of later “happiness” seems adequate compensation. What reward could I offer you in exchange for the deaths of your parents? Or of your children? Or for a month of continuous physical torture? Would any number of free cookies or houses or yachts suffice? Your “happiness” would itself be destroyed by the trauma; happiness is simply not the same kind of commodity.

Even if the pain were assigned to someone else, most of us would feel a profound discomfort (see Bob Fischer’s “Tortured Tim”). It is one thing to reduce some people’s suffering rather than others’, as when we supply bednets to one country rather than another. But it feels like an intuitively different activity to deliberately inflict pain in order to make an already happy person happier.

At this point, you might be tempted to imagine something like a “happiness machine,” which could dial a person’s happiness up to such unbelievable levels that it completely counterbalances someone else’s suffering. But no such machine currently exists, and there is as yet no evidence that one could. A negative utilitarian would take this as a reflection of the moral status of happiness: it is simply not as “important” as pain.

 

Longtermism and Cluelessness

Longtermism is a subset of utilitarianism that emphasizes the importance of potential future lives. According to Longtermism, an action should be judged based on two principles:

 

  1. Whether it maximizes “pleasure” for the inhabitants of the long-term future, and
  2. Whether it minimizes “pain” for the inhabitants of the long-term future.

 

As many EAs have observed, Longtermism is recurrently troubled by the problem of “cluelessness”—that is, our inability to observe the consequences of our actions in the long-term future. This is a serious objection. In fact, it almost seems to undermine our claim to being “consequentialists.”

However, I do not believe that cluelessness is nearly as damning for the first principle as it is for the second. 

EAs who work to prevent “x-risks” (that is, to safeguard the long-term happiness of infinite human generations) are usually trying to prevent disasters that would strike humans living in the near term. Global pandemics are relatively common, and researchers can determine with a high degree of certainty how the modern-day global public health apparatus will operate during the next one. AI safety researchers can monitor the progress of current models and negotiate with regulatory bodies in real time. From conversations I have had with AI researchers, it seems very likely that the alignment problem will either be solved, or left fatally unsolved, within our lifetimes. The “x-risk” field is therefore less plagued by clueless forecasts about the infinite future.

Reducing “s-risks” is trickier. Preserving Western democracy might benefit humans now, but what about humans in one billion years? Would those humans want us to settle Mars, or to incite a socialist revolution? Trying to improve the welfare of future people seems significantly more clueless than simply ensuring their existence (by protecting people in the present).

And devoting one’s career solely to long-term s-risks looks especially strange next to the massive sources of suffering unfolding today, whether on factory farms or in the world’s poorest countries. For all we know, ending factory farming or global poverty now may genuinely be the single best thing we can do for the long-term future, quite apart from the enormous number of beings it would help immediately. For a negative utilitarian, then, whose only longtermist concern is s-risks, Longtermism may not be a very useful paradigm.

 

Conclusion

By reassigning funding and career capital towards x-risks, EA is trading the relief of present-day suffering for anticipated future welfare. This is a philosophically significant decision, and one that is not necessarily required by utilitarianism in general. A negative utilitarian, who does not believe in the overriding importance of increasing positive welfare (whether in the short or long term), is not obligated to sacrifice those suffering today in favor of anticipated happiness; such a program looks dubious in light of the emotional structure of “pleasure” and its moral status relative to pain. It does seem reasonable that a negative utilitarian should be concerned with s-risks. However, I believe that our cluelessness about the long-term future presents a significant problem for someone concerned solely with s-risks. In light of these considerations, I do not think an emphasis on Longtermism is especially productive for a negative utilitarian. Furthermore, if EA wants to commit itself fully to Longtermism, I think it must first stop treating total welfarism as the default and give negative utilitarianism its due as an alternative.


Comments

I agree with your title, but I don't think negative utilitarianism is the answer. I like Toby Ord's essay on this, "Why I'm Not a Negative Utilitarian": https://www.amirrorclear.net/academic/ideas/negative-utilitarianism/ 

On your argument about tradeoffs, people make choices all the time where they accept some very small risk of some very severe suffering in order to increase their happiness by a modest amount. For example: cycling along a busy road to visit their friend. If you say that no amount of happiness can make up for the trauma of being involved in a serious accident, then it seems like you are forced to say that this choice is wrong. That seems like a strange conclusion to me.
