Negative utilitarianism has some traction among effective altruists. This is, in my view, a shame, given that the view is false. I shall spell out why I hold this view.
The most basic version of negative utilitarianism, which says that only the avoidance of pain is morally relevant, is trivially false. Preventing a pinprick is clearly less valuable than bringing about a googolplex of utils, yet this version of the view says only the former counts. However, this view is not widely believed and thus not particularly worth discussing.
A more popular form of negative utilitarianism takes the form of Lexical Threshold views, according to which certain forms of suffering are so terrible that they cannot be outweighed by any amount of happiness. This view is defended by Simon Knutsson, Brian Tomasik, and others. My main objection is that it falls prey to the sequencing objection. Suppose we believe that the badness of a horrific torture cannot be outweighed by any amount of happiness. Presumably we also believe that the badness of a mild headache can be outweighed by some amount of happiness. It follows that the badness of horrific torture can't be outweighed by any number of headaches (or similar harms; headaches are just the example I picked), since otherwise the happiness sufficient to outweigh those headaches would, by transitivity, also outweigh the torture.
This view runs into a problem. There are surely some extreme headaches that are, at least in theory, as bad as brutal torture. Suppose that one of these horrific headaches involves 100,000 units of pain and that a mild headache involves 100 units of pain. Presumably 5 headaches of 99,999 units each would be, in total, worse than 1 headache of 100,000 units. Likewise, 25 headaches of 99,998 units each would be worse than 5 headaches of 99,999 units. We can keep decreasing the pain per person while increasing the number of people affected, until the single 100,000-unit headache turns out to be less bad than some vast number of 100-unit headaches. To block this, the Lexical Threshold Negative Utilitarian has to say that there is some threshold of pain below which no amount of pain, experienced by any number of people, can outweigh any pain above the threshold. This is deeply implausible. If the threshold is set at 10,000 units of pain, then the view implies that 10^100^100 people experiencing 9,999 units of pain each would be preferable to one person experiencing 10,001 units.
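To make the structure of the chain concrete, here is a minimal numerical sketch. The 5x growth in sufferers per step and the one-unit reduction in pain per step are my own illustrative assumptions, not anything the argument depends on; all that matters is that each individual comparison is plausible and that "worse than" is transitive.

```python
import math

# A minimal numerical sketch of the chain described above.
# Assumptions (illustrative only): at each step, five times as many people
# suffer one unit less pain, each such step is worse in aggregate than the
# step before it, and "worse than" is transitive.

start_pain = 100_000   # pain units in the horrific headache
end_pain = 100         # pain units in a mild headache
multiplier = 5         # factor by which the number of sufferers grows per step

steps = start_pain - end_pain               # 99,900 individually plausible comparisons
exponent = steps * math.log10(multiplier)   # sufferers at the final step ~ 10 ** exponent

print(f"Comparisons in the chain: {steps}")
print(f"People with {end_pain}-unit headaches at the final step: roughly 10^{exponent:,.0f}")

# By transitivity, that vast population of mild headaches comes out collectively
# worse than the single 100,000-unit headache, so a lexical threshold would have
# to be crossed somewhere along a chain of steps that each look innocuous.
```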
The negative utilitarian might object that there is no neat cutoff. However, this misunderstands the argument. If there is no cutoff at all, then every step in the sequence stands: each larger group suffering marginally less pain is still worse in aggregate than the smaller group suffering marginally more, and by transitivity the vast number of mild headaches ends up worse than the single horrific one, which is exactly what the lexical view denies. And if the claim is merely that the cutoff is vague rather than precise, the view still has to hold that somewhere along the continuum a marginal reduction in pain makes the difference between "can never be outweighed" and "can be outweighed."
The negative utilitarian might also say that pain can't be neatly delineated into precise units. However, the units are only used to represent pain; nothing in the argument hangs on them. It's very intuitive that pain that is very bad can be made gradually less bad until it's reduced to being only a little bit bad: being scalded in boiling water can be made gradually less unpleasant by lowering the temperature of the water until it's reduced to a slight inconvenience. This process still requires the negative utilitarian to declare that at some point along the continuum they have passed a threshold, such that no amount of what lies below it can ever outweigh what lies above it.
Simon Knutsson responds to this basic objection by saying: "Third, perhaps Ord overlooks versions of Lexical Threshold NU, according to which the value of happiness grows less and less as the amount of happiness increases. For example, the value of happiness could have a ceiling, say 1 million value “units,” such that there is some suffering that the happiness could never counterbalance, e.g., when the disvalue of the suffering is 2 million disvalue units." However, the argument as I've laid it out shows that even the most extreme forms of torture are only as bad as a large enough number of headaches. If that's right, then it seems strange and ad hoc to say that no amount of happiness above 1 million units can outweigh the badness of what amounts to a collection of headaches. Additionally, a similar sequencing argument can be run on the positive end. Surely a googol units of happiness for one person plus 999,999 units for another is better than 1,000,000 units for each of two people.
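To make this worry concrete, here is a toy model. I'm representing the ceiling as a hard cap on the total value of happiness at 1,000,000 value units; that exact functional form is my own simplifying assumption, just one crude reading of Knutsson's suggestion, and a smoother asymptotic ceiling would behave almost the same way.

```python
CEILING = 1_000_000  # assumed hard cap on the total value of happiness (illustrative)

def capped_value(happiness_per_person):
    """Total value of happiness under the assumed hard ceiling."""
    return min(sum(happiness_per_person), CEILING)

googol = 10 ** 100

world_a = [googol, 999_999]        # one person with a googol units, one just shy of a million
world_b = [1_000_000, 1_000_000]   # two people with a million units each

print(capped_value(world_a))  # 1000000
print(capped_value(world_b))  # 1000000

# The ceiling view rates the two worlds equally (or nearly equally, on a smoother
# asymptotic version), even though world A intuitively contains vastly more happiness.
```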
The main argument given for negative utilitarianism is the intuition that extreme suffering is very bad. When one considers what it's like to starve to death, it's hard to imagine how any amount of happiness could outweigh it. However, we shouldn't place much stock in this argument, for a few reasons.
First, it's perfectly compatible with positive utilitarianism (positive only in the sense of being non-negative, not in the sense of saying that only happiness matters) to hold that suffering is in general far more extreme than happiness. Given the way the world works right now, there is no way to experience as much happiness as one experiences suffering when being horrifically tortured. However, this does not imply that extreme suffering can never be counterbalanced, merely that it is very difficult to counterbalance. Nothing other than light travels at the speed of light, but that does not mean light speed is lexically separate from other speeds, such that no combination of slower speeds could ever add up to more than light speed. Additionally, transhumanism opens up the possibility of amounts of happiness as extreme as the suffering of brutal torture.
Second, it's very hard to have an intuitive grasp of very big things. The human brain can't multiply very well. Thus, when one has an experience of immense misery, one might conclude that its badness can't be counterbalanced by anything, when in reality one is just perceiving that it's very bad. Much as people confuse the astronomically improbable with the impossible, people may have inaccurate mental maps and perceive extremely bad things as bad in ways that can't be counterbalanced.
Third, it would be very surprising a priori for suffering to be categorically more morally significant than well-being. The natural picture is that enjoyable experiences are good and unenjoyable experiences are bad. It's hard to see why unenjoyable experiences would have a privileged status that makes them unweighable against positive experiences.
I'd be interested in hearing replies from negative utilitarians to these objections.
Thanks for the replies. :)
If people knew in advance that this would happen, it would relieve a great deal of suffering during people's lives. People could be much less afraid of death because the very end of their lives would be so nice. I imagine that anxiety about death and pain near the end of life without hope of things getting better are some of the biggest sources of suffering in most people's entire lives, so the suffering reduction here could be quite nontrivial.
So I think we'd have to specify that no one would know about this other than the person to whom it suddenly happened. In that case it still seems like something most people would probably strongly prefer. That said, the intuition in favor of it gets weaker if we specify that someone else would have to endure a pinprick with no compensation in order to provide this joy to a different person. And my intuition in favor of doing that is weaker than my intuition against torturing one person to create happiness for other people. (This brings up the open vs empty individualism issue again, though.)
When astronomical quantities of happiness are involved, like one minute of torture to create a googol years of transhuman bliss, I begin to have some doubts about the anti-torture stance, in part because I don't want to give in to scope neglect. That's why I give some moral credence to strongly suffering-focused weak NU. That said, if I were personally facing this choice, I would still say: "No way. The bliss isn't worth a minute of torture." (If I were already in the throes of temptation after a taste of transhuman-level bliss, maybe I'd have a different opinion. Conversely, after the first few seconds of torture, I imagine many people might switch their opinions to saying they want the torture to stop no matter what.)
I agree, assuming we count their magnitudes the way that a typical classical utilitarian would. It's plausible that the expected happiness of the future as judged by a typical classical utilitarian could be a few times higher than expected suffering, maybe even an order of magnitude higher. (Relative to my moral values, it's obvious that the expected badness of the future will far outweigh the expected goodness -- except in cases where a posthuman future would prevent lots of suffering elsewhere in the multiverse, etc.)