Negative utilitarianism has some traction among effective altruists. This is, in my view, a shame, given that it is false. I shall spell out why I hold this view.
The most basic version of negative utilitarianism, which says that only the avoidance of pain is morally relevant, is trivially false: preventing a pinprick is less valuable than bringing about a googolplex of utils. However, this view is not widely believed and thus not particularly worth discussing.
A more popular form of negative utilitarianism takes the form of Lexical Threshold views, according to which certain forms of suffering are so terrible that they cannot be outweighed by any amount of happiness. This view is defended by people like Simon Knutsson, Brian Tomasik, and others. My main objection to this view is that it falls prey to the sequencing objection. Suppose we believe that the badness of a horrific torture cannot be outweighed by any amount of happiness, while the badness of a mild headache can be outweighed by some amount of happiness. It follows that the torture must be worse than any number of mild headaches (or similar harms; headaches are just the example I picked): if some finite number of headaches were collectively as bad as the torture, then the finite amount of happiness sufficient to outweigh those headaches would also outweigh the torture, contradicting the lexical claim.
This view runs into a problem. There are surely some extreme headaches whose badness is, at least in theory, as great as that of brutal torture. Suppose the badness of one of these horrific headaches is 100,000 units of pain and that a mild headache involves 100 units of pain. Presumably 5 headaches of 99,999 units of pain each would be, in total, worse than 1 headache of 100,000 units. Likewise, 25 headaches of 99,998 units would be worse than 5 headaches of 99,999 units. We can keep decreasing the pain per person while increasing the number of people affected, until 1 headache of 100,000 units is found to be less bad than some vast number of headaches of 100 units (the sketch below walks through this chain). The Lexical Threshold Negative Utilitarian would therefore have to say that there is some threshold of pain below which no amount of pain can outweigh any amount of pain above the threshold, regardless of how many people experience it. This is deeply implausible: if the threshold is set at 10,000 units of pain, then 10^100^100 people each experiencing 9,999 units of pain would have to be judged preferable to one person experiencing 10,001 units.
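For concreteness, here is a minimal sketch of that chain in Python. The step sizes (multiply the number of sufferers by 5 while reducing each headache by 1 unit) are just the ones from the example above; nothing hangs on the exact numbers.

```python
# Walk the sequencing chain: at each step, five times as many people
# each suffer one unit less. By the presumption above, every step is a
# change for the worse, and transitivity carries the comparison all the
# way from one 100,000-unit headache to vastly many 100-unit ones.
count, pain = 1, 100_000          # start: one horrific headache
while pain > 100:                 # walk down to the mild 100-unit headache
    count *= 5                    # five times as many sufferers...
    pain -= 1                     # ...each in marginally less pain
print(f"ends with 5**{100_000 - 100} headaches "
      f"of {pain} units each")    # about 10**69_827 headaches
```

The point is just that a finite (if astronomically large) number of mild headaches sits at the end of a chain of individually plausible comparisons starting from the horrific headache.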
The negative utilitarian might object that there is no neat cutoff. However, this misunderstands the argument. If there is no neat cutoff, then every step in the sequence goes through: slightly less pain spread across far more people is always worse, in total, than slightly more pain concentrated in fewer people. By transitivity, the vast number of mild headaches ends up worse than the single horrific one, which is precisely what the lexical view denies.
The negative utilitarian might say that pain can't be neatly delineated into precise units. However, the precise units are only used to represent pain. It's very intuitive that very bad pain can be made gradually less bad until it's only a little bit bad: being scalded in boiling water can be made gradually less unpleasant by lowering the water's temperature until it's merely a slight inconvenience. This process requires the negative utilitarian to declare that at some point along the continuum a threshold has been crossed, such that no amount of what lies below the threshold can ever outweigh what lies above it.
Simon Knutsson responds to this basic objection, writing: "Third, perhaps Ord overlooks versions of Lexical Threshold NU, according to which the value of happiness grows less and less as the amount of happiness increases. For example, the value of happiness could have a ceiling, say 1 million value “units,” such that there is some suffering that the happiness could never counterbalance, e.g., when the disvalue of the suffering is 2 million disvalue units." However, the argument as I've laid it out shows that even the most extreme torture is only as bad as some large number of mild headaches. If so, it seems strange and ad hoc to say that no amount of happiness can outweigh the badness of a sufficiently large pile of mild headaches. Additionally, a similar sequencing argument can be run on the positive end: surely a googol units of happiness for one person and 999,999 units for another is better than 1,000,000 units for each of two people.
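To see why the ceiling looks ad hoc here, consider a crude formalization (my own illustrative simplification, not Knutsson's): cap each person's happiness at 1,000,000 value units before summing. The capped view then delivers exactly the verdict the positive-end sequencing argument exploits.

```python
# A crude per-person ceiling on the value of happiness, used only to
# illustrate the ceiling view Knutsson gestures at (his version has
# diminishing value approaching a ceiling rather than a hard cap).
def value(happiness: int) -> int:
    return min(happiness, 1_000_000)

googol = 10**100
print(value(googol) + value(999_999))        # 1,999,999
print(value(1_000_000) + value(1_000_000))   # 2,000,000
# The ceiling ranks two people at exactly a million units above one
# person with a googol units plus another at 999,999, which is the
# verdict the sequencing argument above finds absurd.
```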
The main argument given for negative utilitarianism is the intuition that extreme suffering is very bad. When one considers what it's like to starve to death, it's hard to imagine how any amount of happiness can outweigh it. However, we shouldn't place very much stock in this argument for a few reasons.
First, it's perfectly compatible with positive utilitarianism (positive only in the sense of being non-negative, not in the sense of saying that only happiness matters) to hold that suffering is in general far more extreme than happiness. Given the way the world works right now, there is no way to experience as much happiness as the suffering one experiences when horrifically tortured. However, this does not imply that extreme suffering can never be counterbalanced, merely that it's very difficult to counterbalance. Nothing other than light travels at the speed of light, but that does not mean light speed is lexically separate from other speeds, such that no number of other speeds could ever add up to more than light speed. Additionally, transhumanism opens the possibility of extreme amounts of happiness, as great as the suffering of brutal torture.
Second, it's very hard to have an intuitive grasp of very big things; the human brain can't multiply very well. Thus, when one has an experience of immense misery, one might conclude that its badness can't be counterbalanced by anything, when in reality one is just perceiving that it's very bad. Much as people confuse the astronomically improbable with the impossible, people may have inaccurate mental maps and perceive extremely bad things as bad in ways that can't be counterbalanced.
Third, it would be very surprising a priori for suffering to be categorically more morally relevant than well-being. One can paint a coherent picture of enjoyable experiences being good and unenjoyable experiences being bad; it's hard to see why unenjoyable experiences would have the privileged status of being unweighable against positive ones.
I'd be interested in hearing replies from negative utilitarians to these objections.
As antimonyanthony noted, I think we have conflicting intuitions regarding these issues, and which intuitions we regard as most fundamental determines where we end up. Like antimonyanthony, I regard it as more obvious that it's wrong to allow a single person to be tortured in order to create a thousand new, extremely blissful people who didn't have to exist at all than that pleasure can outweigh a pinprick. In my own life I tend to act as though pleasure can outweigh a pinprick, but (1) I'm not sure I endorse this as the right thing to do; it might be an instance of akrasia, and (2) given that I already exist, I'd experience more than a pinprick's worth of suffering from missing out on certain positive experiences. If we're considering de novo pleasure that wouldn't appease any existing cravings, then my intuition that creating new pleasure can outweigh a pinprick isn't that strong to begin with.
I would probably create extra people to feel bliss if doing so caused literally no harm. But even if it only caused a moderate harm rather than torture, I'm not sure creating the bliss would be worth it. There's no need for the extra bliss to come into existence. The universe is fine without it. But the people who would be harmed, even if only moderately, in order to create that extra bliss would not be fine.
But I think this framing really favors views according to which pleasure can outweigh suffering, because most ethicists feel that pleasure can outweigh suffering within a given life, but many of them do not think it's right to harm one person for the greater benefit of another person. If instead of your "open individualism" standpoint we take the standpoint of "empty individualism", meaning that each moment of conscious experience is a different person from each other one, then it's no longer clearly okay to force significant suffering upon yourself for greater reward, at least if there are moments of suffering so bad that you temporarily regret having made that tradeoff. (If you never regret having made the tradeoff, then maybe it's fine, just like it may be fine for one person to voluntarily suffer for the greater benefit of someone else.)
One possible resolution of our conflicting intuitions on these matters could be a quite suffering-focused version of weak NU. We could hold that suffering, including torture, can in theory be outweighed by bliss, but that it would take an astronomical amount of bliss to do so. This view accepts that there could be enough happiness to outweigh a pinprick while also rejecting the seemingly cruel idea that one instance of torture could be outweighed by just a small number of transhumanly blissful experiences. Weak NUs who give enough weight to suffering tend, in practice, to act the same as strong NUs or lexical-threshold NUs: the expected amount of torture in the far future is not vastly smaller than the expected amount of transhuman-level bliss, so a sufficiently suffering-focused weak NU will still be extremely pessimistic about humanity's future and will still prioritize reducing s-risks.
Yeah, that's a fair position to hold. :) The main reason I reject it is that my motivation to prevent torture is stronger than my motivation to care about how my values might change if I were to experience that bliss. Right now I feel the bliss isn't that important, while torture is. I'd rather continue caring about the torture than allow my loyalty to those enduring horrible experiences to be compromised by starting to care about some new thing that I don't currently find very compelling.