I agree with your title, but I don't think negative utilitarianism is the answer. I like Toby Ord's essay on this, "Why I'm Not a Negative Utilitarian": https://www.amirrorclear.net/academic/ideas/negative-utilitarianism/
On your argument about tradeoffs, people make choices all the time where they accept some very small risk of some very severe suffering in order to increase their happiness by a modest amount. For example: cycling along a busy road to visit their friend. If you say that no amount of happiness can make up for the trauma of being involved in a serious accident, then it seems like you are forced to say that this choice is wrong. That seems like a strange conclusion to me.
Sorry for the very delayed reply to this. I meant to reply at the time and then it slipped my mind!
Yes, you've summarised my position perfectly, I like those diagrams!
I guess my deeper point was that I wasn't sure there was any meaningful way to say something like "X is twice as painful as Y" without defining it via choices among gambles or durations. You say for humans it seems real, but does it? I can definitely introspect and discover that X is more painful than Y, but I'm not sure I can introspect and discover that it is N times as painful. Where does that number come from?
Although as I was thinking more about how to justify this, I started thinking about other sensory experiences, like sound. Is it meaningful to say that "X feels twice as loud as Y", in a sense that doesn't have to line up with the intensity of the physical sound wave? And then I remembered my physics lessons from way back, and realised the answer might be yes. I was definitely taught that the reason we measure sound volume on a log scale (decibels) is that it lines up better with our sensory perception of it (you have to square the intensity of the sound wave in order to double the perceived loudness). But if this is true then it means there is some sense in which we can introspect and say "X sounds twice as loud as Y", even though the underlying sound wave might not be twice as intense. And if that is the case then maybe this should also be true for pain.
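(A rough sketch of the arithmetic I have in mind, under the assumption, which may not be empirically exact, that perceived loudness tracks the log of physical intensity: if loudness goes as $L \propto \log_{10}(I/I_0)$, then doubling $L$ means doubling the logarithm, which means squaring the intensity ratio $I/I_0$. So "square the intensity to double the loudness" is what falls out of a pure log law; whether real loudness perception follows an exact log law is a further empirical question.)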
I'm still very uncertain about this though. If I listened to different sounds and tried to place them on a numerical scale, I'm not really sure what it is that I'd actually be doing.
Thank you for your reply and clarification!
If the claim is that the gap between 'Disabling' and 'Excruciating' should be larger than the gap between 'Annoying' and 'Hurtful', then that makes sense to me, and seems interesting.
But it sounds like this wasn't a numerical scale to begin with? So this again just feels like a claim about how we should go about assigning numbers to those categories (if we need numbers), rather than a claim that pain unpleasantness is 'superlinear' in some objective sense?
Defining what a numerical score for pain means seems like a hard problem. From my perspective, it seems like it should be defined so that the being concerned would be indifferent between a day of 2*x and 2 days of x. I think this is the notion you are referring to as 'unpleasantness'. The question then for any other pain metric is just: "how well does it measure this?". I'm still not sure it makes sense to ask "How does pain intensity scale with unpleasantness?", since then we would first have to define a numerical scale for pain intensity in some different way, and I'm still not sure how we begin to do that?
I suppose there is another interesting complication here, which is that you could also try to define your pain scale in terms of preferences among gambles. For example, the pain scale should be defined so that a rational being is indifferent between a 100% chance of x and a 50% chance of 2*x. And then you're confronted with the question of whether this should give you the same answer as defining it in terms of preferences among durations. My feeling is that it should be the same (something about personal identity not being a 'further fact', and applying the standard utilitarian aggregation approach to person-moments rather than persons..?), but it would be interesting to explore points of view where those two potential scale definitions are different. That doesn't feel quite the same as 'intensity' vs 'unpleasantness' though. More like two different definitions of 'unpleasantness'.
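(To make the contrast concrete, in some notation I'm making up for the purpose: write $u(x)$ for the unpleasantness score assigned to a pain $x$. The duration-based definition calibrates the scale so that
$$t \text{ days of } x \;\sim\; t' \text{ days of } x' \iff t \cdot u(x) = t' \cdot u(x'),$$
while the gamble-based definition calibrates it so that
$$\text{a } p \text{ chance of } x \;\sim\; \text{a } p' \text{ chance of } x' \iff p \cdot u(x) = p' \cdot u(x').$$
Both pin down a cardinal scale, but nothing obviously forces the scale satisfying the first to also satisfy the second: a being could aggregate linearly over durations while being risk-averse over gambles, in which case the two definitions come apart. Treating probability-branches the same way as person-moments is roughly the 'no further fact' thought above.)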
I'm confused about what "superlinearity" is even supposed to mean here.
In the intro you distinguish "unpleasantness" and "intensity", and say that one grows superlinearly with the other, but how are these two things even defined to begin with? And what is the difference between them? Defining one scale for measuring pain is hard enough, but before we can evaluate this "superlinear" claim we first need to define two!
In the examples with humans, I can see what the claim is. There are at least two ways you could try to define a pain scale: (i) self-report on a scale of 1-10, and (ii) something that more consistently tracks actual preferences with respect to gambles or experiences of different durations, and in this example the claim is that (ii) grows superlinearly with (i).
But this just seems like a claim about the limitations of the self-report 1-10 scale, which is only relevant for humans (I think I'm probably agreeing with the summary of Bob Fischer's take here).
In the case of non-humans, it's not that I disagree, it's that I don't even understand what claim is being made.
If I understand right, the claim you're making here is that if I give £10 to a GiveWell charity, I cause Dustin Moskovitz to give £10 less to that GiveWell charity, and do something else with it instead. What else does he do with it?
The latter two possibilities seem surprising and important if true, and I'd be interested to hear more justification for them! Is there any evidence that this is really what happens?
Why do you expect it to be worse environmentally to order online?
If the alternative is driving, it seems much less efficient to have 10 people independently drive to the shop and back than to have one van deliver all their food in a single round trip.
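(Purely illustrative, with numbers I'm inventing for the comparison: if each of the 10 households lives 3 km from the shop, the separate car trips total $10 \times 2 \times 3 = 60$ km, whereas a single van round covering all 10 houses might plausibly come to 10-20 km. Even granting the van worse fuel economy per km, it seems hard for the delivery option to come out worse on this leg.)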
If the alternative is public transport, I guess it's less clear, but ordering online probably allows bigger shops in that case, which I'd guess would be more efficient again?
The only way I can see it clearly making things worse is if the alternative is walking to the shops. But even in that case, I'd still guess that the environmental costs of the products themselves matter much more than the environmental costs of transporting them (partly because that claim seems to be made a lot, and I think it must already factor in the cost of getting the food from the shop to your home as well!)
Maybe, although an election being tied is about the only way that particular example can be fuzzy, and there is a well-defined process for what happens in that situation (like flipping a coin). There is ultimately only one winner, and it is possible for a single vote to make the difference.
Whether an experience is painful or not is extremely unclear, but if your metric is just something like "number of animals killed for meat each year" then again that is something well defined and precise, and it must in principle be possible to change it with an individual purchase.
Ironically I might also be guilty of using some technical terminology incorrectly here!
I had in mind the discussion of valuing actions with imperceptible effects from the "Five Mistakes in Moral Mathematics" chapter in Reasons and Persons (relevant to all the examples mentioned in the IVT section of this post), where, if I remember right, Parfit makes an explicit comparison with the "paradox of the heap" (I think this is where I first came across the term).
It feels the same in that in both cases we have a function from natural numbers (number of grains of sand in our potential heap, or number of people voting/buying meat) to some other set (the boolean 'heap' vs 'not heap', or the winner of the election, or the number of animals harmed). And the point is that, mathematically, this function must at some point change with the addition of a single +1 to the input, or it can never change at all. Moreover, the expected values of lots of potential individual additions must sum to the expected value of all of them being applied together, so that if the collective has a large effect, the individual effects can't be smaller, on average, than the collective effect divided by the number of constituents.
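(A rough way to write that last step down, in my notation rather than Parfit's: let $f(k)$ be the outcome once $k$ of the $n$ potential acts have been performed, so the $k$-th act's marginal contribution is $f(k) - f(k-1)$. By linearity of expectation the sum telescopes:
$$\sum_{k=1}^{n} \mathbb{E}[f(k) - f(k-1)] = \mathbb{E}[f(n) - f(0)],$$
so the average expected individual effect is exactly the expected collective effect divided by $n$, and if the right-hand side is large the individual terms can't all be negligible.)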
I suppose the point is that this paradox is non-trivial and possibly unsolved when the output is fuzzy (like whether some grains of sand count as a heap or not), but the corresponding claim is trivially true when the output is precise and quantitative (like who wins an election or how many animals are harmed)?
I think this misunderstands what people mean when they compare arguments about the importance of AI safety to a Pascal's wager.
Pascal's wager refers to situations where a tiny probability of enormous value seemingly leads to ridiculous conclusions if you try to do naive expected value calculations with it. When people say that strong longtermism is a Pascal's wager, the "small probability" they are talking about is not the probability of extinction, which, as you point out, is significant. The small probability is the probability that the future will contain "septillions of future sapients". That is the probability that is small. And it gets even smaller if the probability of extinction soon is high! So a large probability of extinction this century makes the Pascal's wager comparison more relevant as a critique of strong longtermism, not less. It is multiplying this small probability by the value of those septillions of potential "sapients" that gives you the astronomical expected value that says existential risk reduction should almost automatically dominate our concerns.
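(To make the structure explicit with numbers I'm making up: suppose there's a 1% chance that the future contains $10^{24}$ sapients, and some intervention shifts extinction risk by one part in a billion. A naive expected value calculation says that's worth about $10^{24} \times 0.01 \times 10^{-9} = 10^{13}$ lives, enough to swamp almost any short-term consideration. That multiplication is the step the Pascal's wager comparison is aimed at.)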
I think you're completely right to point out that people should care a lot about things which might carry a 10% chance of causing human extinction, even ignoring their stance on longtermism. But some people believe that reducing existential risk has astronomically more value than just the impact extinction would have on the next few generations, and that therefore tiny changes in the probability of existential catastrophe almost automatically trump any other concern, however small those changes are. When people talk about Pascal's wager in the context of strong longtermism or AI safety, I think it is this claim that they are challenging, not the claim that we should care about extinction at all. And that criticism is just as valid, actually more valid, if the probability of extinction from AI is high (though I of course agree that if there are people who use the Pascal's wager argument to dismiss all work on AI risk, they are making a serious mistake).