That's unclear. However, we often conclude that something is beneficial before we know the mechanism.
That's true of your current intuitions, but I care about what we would care about if we were fully rational and informed. If there were bliss so good that ten minutes of it would be worth ten minutes of horrific torture, then creating this bliss for ungodly numbers of sentient beings seems like quite an important ethical priority.
Yeah, to some degree I have egalitarian intuitions pre-reflection, along with some other small non-utilitarian intuitions.
I don't have the intuition that scientific discoveries are valuable independent of their use for sentient beings.
When I reflect on the nature of torture, it seems obvious that it's very bad. But I'm not sure how, by reflection on the experience alone, we can conclude that no amount of positive bliss could ever outweigh it. We literally can't conceive of how good transhuman bliss might be, and any attempt to add up trillions of minor positive experiences seems very vulnerable to scope neglect.
I get why that would appeal to a positive utilitarian, but I'm not sure why it would be relevant to a negative utilitarian's view. Also, we could make it so that this only applies to babies who died before turning two, so that they don't have sophisticated preferences about a net-positive QOL.
Hmm, this may just be a completely different intuition about suffering versus well-being. To me it seems obvious that an end-of-life pinprick for ungodly amounts of transhuman bliss would be worth it. Even after updating on the intuitions of negative utilitarians, I still conclude that the amount of future transhuman bliss would outweigh the suffering of the future.
Sidenote, I really enjoy your blog and have cited you a bunch in high school debate.
Well, I think I grasp the force of the initial intuition; I just abandon it upon reflection. I have a strong intuition that extreme suffering is very, very bad. I don't have the intuition that its badness can't be outweighed by anything else, regardless of what the other thing is.
Does your view accept lexicality for very similar welfare levels?
Okay. One question would be whether you share my intuitions in the case I posed to Brian Tomasik. For reference, here it is: "Hmm, this may be a case of divergent intuitions, but to me it seems very obvious that if we could make it so that at the end of people's lives they have an experience of unfathomable bliss right before death, containing more well-being than the sum total of all positive experiences that humans have experienced so far, at the cost of one pinprick, it would be extremely good to do so. In this case it avoids the objection that well-being is only desirable instrumentally, because this is a form of well-being that would otherwise not have even been considered. That seems far more obvious than any more specific claims about the amount of well-being needed to offset a unit of suffering, particularly because of the trickiness of intuitions dealing with very large numbers."