
Note: I wrote this post before the FTX collapse led many people to question utilitarian principles, so it isn't a response to any of that, although I think it is relevant. It may be a bit more speculative than some people are comfortable with, and isn't a fully fleshed-out philosophical argument. But if you take the reality of conscious experience seriously, I think these conclusions are unavoidable. Most ideas here are heavily inspired by work from @algekalipso and QRI.

No conscious being can deny that suffering is real.

If somebody claims otherwise, I’ll bet they’re confusing it with pain. It’s true that pain usually leads to suffering, but it’s always possible to drug or meditate it away. This is because pain is just an intense feeling, while suffering is something else, a kind of discordance in consciousness. Consider emotional suffering, for example. A bad breakup or a loved one’s death doesn’t cause any physical pain, but it does cause suffering. It’s a real and unfortunate part of the universe, something that is undeniably bad.

The flip side of this though is that joy, happiness and love are real as well. They aren’t just a byproduct of physical pleasure or the evolutionary drive to reproduce, but things that really exist as part of reality. If anything is good, it’s these conscious experiences.

If we accept the reality of experience, it's hard to deny that the universe has value built in. The good must derive from reducing suffering and promoting bliss; every coherent conception of it must ultimately point to this fact. In other words, if we buy the self-evident fact that conscious valence is real, we get an ought from an is.


An ethical theory of everything

Given this strong argument for moral realism, I find it hard to imagine an ultimate ethical theory that isn't based on some form of utilitarianism. Deontological and virtue ethics may provide good tools for achieving good outcomes, but they don't get to the heart of the matter. Any coherent ethical theory must aim to attain a world-state with less suffering. This ultimately reduces to a form of utilitarianism based on measuring the quality of conscious experience, often referred to as valence utilitarianism.

Of course, things are more complicated than just this. It’s possible to be in both positive and negative valence states at the same time, or have to choose from many different types of positive experiences. Can we say that peace is always better than joy? Or that physical pain is worse than emotional pain? Deciding exactly how to formulate utilitarianism may get a bit tricky (as expertly explored by Andrés Gómez Emilsson on his blog) but that doesn’t mean that there isn’t a general direction along which we can move towards a better world. Creating more moments of joy or peace and fewer moments of suffering is always the right way to go.

In some cases, though, the details really do matter. As it is normally formalised, utilitarianism runs into a series of repugnant conclusions. But, if you look at them closely from the point of view of valence utilitarianism, some of them start to dissolve.

For example, take the rogue surgeon thought experiment. If you only care about maximising the number of living people, it could make sense for surgeons to go around kidnapping healthy people and butchering them for their organs, which can then be transplanted into terminal patients, ultimately saving more people than are killed. However, this doesn't take into account all the collateral effects caused by the fear and insecurity that this kind of practice would unleash on the general population, not to mention the violent deaths of the victims. If we are trying to maximise positive conscious valence, we need to take into account the fact that feeling secure has a very meaningful effect on the experience of our lives. The terror unleashed wouldn't balance out the gains, so this scenario wouldn't make sense under valence utilitarianism. Any modification to the thought experiment to get around these problems would require a fantastical level of secrecy, which wouldn't play out in reality.

There are many other ways people try to point out holes in utilitarianism, with thought experiments about experience machines and fictitious utility monsters. But if you look at the details of a real-world scenario from the perspective of valence utilitarianism, they can usually be explained away.

Dissolving the ultimate critique

However, we are still left with the original repugnant conclusion, introduced by Derek Parfit. It states that with a naïve formulation of utilitarianism, you can always conceive of a world filled with many lives barely worth living that has higher utility than another world filled with a modest number of extremely happy people. This seems like an unacceptable possibility. We would all rather be one of the few in the world of extreme bliss than one of the just-surviving masses.

This problem can be dissolved by valence utilitarianism as well, but you have to be willing to accept two claims about the way the universe is configured. The first is that the scale of enjoyment or suffering one can experience is likely logarithmic: the best (or worst) experience is many thousands of times better (or worse) than a mildly pleasant (or unpleasant) one, something that has been argued for convincingly by the Qualia Research Institute.

The other, likely more controversial, claim is that closed individualism — the common-sense belief that we are all metaphysically separate observers — is almost certainly false. Believing that we have real, separate and essential selves is a useful evolutionary tool, but it doesn't really stand up to scientific scrutiny or sustained introspection. This leaves us with open individualism, the belief that we are all one consciousness, or empty individualism, in which we are just a single moment of experience. Neither of the latter views reifies the existence of an individual self, and both allow us to sum across the experiences of many different conscious beings, regardless of whom they belong to.

Then, given logarithmic scales of valence and open or empty individualism, it’s always going to be easier to achieve a high utility world-state with a few beings enjoying spectacular experiences, rather than filling the universe with miserable people. This is especially true when you play out a realistic scenario, and take the resources available into account.
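To make the arithmetic concrete, here is a toy comparison. All the numbers are my own illustrative assumptions (not figures from the post or from QRI): suppose a peak experience is worth roughly a thousand times a life barely worth living. A modest population of blissful beings then outscores a vastly larger barely-surviving one.

```python
# Toy utility comparison under an assumed logarithmic valence scale.
# Every number below is an illustrative assumption, not an empirical claim.

PEAK_VALENCE = 1000.0  # assumed: a spectacular experience ~1000x a mild one
MILD_VALENCE = 1.0     # assumed: a life barely worth living

blissful_world = 10_000 * PEAK_VALENCE    # few beings, peak experiences
crowded_world = 1_000_000 * MILD_VALENCE  # many beings, barely positive lives

print(blissful_world)                  # 10000000.0
print(crowded_world)                   # 1000000.0
print(blissful_world > crowded_world)  # True
```

Of course, the comparison flips if the valence ratio is small enough relative to the population ratio; the point is only that steep (logarithmic-scale) differences in experience quality make the blissful world far easier to reach with limited resources.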


Repugnancy signals an error in reasoning

I actually think that the reason we find Parfit's repugnant conclusion so problematic is that our intuitions already grasp the above argument. We intuitively know that the best day of our lives was exponentially better than an average drizzly November afternoon. We also take for granted that the suffering of any being is equally bad, no matter whose it is. It's just that we're not used to thinking in logarithmic scales or outside of closed individualism. So, when we start trying to carry out rough utility calculations, we don't take these points into account.

In fact, it seems like all repugnant conclusions point to a problem with a particular formulation of utilitarianism applied to an unrealistic scenario, not utilitarianism itself. From the standpoint of valence utilitarianism, if a solution brings us to a feeling of repugnancy, it is itself inducing a negative conscious experience. Given that the people living in an imagined world-state are probably going to have fairly similar sensibilities to ours, if that world-state is unappealing to us, it would be unappealing to them. We can therefore never end up in a repugnant conclusion if we apply valence utilitarianism correctly.

Now, I realise it is always possible to push back against this with specially constructed thought experiments. But applying valence utilitarianism in realistic scenarios actually yields pretty reasonable outcomes. This is helped by the fact that the human nervous system is set up in a way that keeps us within quite a narrow range of conscious valence. This is especially true for people free from the struggle to survive, with enough resources to keep them comfortable. We may be able to enjoy a few peak experiences in our lives, but even the richest people in the world are firmly stuck in the human condition most of the time.

The practical application of utilitarianism in the real world is therefore probably best achieved through a sustainable and careful application of humanist values: slowly increasing the number of people living meaningful and secure lives. For this purpose, I don't disagree with many of the criticisms of misconceived versions of utilitarianism that don't pay enough attention to conscious experience. In most cases, the application of deontological and virtue ethics can work well.

The other reason we have to be cautious when following valence utilitarianism is that there's no way to measure conscious experience directly. You know it when you have it, but that's it. We should therefore be incredibly careful when designing a future utopia, given that we have no idea how much suffering we could unwittingly inflict. These kinds of considerations will likely come to a head as AI gets more powerful. The fact that we can never really be sure whether AIs are actually conscious should give us pause before we aim for a future dominated by AGIs.


Consciousness must come first

So, even though utilitarianism is probably true, we have to be very thoughtful about how we formulate and apply it. If we are too ideological and insensitive, we will end up in repugnancy. But if we are willing to take the reality of our conscious experience seriously, there should be a way to make it work. After all, it is the only thing we are sure of and the only thing that could possibly bring value. Many of the bad outcomes of history came from forgetting this fact and forcefully applying artificial and inhuman values.

These considerations are now more important than ever. As the future looks set to get quite crazy quite quickly, our moral intuitions will likely start breaking down. We therefore need a concrete ethical framework to know how we should respond to transhuman technologies, or the possibility of producing artificial consciousnesses. It is now incredibly important to get a strong understanding of the real nature of consciousness, but we also need a clear way of thinking about value. Valence utilitarianism is the only system that stands up to this task. If we don't want to lose ourselves in the coming future, we have to always bear one thing in mind: consciousness comes first.


Comments

As someone who is not a hedonistic utilitarian, most of the arguments in this post strike me as incredibly weak. For example it can certainly be argued, and I personally believe, that negative experiences are not bad in such a way that a world without them would be superior. Grief is unpleasant, but I would not prefer a world without grief. I realise that this is not itself an argument, but the possibility of dissent does undermine the idea that the elimination of suffering follows so obviously from its existence that it can violate the is-ought gap.

The post is filled with the same sort of logical leaps, where the author's beliefs "must" be true with no argument as to why. Most academic philosophers are not consequentialists. If you "find it hard to imagine an ultimate ethical theory that isn’t based on some form of utilitarianism" then you probably don't have a very strong understanding of normative ethics. 

I may be missing the argument in the post, and would welcome a clear restatement of the premises, but as far as I can tell there is no serious attempt to address criticisms or alternatives to hedonistic utilitarianism other than "if you thought about it hard enough, you'd agree with me".

edit: I hadn't read it before making this comment, but this other post from today seems to provide a much better answer to the central premise of this post than I would be able to provide. 


As someone who leans towards hedonistic utilitarianism, I would agree with this impression. It seemed like the post asserted that utilitarianism must be true and that alternative intuitions could be dismissed without any good corresponding argument.

I would also add that there are many different flavors of utilitarianism, and it's unclear which, if any, is the correct theory to hold. This podcast has a good breakdown of the possibilities.


If you "find it hard to imagine an ultimate ethical theory that isn’t based on some form of utilitarianism" then you probably don't have a very strong understanding of normative ethics. 

FWIW I'd like to take this opportunity to advertise my list of recommended readings about non-utilitarian normative ethics, which some utilitarians may find educational.

Maybe someone can write a similar list for metaethics.
