No conscious being can deny that suffering is real.
If somebody claims otherwise, I’ll bet they’re confusing it with pain. It’s true that pain usually leads to suffering, but it’s always possible to drug or meditate it away. This is because pain is just an intense feeling, while suffering is something else, a kind of discordance in consciousness. Consider emotional suffering, for example. A bad breakup or a loved one’s death doesn’t cause any physical pain, but it does cause suffering. It’s a real and unfortunate part of the universe, something that is undeniably bad.
The flip side of this, though, is that joy, happiness, and love are real as well. They aren’t just byproducts of physical pleasure or the evolutionary drive to reproduce, but things that really exist as part of reality. If anything is good, it’s these conscious experiences.
If we accept the reality of experience, it’s hard to deny that the universe has value built in. The good must derive from reducing suffering and promoting bliss; every coherent conception of it must ultimately point to this fact. If we buy the self-evident fact that conscious valence is real, we get an ought from an is.
An ethical theory of everything
Given this strong argument for moral realism, I find it hard to imagine an ultimate ethical theory that isn’t based on some form of utilitarianism. Deontological and virtue ethics may provide good tools for achieving good outcomes, but they don’t get to the heart of the matter. Any coherent ethical theory must aim to attain a world-state with less suffering. This ultimately reduces to a form of utilitarianism that is based on measuring the quality of conscious experience, often referred to as valence utilitarianism.1
Of course, things are more complicated than just this. It’s possible to be in both positive and negative valence states at the same time, or have to choose from many different types of positive experiences. Can we say that peace is always better than joy? Or that physical pain is worse than emotional pain? Deciding exactly how to formulate utilitarianism may get a bit tricky (as expertly explored by Andrés Gómez Emilsson on his blog) but that doesn’t mean that there isn’t a general direction along which we can move towards a better world. Creating more moments of joy or peace and fewer moments of suffering is always the right way to go.
In some cases, though, the details really do matter. As it is normally formalised, utilitarianism runs into a series of repugnant conclusions. But, if you look at them closely from the point of view of valence utilitarianism, some of them start to dissolve.
For example, take the rogue surgeon thought experiment. If you only care about maximising the number of living people, it could make sense for surgeons to go around kidnapping healthy people and butchering them for their organs, which can then be transplanted into terminal patients, ultimately saving more people than are killed. However, this doesn’t take into account all the collateral effects caused by the fear and insecurity that this kind of practice would unleash on the general population, not to mention the violent deaths of the victims. If we are trying to maximise positive conscious valence, we need to take into account the fact that feeling secure has a very meaningful effect on the experience of our lives. The terror unleashed wouldn’t balance out the gains, so this scenario wouldn’t make sense under valence utilitarianism. Any modification to the thought experiment to get around these problems would require a fantastical level of secrecy, which wouldn’t play out in reality.
There are many other ways people try to point out holes in utilitarianism, with thought experiments about experience machines and fictitious utility monsters. But if you look at the details of a real world scenario from the perspective of valence utilitarianism, they can usually be explained away.2
Dissolving the ultimate critique
However, we are still left with the original repugnant conclusion, introduced by Derek Parfit. It states that with a naïve formulation of utilitarianism, you can always conceive of a world filled with many lives barely worth living that has higher utility than another world filled with a modest number of extremely happy people. This seems like an unacceptable possibility. We would all rather be one of the few in the world of extreme bliss than one of the just-surviving masses.
This problem can be dissolved by valence utilitarianism as well, but you have to be willing to accept two truths about the way the universe is configured. Firstly, that the intensity of enjoyment or suffering one can experience likely follows a logarithmic scale — the best (or worst) experience is many thousands of times better (or worse) than a mildly pleasant (or unpleasant) one, something that has been argued for convincingly by the Qualia Research Institute.
The other, likely more controversial, truth is that closed individualism — the common-sense belief that we are all metaphysically separate observers — is almost certainly false. Believing that we have real, separate, and essential selves is a useful evolutionary tool, but it doesn’t really stand up to scientific scrutiny or sustained introspection. This leaves us with open individualism, the belief that we are all one consciousness, or empty individualism, in which we are each just a single moment of experience. Neither of these views reifies an individual self, and both allow us to sum across the experiences of many different conscious beings, regardless of whom they belong to.
Then, given logarithmic scales of valence and open or empty individualism, it’s always going to be easier to achieve a high-utility world-state with a few beings enjoying spectacular experiences than by filling the universe with lives that are barely worth living. This is especially true when you play out a realistic scenario and take the resources available into account.
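To see how the arithmetic plays out, here is a minimal sketch in Python. The numbers are entirely made up for illustration — in particular, the assumption of a roughly 1000x gap between a peak experience and a life barely worth living — and are not measurements of anything:

```python
# Toy illustration only: the valence values and population sizes below are
# invented for the sake of the example, not measurements of anything real.

barely_positive = 1.0   # per-person valence of a life barely worth living
peak_bliss = 1000.0     # per-person valence of a spectacularly good life,
                        # assuming a roughly logarithmic (heavy-tailed) scale

# World A: a modest population enjoying peak experiences.
population_a = 1_000_000
utility_a = population_a * peak_bliss

# World B: a far larger population of lives barely worth living.
population_b = 500_000_000
utility_b = population_b * barely_positive

print(f"World A (few, blissful):         {utility_a:,.0f}")
print(f"World B (many, barely positive): {utility_b:,.0f}")

# With a 1000x valence gap, World B needs 1000x World A's population just to
# break even -- before counting the resources needed to sustain that many lives.
```

The point of the sketch is only that, once valence spans orders of magnitude, the break-even population for the "barely worth living" world grows in proportion to the valence gap, which is exactly what makes the repugnant trade-off so hard to realise in practice.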
Repugnancy signals an error in reasoning
I actually think that the reason we find Parfit’s repugnant conclusion so problematic is that our intuitions already grasp the above argument. We intuitively know that the best day of our lives was exponentially better than an average drizzly November afternoon. We also take for granted that the suffering of any being is equally bad, no matter whose it is. It’s just that we’re not used to thinking in logarithmic scales or outside of closed individualism. So, when we start trying to carry out rough utility calculations, we don’t take these points into account.
In fact, it seems like all repugnant conclusions point to a problem with a particular formulation of utilitarianism applied to an unrealistic scenario, not with utilitarianism itself. From the standpoint of valence utilitarianism, if a solution brings us to a feeling of repugnancy, it is itself inducing a negative conscious experience. Given that the people living in an imagined world-state are probably still going to have fairly similar sensibilities to us, if that world-state is unappealing to us, it would be unappealing to them. We can therefore never end up in a repugnant conclusion if we apply valence utilitarianism correctly.
Now, I realise it is always possible to push back against this with specialist thought experiments. But applying the theory in realistic scenarios actually yields pretty reasonable outcomes. This is helped by the fact that the human nervous system is set up in a way that actually keeps us in quite a narrow range of conscious valence. This is especially true for people free from the struggle to survive, with enough resources to keep them comfortable. We may be able to enjoy a few peak experiences in our life, but even the richest people in the world are firmly stuck in the human condition most of the time.
The practical application of utilitarianism in the real world is therefore probably best achieved with a sustainable and careful application of humanist values: slowly increasing the number of people living meaningful and secure lives. On this point, I don’t disagree with many of the criticisms of misconceived versions of utilitarianism that don’t pay enough attention to conscious experience. In most cases, the application of deontological and virtue ethics can work well.
The other reason we have to be cautious when following valence utilitarianism is that there’s no way to measure conscious experience. You know it when you have it, but that’s it. We should therefore be incredibly careful when designing a future utopia, given that we have no idea how much suffering we could unwittingly inflict. These kinds of considerations will likely come to a head as AI gets more powerful. Never really being sure whether AIs are actually conscious should give us pause before we aim for a future dominated by AGIs.
Consciousness must come first
So, even though utilitarianism is probably true, we have to be very thoughtful about how we formulate and apply it. If we are too ideological and insensitive, we will end up in repugnancy. But if we are willing to take the reality of our conscious experience seriously, there should be a way to make it work. After all, it is the only thing we are sure of and the only thing that could possibly bring value. Many of the bad outcomes of history came from forgetting this fact and forcefully applying artificial and inhuman values.
These considerations are now more important than ever. As it’s looking like the future is going to get quite crazy quite quickly, our moral intuitions will likely start breaking down. We therefore need a concrete ethical framework to know how we should respond to transhuman technologies, or the possibility of producing artificial consciousnesses. It is now incredibly important to get a strong understanding of the real nature of consciousness. But we also need a clear way of thinking about value. Valence utilitarianism is the only system that stands up to this task. If we don’t want to lose ourselves in the coming future, we have to always bear one thing in mind — consciousness comes first.
The ideas in this post have been gestating in my head for quite a while. During the course of exploring them I’ve come across many people expressing similar views, many of whom have provided significant inspiration for this post. For further reading or listening please check them out.
Andrés Gómez Emilsson’s writing on Qualia Computing, especially this, this and this
The above ideas are put forward much more thoroughly by Sharon Hewitt Rawlette, both on the Waking Cosmos and 80,000 Hours podcasts, and most completely in her book
I was in part responding to 's thought-provoking criticisms of Effective Altruism in
Sam Harris makes similar arguments in The Moral Landscape
For thoughts on the potential for artificial suffering, this piece over at is interesting
Technically it is also (more often) called hedonistic utilitarianism, but I think this term comes with too much connotational baggage to be representative. You don’t have to be a “hedonist” that only values pleasure to be a valence utilitarian.
Coming from more of a naturalistic virtue ethics perspective, this is a very interesting piece. I'm still kind of confused about the distinguishing feature of valence utilitarianism as opposed to naive utilitarianism. As best I can tell, the "valence" part is supposed to handle the question of how utility is defined. One of the major problems with utilitarianism is that such things as "pleasure" or "happiness" are not monolithic. Plato's psychology correctly noticed that if the mind simultaneously desires different, contradictory things, then by the principle of noncontradiction there must be different parts of the mind, and since all exist in one person, they can't all get everything they want, since their wants conflict. Thus the utilitarian preference for pleasure over suffering has the significant speed bump of needing to account for the fact that a person can simultaneously suffer in one part of the mind (the one which wants to eat the ice cream, for instance) while feeling pleasure in another (the part which seeks to lose weight, or the part which seeks to demonstrate the strength of character to abstain from dessert). How does valence utilitarianism tackle this problem?
"However, this doesn’t take into account all the collateral effects caused by the fear and insecurity that this kind of practice would unleash on the general population, not to mention the violent deaths of the victims. If we are trying to maximise positive conscious valence, we need to take into account the fact that feeling secure has a very meaningful effect on the experience of our lives. The terror unleashed wouldn’t balance out the gains, so this scenario wouldn’t make sense under valence utilitarianism."
I think this argument is flawed. First, I totally reject your next sentence about how adding stipulations to this thought experiment would make it unrealistic: not least because it's already unrealistic, but mainly because the point of thought experiments is not to be realistic, it's to isolate a proposition (in this case an ethical claim for the importance of utility) and test it. The reason you wrote it is presumably that we can easily tweak the thought experiment to avoid your reasoning without failing to offend our moral intuition. If our rogue surgeon were able to commit his crimes without anyone being aware of the disappearance of his victims, and he were able to kill his victims without inflicting any fear or pain (which is certainly a practical possibility), would this make the behavior acceptable under valence utilitarianism?
Seemingly it would. But if anything, this clashes even more strongly with our moral intuitions: not only do we have the serial-killer surgeon, but a conspiracy to aid and abet him and hide his crimes from society. This seems worse, and that doesn't really make sense if our problem with the rogue surgeon is actually that he's harming the public's average sense of safety and thus nullifying the objective good being achieved by his murders. That doesn't necessarily make it wrong, but the value of utility itself is being asserted because it's a commonly shared intuition that wellbeing is preferable to suffering. If the theory which stems from this doesn't account for conflicting moral intuitions, it seems fatally flawed.
"There are many other ways people try to point out holes in utilitarianism, with thought experiments about experience machines and fictitious utility monsters. But if you look at the details of a real world scenario from the perspective of valence utilitarianism, they can usually be explained away."
The problem is that real-world applications of a philosophical system can work great and still be wrong epistemologically, the same way Newtonian physics is wrong but still answers physical problems extremely precisely from a human perspective (whereas from an atomic perspective it's completely incoherent). The same can be said for virtue ethics or deontology: there are different circumstances where each ethical theory performs in a way we intuitively prefer. The aim is a theory that works in one place without being disastrous in another, and utilitarianism performs unusually badly in this regard.
What do you think of the argument that says, "Yes there is a meaningful direction called good, that we want to move in, but it is computationally intractable to measure or compute this direction perfectly, so at best we can approximate it?"
I really like the argument you present against the rogue surgeon as an example here. To me, this argument leads to the conclusion that it's impossible to measure how any one concrete change has affected everyone.
For example, did COVID vaccine mandates, on net, improve the utility of the world? I don't know how you could evaluate any answer to this question, because it would require accumulating a bunch of data we can't accumulate, and then weighing different values against each other (freedom to choose what goes into my body vs. belief that I'm not going to get infected when I go out) using some criterion that is itself untestable.
This is why I think I'm in this rare camp that says yes, morality is a real thing, but all we can do is attempt to approximate it, and therefore it's better to have a huge diversity of moral agents all executing different approximations of the true morality.