6 Comments
Dec 13, 2022 · Liked by Adam Elwood

Coming from more of a naturalistic virtue ethics perspective, this is a very interesting piece. I'm still kind of confused about the distinguishing feature of valence utilitarianism as opposed to naive utilitarianism. As best I can tell, the "valence" part is supposed to handle the question of how utility is defined. One of the major problems with utilitarianism is that such things as "pleasure" or "happiness" are not monolithic. Plato's psychology correctly noticed that if the mind simultaneously desires different, contradictory things, then by the principle of noncontradiction there must be different parts of the mind; and since all of these parts exist in one person, they can't all get everything they want, because their wants conflict. Thus the utilitarian preference for pleasure over suffering has the significant speedbump of needing to account for the fact that a person can simultaneously suffer in one part of the mind (the one which wants to eat the ice cream, for instance) while feeling pleasure in another (the part which seeks to lose weight, or the part which seeks to demonstrate the strength of character to abstain from dessert). How does valence utilitarianism tackle this problem?

"However, this doesn’t take into account all the collateral effects caused by the fear and insecurity that this kind of practice would unleash on the general population, not to mention the violent deaths of the victims. If we are trying to maximise positive conscious valence, we need to take into account the fact that feeling secure has a very meaningful effect on the experience of our lives. The terror unleashed wouldn’t balance out the gains, so this scenario wouldn’t make sense under valence utilitarianism."

I think this argument is flawed. First, I totally reject your next sentence about how adding stipulations to this thought experiment would make it unrealistic: not least because it's already unrealistic, but mainly because the point of a thought experiment is not to be realistic; it's to isolate a proposition, in this case an ethical claim about the importance of utility, and test it. Presumably you wrote that sentence because the thought experiment can easily be tweaked to dodge your reasoning while still offending our moral intuition. If our rogue surgeon were able to commit his crimes without anyone being aware of the disappearance of his victims, and to kill his victims without inflicting any fear or pain (which is certainly a practical possibility), would this make the behavior acceptable under valence utilitarianism?

Seemingly it would. But if anything, this clashes even more strongly with our moral intuitions: not only do we have the serial-killer surgeon, but also a conspiracy to aid and abet him and hide his crimes from society. This seems worse, and that doesn't really make sense if our problem with the rogue surgeon is actually that he's harming the public's average sense of safety and thus nullifying the objective good being achieved by his murders. That doesn't necessarily make it wrong, but the value of utility itself is asserted because it's a commonly shared intuition that wellbeing is preferable to suffering. If the theory that stems from this intuition doesn't account for conflicting moral intuitions, it seems fatally flawed.

"There are many other ways people try to point out holes in utilitarianism, with thought experiments about experience machines and fictitious utility monsters. But if you look at the details of a real world scenario from the perspective of valence utilitarianism, they can usually be explained away."

The problem is that real-world applications of a philosophical system can work well and still be wrong epistemologically, the same way Newtonian physics is wrong but still answers physical problems extremely precisely at human scales (while at atomic scales it's completely incoherent). The same can be said for virtue ethics or deontology: there are different circumstances in which each ethical theory performs in a way we intuitively prefer. The aim is a theory that works in one place without being disastrous in another, and utilitarianism performs unusually badly in this regard.

author

Thanks for this great comment, glad you found the piece thought-provoking!

I can see your concerns about the practicalities of applying utilitarianism. In practice, I actually think virtue ethics or deontology work best in most cases, especially as measuring subjective experience is currently impossible.

However, I still believe that there is something inherently good or bad, and it's rooted in the valence of conscious experience. So, you can argue about the practicalities of an ethical theory all you like, but in the end what you should always care about is reducing the net suffering in the world. Our moral intuitions usually track this quite well, but I don't think we should always follow them where they diverge.

"Thus the utilitarian preference for pleasure over suffering has the significant speedbump of needing to account for the fact that a person can simultaneously suffer in one part of the mind (the one which wants to eat the ice cream, for instance) while feeling pleasure in another"

This is a fair point, but I still think it's possible to make a comparison between two states and decide which one you probably would prefer, which is all you need in the end.

Dec 6, 2022 · Liked by Adam Elwood

What do you think of the argument that says, "Yes there is a meaningful direction called good, that we want to move in, but it is computationally intractable to measure or compute this direction perfectly, so at best we can approximate it?"

I really like the argument you present against the rogue surgeon as an example here. To me, this argument leads to the conclusion that it's impossible to measure how any one concrete change has affected everyone.

For example, did COVID vaccine mandates, on net, improve the utility of the world? I don't know how you could evaluate any answer to this question, because it would require accumulating a bunch of data we can't accumulate, and then weighing different values against each other (freedom to choose what goes into my body vs. belief that I'm not going to get infected when I go out) using some criterion that is itself untestable.

This is why I think I'm in this rare camp that says yes, morality is a real thing, but all we can do is attempt to approximate it, and therefore it's better to have a huge diversity of moral agents all executing different approximations of the true morality.

author

> morality is a real thing, but all we can do is attempt to approximate it

I think this is a really good way of thinking about it! I hadn't quite put it that way in my head, but it makes great sense.

My view is that morality depends on consequences in conscious experience, which is unmeasurable almost by definition. So yeah, we just have to figure out how to move in that direction, but we can never really be sure.


Are you familiar with https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem ?

It seems relevant here...

Basically, if an agent has a set of ordered preferences that obey some reasonable-seeming axioms, then there exists an objective function it is maximizing. Boom, utilitarianism. So utilitarianism is the right framework. Lying beneath the surface of any ethical system is a set of values people are interested in, so any ethical system can be cast into a utilitarian framework, at least in theory.
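
For anyone who wants the precise version, the claim is roughly the following, stated over lotteries (probability mixtures of outcomes):

```latex
% Von Neumann–Morgenstern utility theorem (informal sketch)
% Assume the agent's preference relation $\succeq$ over lotteries satisfies
% completeness, transitivity, continuity and independence. Then:
\exists\; u : X \to \mathbb{R} \quad \text{such that} \quad
    L \succeq M \;\iff\; \mathbb{E}_{L}[u(x)] \;\ge\; \mathbb{E}_{M}[u(x)]
% and $u$ is unique up to a positive affine transformation $a u + b$, $a > 0$.
```

So the "objective function it is maximizing" is just the expected value of that utility function $u$.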

author

I am familiar with that! I actually linked it in a different context in this essay (related to loss functions in machine learning): https://pursuingreality.substack.com/p/if-robots-spoke-of-god

I hadn't thought about applying it here, but it makes perfect sense! I guess the only pushback would be whether it's always possible to get a set of ordered preferences; maybe some qualities are impossible to compare? Although I suspect not.
