I have recently written a series of articles about problems in utilitarian ethics that are highly relevant to effective altruism.

The first article describes why I became a utilitarian and what kind of utilitarianism I endorse (i.e. preference, rule, variable critical level, variance normalized,…).

The second article deals with the problems of population ethics and argues for variable critical level utilitarianism, a kind of critical level utilitarianism in which everyone is free to choose their own critical level in each situation. Total, average, critical level, negative and person-affecting utilitarianisms are all special cases of variable critical level utilitarianism. With variable critical level utilitarianism, we can avoid several counterintuitive problems in population ethics. This issue becomes crucial when we have to choose between avoiding actual suffering (e.g. of factory-farmed animals today) and increasing well-being in the long-term future (e.g. by avoiding existential risks). A minimal sketch of how I read the proposal is given below.
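To make the structure concrete, here is a toy sketch in Python. The formalization, function name and example numbers are my own illustration, not necessarily the article's exact model: each person has a utility level and a self-chosen critical level, and social welfare sums the differences. Fixed-critical-level theories then fall out as particular uniform choices of the critical levels.

```python
# Hypothetical sketch of variable critical level utilitarianism
# (my own formalization for illustration, not the author's exact model).

def welfare(utilities, critical_levels):
    """Sum of each person's utility minus their self-chosen critical level."""
    assert len(utilities) == len(critical_levels)
    return sum(u - c for u, c in zip(utilities, critical_levels))

us = [5.0, 2.0, -1.0]  # made-up utility levels of three people

# Two fixed-critical-level theories recovered as special cases:
total_util = welfare(us, [0.0, 0.0, 0.0])  # everyone picks c = 0: total utilitarianism
fixed_c    = welfare(us, [1.5, 1.5, 1.5])  # everyone picks the same c > 0: critical level utilitarianism

print(total_util)  # 6.0
print(fixed_c)     # 1.5
```

In the variable version each person may pick a different critical level in each situation, which is what lets the theory interpolate between the total, critical level, negative and person-affecting views.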

The next two articles deal with the problem of interpersonal comparison of well-being. The first discusses a general method of utility normalization, based on an analogy between measuring utilities and measuring temperatures. This method applies to utility functions with continuous inputs (perceptions or experiences). When the inputs are discrete, another method is possible: counting the number of just-noticeable differences in utility. The utility function then looks like a multidimensional staircase whose steps can have different widths. With this method we can compare the utilities of, for example, insects and humans.
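As a rough illustration of the two methods (my own toy code, with assumed function names and made-up scales): the first rescales a continuous utility function to a common zero point and unit, much like converting between temperature scales; the second measures a discrete utility function by counting just-noticeable steps.

```python
import statistics

# Hypothetical sketches of the two comparison methods (illustrative only).

def affine_normalize(utilities):
    """Map one being's utilities to zero mean and unit variance, fixing the
    free zero point and unit, analogous to converting temperature scales."""
    mean = statistics.mean(utilities)
    spread = statistics.pstdev(utilities)
    return [(u - mean) / spread for u in utilities]

def jnd_count(ordered_experiences, experience):
    """Measure a discrete utility as the number of just-noticeable steps
    from the worst experience up to the given one."""
    return ordered_experiences.index(experience)

# A being with few distinguishable experience levels (e.g. an insect) climbs
# fewer, wider steps than one with many (e.g. a human):
insect_scale = ["bad", "neutral", "good"]
human_scale  = ["awful", "bad", "poor", "neutral", "fine", "good", "great"]
print(jnd_count(insect_scale, "good"))  # 2 steps above the insect's worst level
print(jnd_count(human_scale, "good"))   # 5 steps above the human's worst level
```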

Finally, I deal with the more exotic problem of counting persons and conscious experiences. This problem becomes important when we deal with future conscious artificial intelligence and whole brain emulations, but it is also relevant when we discuss insect sentience or split-brain patients.

Comments

I enjoy posts like these, but it seems difficult to apply them when I'm actually making a charitable donation (or taking other substantive action).

An idea along those lines: Examine the work of an EA organization that has public analysis of the benefits of various interventions (e.g. GiveWell) from the perspective of variable critical-level utilitarianism, and comment on how you'd personally change the way they calculate benefits if you had the chance.

(This may not actually be applicable to GiveWell; if no orgs fit the bill, you could also examine how donations to a particular charity might look through various utilitarian lenses. In general, I'd love to see more concrete applications of this kind of analysis.)

Now that's a suggestion :-) My intention is to do academic economic research on the implications of such population-ethical theories for cost-benefit analysis. My preliminary, highly uncertain guess is that variable critical level utilitarianism results in a higher priority for avoiding current suffering (e.g. livestock farming, wild animal suffering), because it is closer to negative utilitarianism or person-affecting views than, say, total utilitarianism, which prioritizes the far future (existential risk reduction). And my even more uncertain guess is that variable critical level utilitarianism is less vulnerable than total utilitarianism to counterintuitive sadistic and repugnant conclusions. This means that future generations, too, may be inclined to be variable critical levellers instead of totalists, and hence that we should discount future generations more (i.e. prioritize current generations more and focus less on existential risk reduction). But this conclusion will be very sensitive to the critical levels chosen by current and future generations.

Interesting post. I wanted to write a substantive response, but ran out of energy. However, I have written previously on why I'm skeptical of the relevance of formally defined utility functions to ethics. Here's one essay about the differences between people's preferences and the type of "utility" that's morally valuable. Here's one about why there's no good way to ground preferences in the real world. And here's one attacking the underlying mindset that makes it tempting to model humans as agents with coherent goals.

Meta: your last link doesn't seem to point anywhere.

Thanks, I corrected the link.
