There has been a long-standing debate among utilitarians about what should be maximized. Most fall on the side of either ‘average utility’ or ‘total utility’. A select group even chooses ‘minimum utility’ or other lesser-known methods.

Previously I tried to solve this by charting different moral theories on different axes and prioritizing those actions that achieve success in most moral theories (see Q-balance utilitarianism).

I have come to the conclusion that this is just a band-aid on a more fundamental problem. Whether we should choose total, average, or even median utility isn't something we can objectively decide. So I suggest that we go up one level and maximize what most people want to maximize.

Let's say we were able to gauge everyone's (underlying) preferences about how much they like certain methods of maximizing by holding a so-called utilitarian vote.

A moral theory that violates your preferences completely would get a score of 0, one that encapsulates your preferences perfectly would get a score of 1, and one that you like to a certain extent but not completely gets something in between, e.g. 0.732.

If there is such a thing as 'meta-preference ambivalence' we could gauge that too: people who do not have any meta-preferences in their utility function get an ambivalence score of 0, people whose entire purpose in life is the promotion of (say) average utilitarianism get a score of 1, and so on.

Just multiply each person's ambivalence score by their meta-preference score, then add the scores for each individual method together (add all the scores for "median utility" together, add all the scores for "total utility" together, etc.) and compare.
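To make the aggregation concrete, here is a minimal Python sketch of how such a tally could work. The voters, method names, and 0–1 scores are all hypothetical; the code only illustrates the "multiply ambivalence by meta-preference, then sum per method" step described above.

```python
# A minimal sketch of the "utilitarian vote" tally described above.
# All data and names are hypothetical; scores are assumed to lie in [0, 1].
from collections import defaultdict

# Each voter has an ambivalence weight (0 = no meta-preferences at all,
# 1 = promoting one method is their entire purpose in life) and a
# preference score in [0, 1] for each maximizing method.
voters = [
    {"ambivalence": 0.9, "preferences": {"total": 1.0, "average": 0.3, "median": 0.0}},
    {"ambivalence": 0.2, "preferences": {"total": 0.1, "average": 0.732, "median": 0.5}},
    {"ambivalence": 0.6, "preferences": {"total": 0.4, "average": 0.9, "median": 0.2}},
]

def tally(voters):
    """Weight each voter's method preferences by their ambivalence and sum per method."""
    totals = defaultdict(float)
    for voter in voters:
        weight = voter["ambivalence"]
        for method, score in voter["preferences"].items():
            totals[method] += weight * score
    return dict(totals)

scores = tally(voters)
winner = max(scores, key=scores.get)
print(scores, "->", winner)
```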

Now one way of maximizing comes out on top, but should we pursue it absolutely or proportionally? If "total utilitarianism" wins with 70% of the vote and "average utilitarianism" loses with 30%, should we act as total utilitarians 100% of the time or only 70% of the time (acting as average utilitarians the remaining 30% of the time, at random intervals)?
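For illustration, here is a minimal sketch of what the proportional option could look like in practice: a weighted random draw at each decision point. The scores and method names are hypothetical and carried over from the sketch above.

```python
# Hypothetical sketch of "proportional pursuit": at each decision point,
# pick a method at random with probability proportional to its vote share.
import random

def pick_method(scores):
    """Choose a maximizing method with probability proportional to its aggregate score."""
    methods = list(scores)
    weights = [scores[m] for m in methods]
    return random.choices(methods, weights=weights, k=1)[0]

# With scores {"total": 0.7, "average": 0.3}, "total" is picked ~70% of the time.
print(pick_method({"total": 0.7, "average": 0.3}))
```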

Well, we could solve that with a vote too: "Would you prefer pursuing the winning method 100% of the time, or would you prefer pursuing it in proportion to its victory?"

And any other problem we face after this can be decided by comparing people's preferences about those problems as well. (Depending on how you interpret it, this might also solve the negative-versus-positive utilitarianism debate.)

This might be encroaching on contractualism, but I think this is a very elegant way to solve some of the meta-level disagreements in utilitarian philosophy.

EDIT: I'm trying to formalize this theory in such a way that it could reach a democratic consensus across all possible moral theories. Hopefully I can publish it in a couple of years, at which point this post will become outdated.


This post is a crosspost from Less Wrong. Below I leave a comment by Lukas Gloor that explains the implications of that post far better than I did:

This type of procedure may look inelegant for folks who expect population ethics to have an objectively correct solution. However, I think it's confused to expect there to be such an objective solution. In my view at least, this makes the procedure described in the original post here look pretty attractive as a way to move forward.
Because it includes considerations very similar to those presented in the original post here, I'll try to describe (for those who are curious enough to bear with me) the framework I've been using to think about population ethics:
Ethical value is subjective in the sense that if someone's life goal is to strive toward state x, it's no one's business to tell them that they should focus on y instead. (There may be exceptions, e.g., in cases where someone's life goals are the result of brainwashing.)
For decisions that do not involve the creation of new sentient beings, preference utilitarianism or "bare-minimum contractualism" seem like satisfying frameworks. Preference utilitarians are ambitiously cooperative/altruistic and scale back any other possible life goals in favor of getting maximal preference satisfaction for everyone, whereas "bare-minimum contractualists" obey principles like "do no harm" while still mostly focusing on their own life goals. A benevolent AI should follow preference utilitarianism, whereas individual people are free to choose anything on the spectrum between full preference utilitarianism and bare-minimum contractualism. (Bernard Williams's famous objection to utilitarianism is that it undermines a person's "integrity" by alienating them from their own life goals. By focusing all their actions on doing what's best from everyone's point of view, people don't get to do anything that's good for themselves. This seems okay if one consciously chooses altruism as a way of life, but it seems overly demanding as an all-encompassing morality.)
When it comes to questions that affect the creation of new beings, the principles behind preference utilitarianism or bare-minimum contractualism fail to constrain all of the possibility space. In other words: population ethics is underdetermined.
That said, it's not the case that "anything goes." Just because present populations have all the power doesn't mean that it's morally permissible to ignore other-regarding considerations about the well-being of possible future people. A bare-minimum version of population ethics could be conceptualized as a set of appeals or principles by which newly created beings can hold their creators accountable. This could include principles such as:
All else equal, it seems objectionable to create minds that lament their existence.
All else equal, it seems objectionable to create minds and place them in situations where their interests are only somewhat fulfilled, if one could have easily provided them with better circumstances.
All else equal, it seems objectionable to create minds destined to constant misery, yet with a strict preference for existence over non-existence.
(While the first principle is about which minds to create, the latter two principles apply to how to create new minds.)
Is it ever objectionable to fail to create minds – for instance, in cases where they’d have a strong interest in their existence?
This type of principle would go beyond bare-minimum population ethics. It would be demanding to follow in the sense that it doesn't just tell us what not to do, but also gives us something to optimize (the creation of new happy people) that would take up all our caring capacity.
Just because we care about fulfilling actual people's life goals doesn't mean that we care about creating new people with satisfied life goals. These two things are different. Total utilitarianism is a plausible or defensible version of a "full-scope" population-ethical theory, but it's not a theory that everyone will agree with. Alternatives like average utilitarianism or negative utilitarianism are on equal footing. (As are non-utilitarian approaches to population ethics that say that the moral value of future civilization is some complex function that doesn't scale linearly with increased population size.)
So what should we make of moral theories such as total utilitarianism, average utilitarianism, or negative utilitarianism? The way I think of them, they are possible morally-inspired personal preferences, rather than personal preferences inspired by the correct all-encompassing morality. In other words, a total/average/negative utilitarian is someone who holds strong moral views related to the creation of new people, views that go beyond the bare-minimum principles discussed above. Those views are defensible in the sense that we can see where such people's inspiration comes from, but they are not objectively true in the sense that those intuitions will appeal in the same way to everyone.
How should people with different population-ethical preferences approach disagreement?
One pretty natural and straightforward approach would be the proposal in the original post here.
Ironically, this would amount to "solving" population ethics in a way that's very similar to how common sense would address it. Here's how I'd imagine non-philosophers approaching population ethics:
Parents are obligated to provide a very high standard of care for their children (bare-minimum principle).
People are free to decide against becoming parents (principle inspired by personal morality).
Parents are free to want to have as many children as possible (principle inspired by personal morality), as long as the children are happy in expectation (bare-minimum principle).
People are free to try to influence other people’s stances and parenting choices (principle inspired by personal morality), as long as they remain within the boundaries of what is acceptable in a civil society (bare-minimum principle).
For decisions that are made collectively, we'll probably want some type of democratic compromise.
I get the impression that a lot of effective altruists have negative associations with moral theories that leave things underspecified. But think about what it would imply if nothing were underspecified: as Bernard Williams has noted, if the true morality left nothing underspecified, then morally-inclined people would have no freedom to choose what to live for. I no longer think it's possible or even desirable to find such an all-encompassing morality.
One may object that the picture I'm painting cheapens the motivation behind some people's strongly held population-ethical convictions. The objection could be summarized this way: "Total utilitarians aren't just people who self-orientedly like there to be a lot of happiness in the future! Instead, they want there to be a lot of happiness in the future because that's what they think makes up the most good."
I think this objection has two components. The first component is inspired by a belief in moral realism, and to that, I'd reply that moral realism is false. The second component of the objection is an important intuition that I sympathize with. I think this intuition can still be accommodated in my framework. This works as follows: What I labelled "principle inspired by personal morality" wasn't a euphemism for "some random thing people do to feel good about themselves." People's personal moral principles can be super serious and inspired by the utmost desire to do what's good for others. It's just important to internalize that there isn't just one single way to do good for others. There are multiple flavors of doing good.

Thanks for cross-posting this! That's a really good habit to be in when things are posted in multiple places and comments only show up in one of those.
