There has been a long-standing debate among utilitarians about what should be maximized. Most fall on the side of either ‘average utility’ or ‘total utility’; a select group even chooses ‘minimum utility’ or other lesser-known methods.
Previously I tried to solve this by charting different moral theories on different axes and by prioritizing those actions that achieve success in most moral theories (see Q-balance utilitarianism).
I have come to the conclusion that this is just a band-aid on a more fundamental problem. Whether we should choose total, average, or even median utility isn’t something we can objectively decide. So I suggest that we go up one level and maximize what most people want to maximize.
Let’s say we were able to gauge everyone’s (underlying) preferences about how much they like certain methods of maximizing by holding a so-called utilitarian vote.
A moral theory that violates your preferences completely would get a score of 0, one that encapsulates your preferences perfectly would get a score of 1, and one that you like to a certain extent but not completely gets something in between, e.g. 0.732.
If there is such a thing as ‘meta-preference ambivalence’ we could gauge that too: people who do not have any meta-preferences in their utility function get a score of 0, people for whom the entire purpose in life is the promotion of (say) average utilitarianism get a score of 1, and so on.
Just multiply each person’s ambivalence by their meta-preference, then add up the resulting scores per method (add all the scores for “median utility” together, add all the scores for “total utility” together, etc.) and compare the totals.
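The tally described above can be sketched in a few lines of code. This is only an illustration of the arithmetic, not the author’s formalization: the voters, their scores, and the method names are all hypothetical.

```python
# Hypothetical sketch of the "utilitarian vote": each voter assigns a 0-1
# preference score to each maximization method, weighted by their 0-1
# meta-preference ambivalence. Scores are summed per method and compared.

def tally(voters):
    """Sum ambivalence-weighted preference scores per maximization method."""
    totals = {}
    for voter in voters:
        weight = voter["ambivalence"]  # 0 = no meta-preferences at all,
                                       # 1 = meta-preferences are their whole utility function
        for method, score in voter["preferences"].items():
            totals[method] = totals.get(method, 0.0) + weight * score
    return totals

# Three made-up voters for illustration
voters = [
    {"ambivalence": 1.0, "preferences": {"total": 1.0, "average": 0.2}},
    {"ambivalence": 0.5, "preferences": {"total": 0.3, "average": 0.9}},
    {"ambivalence": 0.8, "preferences": {"total": 0.732, "average": 0.4}},
]

totals = tally(voters)
winner = max(totals, key=totals.get)
print(totals, winner)  # "total" wins here: 1.7356 vs. 0.97
```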
Now one way of maximizing comes out on top, but should we pursue it absolutely or proportionally? If “total utilitarianism” wins with 70% of the vote and “average utilitarianism” loses with 30%, should we act as total utilitarians 100% of the time, or 70% of the time (acting as average utilitarians the remaining 30% of the time, at random intervals)?
Well, we could solve that with a vote too: “Would you prefer pursuing the winning method 100% of the time, or would you prefer pursuing it proportionally to its victory?”
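The proportional option can be made concrete by sampling which method governs each decision, with probability equal to its vote share. Again a hypothetical sketch with made-up shares, not a proposal from the original text:

```python
import random

def pick_method(vote_shares, rng=random.random):
    """Sample a maximization method with probability equal to its vote share."""
    r = rng()
    cumulative = 0.0
    for method, share in vote_shares.items():
        cumulative += share
        if r < cumulative:
            return method
    return method  # guard against floating-point shortfall in the last bucket

shares = {"total": 0.7, "average": 0.3}  # hypothetical vote result

# Over many decisions, each method governs in proportion to its share.
random.seed(0)
counts = {m: 0 for m in shares}
for _ in range(10_000):
    counts[pick_method(shares)] += 1
# counts["total"] comes out near 7000, counts["average"] near 3000
```

Acting “70% of the time as a total utilitarian” then just means each decision is independently governed by the sampled method, which matches the “random intervals” reading above.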
And any other problem we face after this can be decided by comparing people’s preferences about those problems as well. (Depending on how you interpret it, this might also solve the negative-positive utilitarianism debate.)
This might be encroaching on contractualism, but I think this is a very elegant way to solve some of the meta-level disagreements in utilitarian philosophy.
EDIT: I have found a way to formalize this theory in such a way that it could reach a democratic consensus across all possible moral theories. School is keeping me busy, but I plan on writing down and posting it this year (2021).