Was doing a bit of musing and thought of an ethical concept I have not heard discussed before, though I'm sure it has been written about by some ethicist. 

It concerns average utilitarianism, a not-very-popular philosophy that I nonetheless find a bit plausible; it has a small place in my moral uncertainty. Most discussions of average utilitarianism (averagism) and total utilitarianism (totalism) begin and end with the Repugnant and Sadistic Conclusions. For me, such discussions leave averagism seeming worse than totalism, but not entirely forgettable.

There is more intricacy to average utilitarianism, however, that I think is overlooked. (Hedonic) total utilitarianism is easily defined: assuming that each sentient being s at point in time t has a "utility" value u(s,t) representing (amount of pleasure - amount of pain) in the moment, total utilitarianism is just:

$$U_{\text{total}} = \sum_{t}\sum_{s} u(s,t)$$
Average utilitarianism requires specification of an additional value, the moral weight of an individual at a point in time, w(s,t), corresponding to a sentient being's capacity for pleasure and pain, or their "degree of consciousness". Averagism is then (I think?) ordinarily defined as follows, where at any given time you divide the total utility by the total moral weight of the beings alive, and sum these per-time averages:

$$U_{\text{avg}} = \sum_{t}\frac{\sum_{s} u(s,t)}{\sum_{s} w(s,t)}$$
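To see the two definitions side by side, here is a minimal sketch in Python with made-up beings, utilities, and weights; the names, numbers, and function names are purely illustrative assumptions, not anything from the literature.

```python
# Toy model: utility u[(s, t)] and moral weight w[(s, t)] for each being s
# alive at time t. All names and numbers are made up purely for illustration.
u = {("alice", 0): 5.0, ("bob", 0): 3.0,
     ("alice", 1): 4.0, ("bob", 1): 2.0, ("carol", 1): 6.0}
w = {key: 1.0 for key in u}  # assume equal moral weight for everyone

def total_utility(u):
    """Total utilitarianism: sum u(s, t) over all beings and all times."""
    return sum(u.values())

def per_time_average_utility(u, w):
    """Standard averagism: at each time, divide total utility by the total
    moral weight of the beings alive then, and sum the results over time."""
    times = {t for (_, t) in u}
    result = 0.0
    for t in times:
        alive = [s for (s, tt) in u if tt == t]
        result += sum(u[(s, t)] for s in alive) / sum(w[(s, t)] for s in alive)
    return result

print(total_utility(u))                # 20.0
print(per_time_average_utility(u, w))  # (5+3)/2 + (4+2+6)/3 = 8.0
```

The per-time version depends on who happens to be alive at each moment, which is exactly what the next point picks on.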
Laying out the view like this makes clear another flaw, one that is in my view worse than anything discussed in the Repugnant vs. Sadistic Conclusion arguments: the weight given to utility isn't time-independent. That is, if a population grows over time to (e.g.) 10x its original size, each later being's pain and pleasure counts 10x less than that of the beings who came earlier.

This leads to some really bad conclusions. Say the population above needs to accomplish a task that will require immense suffering by one person. Instead of trying to reduce this suffering, this view says you can dampen it simply by having this person born far in the future. The raw suffering that this being will experience is the same, but because more people happen to be alive, the suffering just doesn't matter as much. In a growing population, offloading suffering onto future generations becomes an easy get-out-of-jail-free card, in a way that only makes sense to someone who treats ethics as a big game.
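To make that concrete with some made-up numbers: suppose the sufferer's momentary utility is -100, every being has moral weight 1, and the task can be done either now, while 10 beings are alive, or later, when 1,000 are alive. Under the definition above, the contribution to the overall sum is

$$\frac{-100}{10} = -10 \quad \text{now, versus} \quad \frac{-100}{1000} = -0.1 \quad \text{later,}$$

so the very same suffering counts 100x less purely because of when it happens.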

After some thinking, I realized that the above expression is not the only way to define averagism. You can instead divide the total amount of utility that will ever exist by the total amount of moral weight that will ever exist:

$$U_{\text{avg}}' = \frac{\sum_{t}\sum_{s} u(s,t)}{\sum_{t}\sum_{s} w(s,t)}$$
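Continuing the same kind of toy sketch (again, the beings, numbers, and function names are invented for illustration only), the atemporal version pools everything before dividing, and it treats the "delayed suffering" trick from above as a matter of indifference rather than an improvement:

```python
# Atemporal averagism: divide all the utility that will ever exist by all the
# moral weight that will ever exist. Same toy setup as above.
def atemporal_average_utility(u, w):
    return sum(u.values()) / sum(w.values())

def per_time_average_utility(u, w):
    times = {t for (_, t) in u}
    return sum(
        sum(u[(s, t)] for (s, tt) in u if tt == t)
        / sum(w[(s, t)] for (s, tt) in w if tt == t)
        for t in times
    )

def scenario(suffering_time):
    """10 beings at t=0 and 1000 at t=1, all with weight 1 and utility 0,
    except one being who experiences -100 at suffering_time."""
    population_sizes = {0: 10, 1: 1000}
    u, w = {}, {}
    for t, n in population_sizes.items():
        for i in range(n):
            u[(f"being{t}_{i}", t)] = 0.0
            w[(f"being{t}_{i}", t)] = 1.0
    u[(f"being{suffering_time}_0", suffering_time)] = -100.0
    return u, w

for when in (0, 1):
    u, w = scenario(when)
    print(when, per_time_average_utility(u, w), atemporal_average_utility(u, w))
# Per-time averagism rates the later suffering 100x less bad (-0.1 vs. -10.0),
# while the atemporal average (-100/1010) is identical either way.
```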
This expression destroys the time dependency discussed above. Instead of downweighting an individual's utility by the number of other beings that currently exist, we downweight it by the number of beings that have ever existed (or ever will). We still avoid the Repugnant Conclusion on a global scale (which satisfies the "choose one of these two worlds" phrasing ordinarily used), though on local timescales a lot of repugnant behavior remains that the previous definition rules out.

The time-invariant expression also puts a bit of a different spin on average utilitarianism. By the end of the last sentient life, we want to be able to claim that the average sentient being was as happy as possible. If we ever get the running average to a level we can never match again, the best option is simply to have no more sentient life, to "turn off the tap" of sentience before the water gets too diluted with below-average (even if positive) utility. We are also obligated to learn about our history and determine whether ancient beings were miserable or ecstatic, to see at what level of utility it is still worth having life.
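A toy illustration of the "tap" point, with invented numbers: suppose all past and present lives sum to a lifetime utility of 100 against a total moral weight of 10, so the running atemporal average is 10. Adding one more life with weight 1 and a positive lifetime utility of 5 gives

$$\frac{100 + 5}{10 + 1} \approx 9.55 < 10,$$

so under this view that extra (happy!) life makes the world worse - exactly the dilution worry.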

...Or at least, in theory. In practice, of course, it's really hard to figure out what the actual implications of different forms of averagism are, given how little we know about wild animal welfare and given the correlation between per-capita prosperity and population size. That being said, I think this form of averagism is at least interesting and merits a bit of discussion. I certainly don't give it too much credence, but it has found a bit of weight in my moral uncertainty space.

Comments

Thanks for this. I just had a similar idea, and of course I'm glad to see another EA had a similar insight before. I am no expert in the field, but I agree that this "atemporal avg utilitarianism" seems to be underrated; I wonder why. The greatest problem I see with this view, at first, is that it makes the moral goodness of future actions depend on the population and the goodness of the past. I suspect this would also make it impossible (or intractable) to model goodness as a social welfare function. But then... if the moral POV is the "POV of the universe", or the POV of nowhere, or of the impartial observer... maybe that's justified? And it'd explain the Asymmetry and the use of thresholds for adding people.

I suspect this view is immune to the repugnant conclusion / mere addition paradox. The most piercing objection from total-view advocates against avg utilitarianism is that it implies a sadistic conclusion: adding a life worth living makes the world worse if this life is below the average utility, and adding a life with negative value is good if it is above the world average. But if the overall avg utility is positive, or if you add a constraint forbidding adding negative lives... it becomes harder to find examples where this view implies a "sadistic" conclusion.

As an aside, if both average-ism and totalism lead to results that seem discordant with our moral intuitions, why do we need to choose between them? Wouldn't it make sense to look for a function combining some elements of each of these?

There's a proof showing that any utilitarian ideology implies either the Repugnant Conclusion or the Sadistic Conclusion (or anti-egalitarianism, incentivizing an unequal society), so you can't cleverly avoid these two conclusions with some fancy math. In addition, any fancy view you create will be in some sense unmotivated - you just came up with a formula that you like, but why would such a formula be true? Totalism and averagism seem to be the two most interpretable utilitarian ideologies, with totalism caring only about pain/pleasure (and not about who experiences it) and averagism being the same except population-neutral, not incentivizing a larger population unless it has higher average net pleasure. Anything else is kind of an arbitrary view invented by someone who is too into math.

The anti-egalitarianism one seems to me to be the least obviously necessary of the three [1]. It doesn't seem obviously wrong that for this abstract concept of 'utility' (in the hedonic sense), there may be cases and regions in which it's better to have one person with a bit more and another with a bit less.

But more importantly, I think: why is it so bad that it is 'unmotivated'? In many domains we think that 'a balance of concerns' or 'a balance of inputs' yields the best outcome under the constraints.

So why shouldn't a reasonable moral valuation ('axiology') involve some balance of interest in total welfare and interest in average welfare? It's hard to know where that balancing point should lie (although maybe some principles could be derived). But that still doesn't seem to invalidate it... any more than my liking some combination of work and relaxation, or believing that beauty lies in a balance between predictability and surprise, etc.

I wouldn't call this 'invented by someone too into math' (if that's possible :) ). If anything, I think the opposite: I am accepting that a valuation of what is moral could be valid and defensible even if it can't be stated in as stark axiomatic terms as the extreme value systems.


    1. Although many EAs seem to be ok with the repugnant conclusion also. ↩︎

In other domains, when we combine different metrics to yield one frankenstein metric, it is because these different metrics are all partial indicators of some underlying measure we cannot directly observe. The whole point of ethics is that we are trying to directly describe this underlying measure of "good", and thus it doesn't make sense to me to create some frankenstein view. 

The only instance I would see this being ok is in the context of moral uncertainty, where we're saying "I believe there is some underlying view but I don't know what it is, so I will give some weight to a bunch of these plausible theories". Which maybe is what you're getting at? But in that case, I think it's necessary to believe that each of the views you are averaging over could be approximately true on its own, which IMO really isn't the case with a complicated utilitarianism formula, especially since we know there is no formula out there that will give us all we desire. Though this is another long philosophical rabbit hole, I'm sure.
