
Summary

  • This analysis estimates the expected moral weight of the beings of various species relative to humans for various types of moral weight distributions.
  • The mean moral weight is close to 1 for all the considered species, ranging from roughly 0.5 to 5 for all distribution types except the lognormal and Pareto ones (for which it is even higher, but seemingly inaccurate).

I welcome comments about how to interpret the results.

Methodology

The expected moral weight of the beings of various species relative to humans was calculated as the product of:

  • The probability of the beings of the species having moral patienthood, as defined by Luke Muehlhauser here, which was set to the values provided in this section of Open Philanthropy's 2017 Report on Consciousness and Moral Patienthood.
  • The mean of a distribution whose 10th and 90th percentiles were set to the lower and upper bounds of the "80 % prediction interval" guessed by Luke Muehlhauser here for the moral weight of various species relative to humans conditional on the respective beings having moral patienthood (see "Moral weights of various species").
    • The mean of the distribution was computed from the quantiles as described here (a short illustrative sketch is given below).
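As a minimal sketch of this calculation (not the original code), the snippet below fits a uniform and a loguniform distribution to the 10th and 90th percentiles and multiplies the resulting mean by the probability of moral patienthood. The chimpanzee inputs, an 80 % interval of 0.001 to 2 and a 90 % probability of moral patienthood, are back-inferred from the tables below and should be treated as illustrative.

```python
import numpy as np

def uniform_mean(q10, q90):
    # For a symmetric distribution (uniform, normal or logistic), the mean is
    # simply the midpoint of the 10th and 90th percentiles.
    return (q10 + q90) / 2

def loguniform_mean(q10, q90):
    # Fit a loguniform on [a, b] whose 10th/90th percentiles match q10/q90:
    # in log-space the distribution is uniform, so q10 and q90 sit 10 % and
    # 90 % of the way between log(a) and log(b).
    log_q10, log_q90 = np.log(q10), np.log(q90)
    width = (log_q90 - log_q10) / 0.8  # log(b) - log(a)
    log_a = log_q10 - 0.1 * width
    log_b = log_a + width
    a, b = np.exp(log_a), np.exp(log_b)
    return (b - a) / (log_b - log_a)   # mean of a loguniform on [a, b]

# Chimpanzees: assumed 80 % interval of [0.001, 2], P(moral patienthood) = 0.9.
p_patienthood = 0.9
print(p_patienthood * uniform_mean(0.001, 2))     # ~0.900
print(p_patienthood * loguniform_mean(0.001, 2))  # ~0.490
```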

The expected moral weight might depend on the theory of consciousness. The above product is implicitly assumed to represent the expected weighted mean of the moral weight distributions of the various theories of consciousness, which are in turn assumed to produce (summable) moral weight distributions. Potential concerns about calculating expected moral weights are discussed here.

Results

The mean and median moral weight of various species relative to humans for uniform, normal, loguniform, lognormal, Pareto and logistic distributions were calculated here, and are presented in the tables below[1].

Mean moral weight relative to humans

| Species | Uniform | Normal | Loguniform | Lognormal | Pareto | Logistic |
|---|---|---|---|---|---|---|
| Chimpanzees | 0.900 | 0.900 | 0.490 | 3.27 | ∞ | 0.900 |
| Pigs | 1.40 | 1.40 | 0.765 | 13.1 | ∞ | 1.40 |
| Cows | 2.00 | 2.00 | 1.14 | 132 | ∞ | 2.00 |
| Chickens | 4.00 | 4.00 | 2.41 | 1.50 k | ∞ | 4.00 |
| Rainbow trout | 4.55 | 4.55 | 3.00 | 28.4 k | ∞ | 4.55 |
| Fruit flies | 2.50 | 2.50 | 1.95 | 2.46 M | ∞ | 2.50 |

Median moral weight relative to humans

| Species | Uniform | Normal | Loguniform | Lognormal | Pareto[2] | Logistic |
|---|---|---|---|---|---|---|
| Chimpanzees | 0.900 | 0.900 | 0.0402 | 0.0402 | 0.00111 | 0.900 |
| Pigs | 1.40 | 1.40 | 0.0335 | 0.0335 | 495 | 1.40 |
| Cows | 2.00 | 2.00 | 0.0179 | 0.0179 | 99.1 | 2.00 |
| Chickens | 4.00 | 4.00 | 0.0179 | 0.0179 | 49.5 | 4.00 |
| Rainbow trout | 4.55 | 4.55 | 0.00798 | 0.00798 | 8.67 | 4.55 |
| Fruit flies | 2.50 | 2.50 | 0.00192 | 0.00192 | 0.310 | 2.50 |

Discussion

The results suggest animals and humans have similar moral value. The mean moral weight is close to 1 for all the considered species, ranging from roughly 0.5 to 5 excluding the lognormal and Pareto distributions.

The lognormal distributions do not seem to represent the moral weights accurately. Their heavy right tails imply high mean moral weights, which would arguably require frequent strong experiences. However, as noted here by Jason Schukraft, "it appears unlikely that evolution would select for animals with a non-contiguous range that was exclusively extraordinarily strong because extremely intense experiences are distracting in a way that appears likely to reduce fitness". 
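To see how the heavy right tail drives the mean, assuming the lognormal is fitted by matching the two guessed quantiles exactly (which appears consistent with the tables above), such a lognormal has

$$\sigma = \frac{\ln q_{90} - \ln q_{10}}{2\,\Phi^{-1}(0.9)} \approx \frac{\ln q_{90} - \ln q_{10}}{2.563}, \qquad \text{mean} = \sqrt{q_{10}\,q_{90}}\;e^{\sigma^2/2}.$$

For the chimpanzee interval of 0.001 to 2, $\sigma \approx 2.97$ and $e^{\sigma^2/2} \approx 81$, so the mean is roughly 81 times the median (0.0402 × 81 ≈ 3.27, in line with the table).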

The Pareto distributions are not reasonable representations of the moral weights, as they lead to mean moral weights of infinity.
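As a sketch of why, assuming a Pareto (Type I) distribution fitted directly to the two quantiles (which may not be exactly the fitting procedure used in the calculations), the implied tail index is

$$\left(\frac{q_{90}}{q_{10}}\right)^{\alpha} = \frac{1 - F(q_{10})}{1 - F(q_{90})} = \frac{0.9}{0.1} \implies \alpha = \frac{\ln 9}{\ln(q_{90}/q_{10})},$$

so whenever the 80 % interval spans more than a factor of 9 (the guessed intervals span several orders of magnitude), $\alpha < 1$ and the Pareto mean $\alpha x_m/(\alpha - 1)$ diverges.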

Loguniform distributions appear to be the best choice amongst the 6 studied types of distributions:

  • Being positive, they prohibit negative moral weights.
  • Having a mean larger than the median (see the formulas after this list), they are compatible with the intuition that the moral weight is a product (not a sum) of multiple dimensions (for example, clock speed of consciousness, unity of consciousness, and unity-independent intensity of valenced aspects of consciousness).
  • Being bounded, they prevent unreasonably large mean moral weights.
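For reference, a loguniform distribution on $[a, b]$ with $0 < a < b$ has

$$\text{mean} = \frac{b - a}{\ln b - \ln a}, \qquad \text{median} = \sqrt{a b},$$

and since the logarithmic mean of $a$ and $b$ is always at least their geometric mean, the mean exceeds the median while both remain bounded above by $b$.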
  1. ^

    The probability of pigs being moral patients is not provided in this section of Open Philanthropy's 2017 Report on Consciousness and Moral Patienthood, so it was assumed to be equal to that of cows and chickens (80 %).

  2. ^

     equals .

Comments (10)



Since this exercise is based on numbers I personally made up, I would like to remind everyone that those numbers are extremely made up and come with many caveats given in the original sources. It would not be that hard to produce numbers more reasonable than mine, at least re: moral weights. (I spent more time on the "probability of consciousness" numbers, though that was years ago and my numbers would probably be different now.)

To pick a bit on the notion from this article, which establishes the range of moral weights in question:

They say fruit flies range from a moral weight of 0.000001 to 20 times the moral significance of human experience. In log space, that's between 10^-6 and 10^1.3. The mean of the loguniform distribution, as you mention, is 1.95. I find significant probability mass above 1 implausible for fruit flies, and I will go on to explain why I think that, except for species like dogs, pigs, elephants, octopuses, or other long-lived, intelligent, social creatures, it would be difficult to argue that a species plausibly has more moral weight than a human.


Arguments for fruit flies being about as likely to be more morally significant than humans as less:

A fruit fly may experience things much faster than a human, meaning its short life may be subjectively experienced as much longer than it would appear to a human.

They may also experience things more intensely, given their single-pointed focus of conscious experience. So, although they may experience suffering more fully, they may also be able to completely forget it and move their focus to some other task if focusing on the pain does not confer an advantage.

They may be less "distracted" than humans, in that they experience the world more fully and in full awareness.

They are also typically considered innocent of other moral wrongdoings, so perhaps that makes them more morally valuable in some moral systems.


Arguments against:

There are a whole host of reasons to think that they could not possibly be as morally significant as a human.

I think it's reasonable to say a fruit fly cannot remember things in the long term, and it cannot contemplate or ruminate, which is one of the worst aspects of negative experiences and pain. I think most people would prefer to have experiences of extreme pain and trauma erased from their lives.

A fruit fly lives a tiny fraction of the duration of a human's life, so it would have to experience its own life much faster.

A human can be considered an ensemble or family of different personalities and conscious processes. Each one of these may have moral significance, increasing the relative moral significance of a human.

The more complex something is, the more it is typically valued in generic terms.

Humans form a network of social connections. When a human is lost, their loss is understood and grieved by many other humans, thus greatly increasing the overall negative effect of harm to a human compared to a fruit fly.

Humans have very few children relative to fruit flies, so they are likely valued more highly on an individual level by their families and communities.
 

In summary, the most relevant factors for moral significance are likely the degree of social embeddedness, the experience of higher-order emotions and complexity in general, the ability to grieve, long lives, and long memories, which strongly implies that humans are more morally significant than all or most other animals.

A final thought is that we don't know with very high confidence that animals are conscious in the way that we care about morally, but we know this for sure with humans. For that reason, we would be safer to prefer to save humans first, in case we were wrong about animals having conscious experiences in the first place.

Arguments for fruit flies being about as likely to be more morally significant than humans as less

Note the median moral weight for fruit flies assuming a loguniform distribution (the type I prefer) is 0.00192 << 1. So I do not think being more morally significant than humans is about as likely as being less; the moral weight of fruit flies relative to humans is much more likely to be smaller than 1 than larger than 1.

I think it's reasonable to say a fruit fly cannot remember things in the long term, and it cannot contemplate or ruminate, which is one of the worst aspects of negative experiences and pain. I think most people would prefer to have experiences of extreme pain and trauma erased from their lives.

Based on this analysis from Jason Schukraft, "mental time travel ["the capacity to remember past events and imagine future events"] seems to reduce the intensity of experiences in some circumstances and amplify the intensity of experiences in other circumstances. It is thus unclear whether animals that possess this ability have characteristically more or less intense valenced experiences overall" (see this section for details).

A fruit fly lives a tiny fraction of the duration of a human's life, so it would have to experience its own life much faster.

The moral weights presented here have units QALY/aQALY (QALY per "animal QALY"), and therefore they are not affected by differences in life expectancy between species. For example, a moral weight of 2 QALY/cQALY (QALY per "chicken QALY") means that 2 T years of fully healthy human life are as valuable as T years of fully healthy chicken life.

A human can be considered an ensemble or family of different personalities and conscious processes. Each one of these may have moral significance, increasing the relative moral significance of a human.

I tend to agree. From Jason's analysis (see here), "species that experience a greater variety and/or greater complexity of emotional states are, all else equal, capable of more intense positive and negative experiences".

The more complex something is, the more it is typically valued in generic terms.

From the "Key Highlights" of Jason's analysis:

  • "Some aspects of cognitive sophistication appear to be positively correlated with intensity range; other aspects of cognitive sophistication appear to be negatively correlated with intensity range".
  • "Affective complexity [diversity and depth of emotional sensations an animal can experience] generally appears to be positively correlated with intensity range".

So I tend to agree with your point, and think this is a good argument for not trusting mean moral weights which are much larger than 1. For the loguniform distributions, my maximum mean moral weight is 3, which is not much larger than 1.

Humans form a network of social connections. When a human is lost, their loss is understood and grieved by many other humans, thus greatly increasing the overall negative effect of harm to a human compared to a fruit fly.

Humans have very few children relative to fruit flies, so they are likely valued more highly on an individual level by their families and communities.

I agree, and think this should be considered when comparing interventions. That being said, these points do not influence the moral weight, which is the ratio between the value of T years of fully healthy animal life and the value of T years of fully healthy human life (i.e. the duration of the experiences is normalised).

A final thought is that we don't know with very high confidence that animals are conscious in the way that we care about morally, but we know this for sure with humans. For that reason, we would be safer to prefer to save humans first, in case we were wrong about animals having conscious experiences in the first place.

This is taken into account here by multiplying the moral weight given moral patienthood by:

The probability of the beings of the species having moral patienthood, as defined by Luke Muehlhauser here, which was set to the values provided in this section of Open Philanthropy's 2017 Report on Consciousness and Moral Patienthood.

In terms of your summary:

In summary, the most relevant factors for moral significance are likely the degree of social embeddedness, the experience of higher-order emotions and complexity in general, the ability to grieve, long lives, and long memories, which strongly implies that humans are more morally significant than all or most other animals.

I think your conclusion may well be right, but there is lots of uncertainty, so I do not think there is a "strong implication". For example, I think the likelihood of the moral weight being larger than 1 is at least 10 %, so the mean moral weight should be larger than 0.1.
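This lower bound follows from the moral weight being taken as non-negative:

$$E(MW) \ge 1 \times P(MW > 1) \ge 0.1.$$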

As a disclaimer, I came in with the preconception that one should assign near-zero probability of animals being of more moral relevance than humans. 

After reading the arguments, I have found little that convincingly contradicts this.

It's true that we should be uncertain as to how animals experience the world. However, I don't feel that this uncertainty should be thought of as ever allowing an animal's moral value to exceed a human's.

To illustrate my current understanding of the best way to think about this topic, I think all your probability distributions should probably be modeled as never exceeding 1 for any animal, as the probability of such an outcome is so low it's not worth considering. I think of it like the probability that you can build a perpetual energy-creating machine violating the laws of physics, or the probability that tomorrow the sun does not rise because the Earth stopped rotating.

Perhaps it could be analogized to the moral probability that causing suffering is a good thing, all things considered. One might argue that the human brain is extremely complicated, and morality is complicated, so we should put some weight on moral views that prefer to cause infinite suffering for eternity. Perhaps one could argue that some people enjoy causing others to suffer, and they might be right, and so suffering might be intrinsically good. I think this argument has about as much supporting evidence as the concept that animals could be more morally relevant than people. However, again, I would say the probability of such an outcome is so low it's not worth considering.

Although it's true we do not know the details of how animals experience consciousness, this is not enough to overturn the intuition all humans share about the morality of killing people versus animals: one is simply entirely different from the other, and there is no instance in which it is better to kill a person than an animal. This conception has apparently been held constant across many cultures throughout human history. In some cases some animals were revered as gods, but this was less about the animals and more about the gods. In some cases animals and living things were seen as equally valuable as humans. I think this is unlikely, though not impossible, but the key point is that killing was seen as wrong in all cases, and not that animals were seen as more valuable than humans.

Suffering is not the only relevant moral consideration. See "The Righteous Mind" by Jonathan Haidt: humans probably share a few more moral foundations than purely care/harm, including authority, fairness, sanctity, etc. Some may view these as equally morally relevant. My point here is that it's questionable whether we have the same moral responsibility toward nonhuman animals as we have toward humans, depending on how you construct your moral frameworks. If you look at how human brains are wired, the foundations of our conceptions of morality are built on in-group vs. out-group distinctions. So, assessing the moral status of animals based on our understanding of human psychology (our best way to guess at a "correct" moral framework) would indicate that, as things become less like us, our moral intuitions will guide us toward valuing them less.
 

I think you may have come to your probability distributions because you are a sequence thinker, using your intuitions to argue for each part of a sequence that arrives at some conclusion, whereas the proper thing to do when deciding whether or not to spend on an animal welfare charity is to use cluster-style thinking.

I hope that this is seen as a respectful difference in perspective and not at all a personal attack. I think it is useful to question these sorts of assumptions to make moral progress, but I also think we need a lot of evidence to overturn the assumption that humans are at least as morally relevant as animals, in large part due to the pre-existing moral intuitions we all probably share. There don't appear to be sufficient arguments out there to overturn this position.

Okay, that was enough philosophizing, let me put in a few more points in favor of my position here:

  • Most people I know who are smarter than me believe humans are more morally significant than animals. I know of zero people seriously arguing the opposite side.
  • If morality is actually all fake and a human invention with no objective truth to it, then humans and animals will both be worth zero, and I will still be correct.
  • The actual actions of people who argue animals are more morally relevant than humans are not to kill people to save animals, so there's probably no one who sincerely, deep down, believes this.
  • People tend to anthropomorphize things like teddy bears and Roombas, and mistakenly assign them some moral worth until they think about it more. Therefore, our intuitions can guide us to incorrect conclusions about what is morally worthwhile.

Thanks for clarifying!

This is an interesting exercise. I imagine that Luke’s estimates were informed by his uncertainty about multiple incompatible theories / considerations and so any smooth distribution won’t properly reflect the motivations that lead to those estimates. Do you think these results suggest anything about what a lumpy distribution would say?

Thanks for commenting!

If the moral weight distribution is based on 2 theories A and B which produce distributions (lumps) MW_A and MW_B, and we think A and B are equally valid, the moral weight and expected moral weight would be:

  • MW = 0.5 MW_A + 0.5 MW_B.
  • E(MW) = 0.5 E(MW_A) + 0.5 E(MW_B).

It is unclear whether this expected moral weight would be smaller/larger than that of the continuous case. Luke only provides 2 quantiles, but MW_A and MW_B are defined by 4 parameters (assuming 2 for each).

To derive the 4 parameters, one could further assume that the variance of MW_A equals that of MW_B, and that the median of MW equals the arithmetic/geometric mean of Luke's lower and upper bounds. However, I do not know whether these assumptions make sense.
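For what it is worth, here is a minimal sketch (with purely illustrative lump parameters, not ones derived from Luke's bounds) of how the mean and quantiles of such a two-lump mixture could be computed; matching its 10th and 90th percentiles to Luke's interval would then pin down the lump parameters.

```python
from scipy import stats
from scipy.optimize import brentq

# Two hypothetical "lumps" MW_A and MW_B (here lognormals), one per theory of
# consciousness, mixed with equal weight. Parameters are purely illustrative.
mw_a = stats.lognorm(s=1.0, scale=0.01)  # theory A: moral weight around 0.01
mw_b = stats.lognorm(s=1.0, scale=1.0)   # theory B: moral weight around 1

# The mean of the mixture is the weighted mean of the component means.
mixture_mean = 0.5 * mw_a.mean() + 0.5 * mw_b.mean()

# Quantiles of the mixture are found numerically from its CDF.
def mixture_cdf(x):
    return 0.5 * mw_a.cdf(x) + 0.5 * mw_b.cdf(x)

q10 = brentq(lambda x: mixture_cdf(x) - 0.1, 1e-9, 1e3)
q90 = brentq(lambda x: mixture_cdf(x) - 0.9, 1e-9, 1e3)

print(mixture_mean, q10, q90)
```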

It's awkward to interpret mathematical judgements about a value that is described as an unknown, then as a supposition about one's internal process of deciding an arbitrary value for the unknown, and finally as a possible range varying over a large magnitude for that unknown. That is what I decided the report on consciousness (and the speculation about moral weights) describes.

I would like to learn more about how EA folks typically assign evidence for the presence of different kinds of consciousness or moral weight of different species. In particular, what evidence helps you decide the presence of different aspects of consciousness in specific amounts? What evidence helps you decide the moral weight of a person of one species relative to another?

Finally, what is EA speculation about more traditional models of morality that rely on a moral identity, judgements of right and wrong, and, in particular, the symbolic importance of actions, even when they have (potentially) minimal verifiable consequences for others (for example, catching a fly and releasing it outside)?

Thanks for commenting!

In particular, what evidence helps you decide the presence of different aspects of consciousness in specific amounts?

What evidence helps you decide the moral weight of a person of one species relative to another?

In Luke's post, clock speed of consciousness, unity of consciousness, and unity-independent intensity of valenced aspects of consciousness are the factors based on which the quantiles for the moral weight were defined, I think.

Finally, what is EA speculation about more traditional models of morality that rely on a moral identity, judgements of right and wrong, and, in particular, the symbolic importance of actions, even when they have (potentially) minimal verifiable consequences for others (for example, catching a fly and releasing it outside)?

Personally, I put more than 90 % of weight on total hedonic utilitarianism (classical utilitarianism). However, in practice, the full consequences of my actions are really hard to measure, so I very often (always?) rely on heuristics to decide what to do (especially when there are "minimal verifiable [or measurable] consequences").

Note that cost-effectiveness analyses or other quantitative methods are still heuristics, not definitive answers, because they are always incomplete.

I took the clock speed, unity, and intensity factors to be the aspects of consciousness about which one gathered evidence.

Total hedonic utilitarianism is mathematically interesting.  I should explore its logical implications.

I appreciate what you describe as heuristics. In my everyday life I apply heuristics.

Morality is informed by heuristics that determine consequences of actions or by heuristics that determine the symbolic content of actions (their subjective or intersubjective meaning). 

EDIT: morality is also informed by heuristics that determine intentions of actions, irrespective of consequences of actions, but that was not my interest here.

I wonder what heuristics the EA community officially acknowledge as relevant to understanding the level of consciousness or moral weight of beings from other species.
