Interview with Christine M. Korsgaard: Animal Ethics, Kantianism, Utilitarianism

Only seeing this now, but she does have sections in the book on thinking about species, habitat loss, eliminating predation and what she calls "creation ethics" among other things. I didn't get the feeling reading the book that she would be against welfare reform, but leafing through the pages now I couldn't find any passage that covers that topic explicitly. Thanks for the resources.

Interview with Christine M. Korsgaard: Animal Ethics, Kantianism, Utilitarianism

That's interesting and I think that's true to a certain extent, the bottomless pits of suffering and all that. Though Kantianism does make some pretty strong demands in its own way, for instance in the way that it really hammers home the idea of seeing things from others' points of view (via the Formula of Humanity), or in the way that it considers some duties to be absolute ("perfect").

I believe that Korsgaard also thinks we have duties to help others promote their own good if it is at no great cost to ourselves, though these duties are not as strong as the duties not to violate other people's autonomy. I think these sorts of duties might lead to something like Effective Altruism, though I haven't really thought all of this through yet, or read much of the relevant literature, so what do I know.

Interview with Christine M. Korsgaard: Animal Ethics, Kantianism, Utilitarianism

Indeed, and a commenter there pointed out an interesting paper by Richard Yetter Chappell (pdf) which explores and argues against this claim by Korsgaard:

In utilitarianism, people and animals don’t really matter at all; they are just the place where the valuable things happen.

The title of the paper is "Value Receptacles". I haven't read it yet but I suspect it would be of interest to many here.

Interview with Christine M. Korsgaard: Animal Ethics, Kantianism, Utilitarianism

Thank you for the thoughtful comment! It is an excellent book – if you are at all interested in Kant's moral philosophy, I highly recommend it. I will preface the remainder of this comment with the caveat that I am explaining someone else's work, and that Professor Korsgaard may not agree with my interpretation. Also, any typos in the quoted passages are copying errors.

I haven't read her work myself and probably should, but I was told by someone that basically condition 3 or even having goal-directed behaviour is not necessary. I would hope it wouldn't be, because we could have a being who experiences good and bad and so has their own ends, but has no power to control what they experience and so would just be completely vulnerable and unable to pursue their own ends. Wouldn't such a being still matter?

Here's a passage from the book that expands on that thought but doesn't counter your objection:

The small [objection] is that the definition that I have given of what an animal is is not the same as the definition a contemporary biologist would give. An "animal", as I am using the term, is an organism that functions as an agent, where by agency I mean something like representation-governed locomotion. Animals are conscious organisms who seek out the things that are (functionally) good-for them and try to avoid the things that are bad. [...] The organisms we are concerned with when we think about whether we have duties to animals are sentient beings who perceive the world in valenced ways and act accordingly. This is the feature of organic life that I have argued places an organism in the morally interesting category of having a final good.

However, later on she gets to the argument from marginal cases (if something like intelligence or rationality is the ground for moral standing among humans, then what about infants, or people with severe developmental impairments?), which I think is similar to your objection here. Korsgaard argues against it: to her, there is such a thing as a type of creature, even if such categories have fuzzy borders. And though your example beings may not be able to pursue their own functional goods, they are still the sorts of creatures who do.

A human infant is not a particular kind of creature, but a human creature at a particular life stage. I believe that it is not proper to assign moral standing, and the properties on which it is grounded, to life stages or to the subjects of those stages. Moral standing should be accorded to persons and animals considered as the subjects of their whole lives, at least in the case of animals with enough psychic unity over time to be regarded as the subjects of their whole life. Nor, except perhaps in the case of extremely simple life forms, should we think of the subject of a life merely as a collection of the subjects of the different temporal stages of the life. [F]or most animals having a self is not just a matter of being conscious at any given moment, but rather a matter of having a consciousness or a point of view that is functionally unified both at a particular time and from one moment to the next. That ongoing self is the thing that should have or lack moral standing, or be the proper unit of moral concern.


There is a third reason for rejecting the argument from marginal cases, and it is the most important. A creature is not just a collection of properties, but a functional unity, whose parts and systems work together in keeping him alive and healthy in the particular way that is characteristic to his kind. Even if it were correct to characterize a human being with cognitive defects as "lacking reason", which usually it is not, this would not mean that it was appropriate to treat the human being as a non-rational animal. Rationality is not just a property that you might have or lack without any other difference, like blue eyes. To say that a creature is rational is not just to say that he has "reason" as one of his many properties, but to say something about the way he functions. [...] A rational being who lacks some of the properties that together make rational functioning possible is not non-rational, but rather defectively rational, and therefore unable to function well. [...] It is not as if you could simply subtract "rationality" from a human animal. A non-rational animal, after all, functions perfectly well without understanding the principles of reason, since he makes his choices in a different way.


The Argument from Marginal Cases ignores the functional unity of creatures. A creature who is constructed to function in part by reasoning but who is still developing or has been damaged is still a rational creature. So the Kantian need not grant and should not grant that infants, the insane, the demented, and so on, are non-rational beings. The point is not, of course, that we should treat infants and people with cognitive disabilities exactly the way we treat adult rational beings, because they too are rational beings. The way we treat any creature has to be responsive to the creature's actual condition. But the creature's condition itself is not given by a list of properties, but also by the way those properties work together.

Korsgaard is talking about rationality here because that, to her, is what sets humans apart from the other animals (though of course she thinks that is the reason why we are moral agents, not why we have moral standing). But I think she would argue similarly about creatures that are defective in other ways, e.g. those who have no power to control what they experience or to pursue goals.

I also wonder what she has in mind by "functional" in "functional good". Do we need to decide what something's function is, if any, in order to define its goods and bads, and if so, how do we do that? In my view, animals define their own goods and bads through their valenced experiences and/or desires; it is not just that they happen to experience their goods and bads, or that their experiences guide them towards their own functional goods.

If I understand you correctly, I think she would agree. Her distinction between "final goods" and "functional goods" comes, I think, from this 1983 paper of hers, though there she calls functional goods "instrumental" instead. The functional good is basically that which allows a thing to function well, e.g. a whetstone is good for the blade because it keeps it sharp and tar is good for the boat because it keeps it from taking in water. The final good is "the end or aim of all our strivings, or at any rate the crown of their success, the summum bonum, a state of affairs that is desirable or valuable or worth achieving for its own sake". Where does the final good come from? Korsgaard basically argues, if I recall correctly, following Aristotle, that creatures have functions, and that, when we act to achieve some end, to attain whatever we value as good-for us, we take that end to be good in the final sense. I think this is pretty similar to what you were getting at?

It's interesting that she brings up artwork and the environment, too, as potential ends in themselves.

Ah yes, I thought so too, especially since I had understood (mistakenly, apparently) from the book that she did not think of those things as ends in themselves. I actually wrote a dialogue in the old style about this very subject, concluding that inanimate objects are not ends in themselves.

What previous work has been done on factors that affect the pace of technological development?

One good resource is Innovation in Cultural Systems: Contributions from Evolutionary Anthropology. I think that is kind of what you're after? I wrote a little about this here:

Though innovation seems to be happening at breakneck speed, there is nothing abrupt about it. Changes are small & cumulative.[6] New ideas are based on old ideas, on recombinations of them & on extending them to new domains.[7] This does not make those ideas any less important. An illustrative example is the lightbulb, the history of which is one of incremental improvement. [...]

The diffusion of innovations has been shown normally to follow S-shaped cumulative distribution curves, with a very slow uptake followed by rapid spread followed by a slowing down as the innovation nears ubiquity.[8] Joseph Henrich has shown that these curves, which are drawn from real-life data, fit models where innovations are adopted based on their intrinsic attributes (as opposed to models in which individuals proceed by trial-&-error, for example).[9] In other words, in the real world, it seems, innovations spread in the main because people choose to adopt them based on their qualities. And which qualities are those? Everett Rogers, an innovation theorist who coined the term “early adopter”, identified five essential ones: an innovation must (1) have a relative advantage over previous ideas; (2) be compatible such that it can be used within existing systems; (3) be simple such that it is easy to understand & use; (4) be testable such that it can be experimented with; & (5) be observable such that its advantage is visible to others.[10]


The rate of cultural innovation generally is correlated with population size.[13] That makes sense: a country of a million will naturally produce more innovations than a country of one. Simulations indicate that innovation produces far more value in large population groups.[14] [...]

But there is also another quality that greatly affects the population-level rate of innovation. That quality is not necessity, which the adage calls the mother of invention; companies cut R&D costs when times are tough, not the other way around.[15] Neither is it a handful of geniuses making earth-shattering individual contributions.[16] No, what greatly affects a population’s rate of innovation is its interconnectedness, in other words how widely ideas, information & tools are shared.[17] In a culture that is deeply interconnected, where information is widely shared, innovations are observable & shared tools & standards mean that innovations are also more likely to be compatible. Most importantly, interconnectedness provides each individual with a large pool of ideas from which they can select the most attractive to modify, recombine, extend & spread in turn.
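The S-shaped dynamics described in the quoted passage can be sketched with a toy adoption model (my own illustration, not taken from the cited sources; the parameter values are made up): each adoption both spreads awareness of the innovation and shrinks the pool of remaining non-adopters, which produces slow early uptake, rapid middle growth, and a slowdown near ubiquity.

```python
def diffusion_curve(population=1000, attractiveness=0.3, steps=40):
    """Simulate cumulative adopters over time; returns a list.

    New adoptions are driven by contact between adopters and
    non-adopters, scaled by how attractive the innovation is.
    """
    adopters = 1.0  # a single initial innovator
    history = []
    for _ in range(steps):
        new = attractiveness * adopters * (1 - adopters / population)
        adopters += new
        history.append(adopters)
    return history

curve = diffusion_curve()
# Early increments are tiny, middle increments are large, and the
# curve flattens out again as it approaches the full population.
```

The growth term is largest when about half the population has adopted, which is exactly what gives the cumulative curve its S shape.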

Spears & Budolfson, 'Repugnant conclusions'

I'm neither a philosopher nor familiar with the formal methods Spears & Budolfson use, but here is my understanding of the paper, which may well be wrong.

Normally, the repugnant conclusion says that a very large population with only barely positive lives is better than a small population of really great lives. I don't think Spears & Budolfson deny that, in this particular situation, average utilitarianism (to take one example) does say that the small population of really great lives is better than the alternative. Instead, they rephrase the problem: for any population, you can always make its members really unhappy, so long as you add enough additional lives to counterbalance it. Even average utilitarianism aggregates, so a large number of slightly happy members will outweigh a small group of very unhappy members. In any case, so long as you can add an arbitrary number of members to a population, & so long as you aggregate utility, a very large number of small differences can outweigh a small number of large differences.
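As a toy illustration of the aggregation point (my own made-up numbers, not Spears & Budolfson's): even on the average view, a small group of very unhappy members can be outweighed by enough slightly happy members, so the population as a whole comes out positive.

```python
def average_utility(utilities):
    """Average utilitarianism's score for a population."""
    return sum(utilities) / len(utilities)

# A small group of very unhappy members...
miserable = [-50.0] * 100
# ...outweighed, even on average, by enough slightly happy members.
slightly_happy = [1.0] * 100_000

population = miserable + slightly_happy
# The average is positive despite the terrible lives in it.
assert average_utility(population) > 0
```

The hundred lives at -50 contribute -5,000 in total, while the hundred thousand lives at +1 contribute +100,000, so the average lands just under +1 even though some members suffer terribly.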

I take them to say that Parfit & others were looking not for forms of utilitarianism that avoided any repugnant conclusion, but for ones that avoided some specific repugnant conclusion for some specific hypothetical populations (such as those originally described by Parfit). But there are still, for all forms of utilitarianism – including those that solve Parfit's original problem – other repugnant conclusions for other hypothetical populations. And because the particular hypothetical populations that produce repugnant conclusions are different in different variants of utilitarianism, they cannot easily be compared & repugnant conclusions are therefore not a good measure.

They also argue that there are repugnant conclusions for non-aggregative forms of utilitarianism. As I interpret it, they argue that, for any suffering population, you can always distribute some fixed amount of utility by giving a tiny amount to each existing member & distributing the rest over a very large number of additional members, such that all original members are still suffering & all new members are suffering, too. But at every step we only added utility & therefore made everyone better off, so even if we don't aggregate utility, the final population should still be preferable to the original population. (To be clear, as I understand it, they are still discussing only utilitarian systems; the discussion doesn't apply to, for example, Kantian or virtue ethics.)
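My reading of that construction, with made-up numbers of my own: every step only adds utility (each original member gets a tiny boost, and new members are added), yet everyone in the much larger final population still suffers.

```python
def add_lives(population, n_new, boost=0.1, new_utility=-4.9):
    """Boost every existing member slightly, then add n_new members
    whose lives, while 'added utility' on this accounting, are still bad."""
    return [u + boost for u in population] + [new_utility] * n_new

original = [-5.0] * 100                      # a small suffering population
final = add_lives(original, n_new=1_000_000)

# Every original member is better off than before...
assert all(f > o for f, o in zip(final, original))
# ...yet every member of the final population is still suffering.
assert all(u < 0 for u in final)
```

So a view that only checks that no one was made worse off, without aggregating, still seems forced to prefer the vast suffering population to the original one.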

So I think the suggestion is that one shouldn't treat repugnancy as a binary category, but rather as some sort of continuum, though precisely how to measure it is yet to be worked out.

How much does performance differ between people?

I was going to comment something to this effect, too. The authors write:

For instance, we find ‘heavy-tailed’ distributions (e.g. log-normal, power law) of scientific citations, startup valuations, income, and media sales. By contrast, a large meta-analysis reports ‘thin-tailed’ (Gaussian) distributions for ex-post performance in less complex jobs such as cook or mail carrier: the top 1% account for 3-3.7% of the total.

But there’s an important difference between these groups – the products involved in the first group are cheaply reproducible (any number of people can read the same papers, invest in the same start-up or read the same articles – I don’t know how to interpret income here) & those in the second group are not (not everyone can use the same cook or mail carrier).

So I propose that the difference there has less to do with the complexity of the jobs & more to do with how reproducible the products involved are.
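A quick sketch of the contrast (my own parameters, not the paper's data): compare the share of total output captured by the top 1% under a heavy-tailed (log-normal) versus a thin-tailed (normal) performance distribution.

```python
import random

random.seed(0)
n = 100_000

# Heavy-tailed performance, as for citations or startup valuations.
heavy = [random.lognormvariate(0, 2) for _ in range(n)]
# Thin-tailed performance, as for routine jobs; clipped at zero so
# that shares of the total remain meaningful.
thin = [max(random.gauss(100, 15), 0.0) for _ in range(n)]

def top_share(values, fraction=0.01):
    """Fraction of the total produced by the top `fraction` of performers."""
    ranked = sorted(values, reverse=True)
    k = int(len(ranked) * fraction)
    return sum(ranked[:k]) / sum(ranked)

# The top 1% capture a far larger share of the heavy-tailed total
# (tens of percent) than of the thin-tailed one (a few percent).
```

With these assumed parameters the gap is dramatic, which is consistent with the quoted finding that thin-tailed jobs give the top 1% only a few percent of the total; the exact figures of course depend on the distributions chosen.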

On future people, looking back at 21st century longtermism

I do have a strong intuition that humans are simply more capable of having wonderful lives than other species, and this is probably down to higher intelligence. Therefore, given that I see no intrinsic value and little instrumental value in species diversity, if I could play god I would just make loads of humans (assuming total utilitarianism is true). I could possibly be wrong that humans are more capable of wonderful lives though.

I'd be skeptical of that for a few reasons: (1) I think different things are good for different species due to their different natures/capacities (the good here being whatever it is that wonderful lives have a lot of), e.g. contemplation is good for humans but not pigs & rooting around in straw is good for pigs but not humans; (2) I think it doesn't make sense to compare these goods across species, because it means different species have different standards for goodness; & (3) I think it is almost nonsensical to ask, say, whether it would be better for a pig to be a human, or for a human to be a dog. But I recognise that these arguments aren't particularly tractable for a utilitarian!

Life is not fair. The simple point is that non-human animals are very prone to exploitation (factory farming is a case in point). There are risks of astronomical suffering that could be locked in in the future. I just don't think it's worth the risk, so, as a utilitarian, it just makes sense to me to have humans over chickens. You could argue getting rid of all humans gets rid of exploitation too, but ultimately I do think maximising welfare just means having loads of humans, so I lean towards being averse to human extinction.

That life is not fair in the sense that different people (or animals) are dealt different cards, so to speak, is true -- the cosmos is indifferent. But moral agents can be fair (in the sense of just), & in this case it's not Life making those groups' existence miserable; it's moral agents who are doing that.

I think I would agree with you on the prone-to-exploitation argument if I were a utility maximiser, with the possible objection that, if humans reach the level of wisdom & technology needed to humanely euthanise a species in order to reduce suffering, possibly they would also be wise & capable enough to implement safeguards against future exploitation of that species instead. But that objection is not good enough if one believes that humans have a higher capacity as receptacles of utility. If I were a utilitarian who believed that, then I think I would agree with you (without having thought about it too much).

Absolutely I care about orangutans, and the death of orangutans that are living good lives is a bad thing. I was just making the point that if one puts their longtermist hat on, these deaths are very insignificant compared to other issues (in reality I have some moral uncertainty and so would wear my short-termist cap too, making me want to save an orangutan if it was easy to do so).

Got it. I guess my original uncertainty (& this is not something I have thought a lot about at all, so bear with me here) was whether longtermist considerations shouldn't cause us to worry about orangutan extinction risks, too, given that orangutans are not so dissimilar from what we were a few million years ago. So that in a very distant future they might have the potential to become something like human, or more? That depends a bit on how rare a thing human evolution was, which I don't know.

Yes indeed. My utilitarian philosophy doesn't care that we would have loads of humans and no non-human animals. Again, this is justified due to lower risks of exploitation for humans and (possibly) greater capacities for welfare. I just want to maximise welfare and I don't care who or what holds that welfare.

By the way, I should mention that I think your argument for species extinction is reasonable & I'm glad there's someone out there making it (especially given that I expect many people to react negatively towards it, just on an emotional level). If I thought that goodness was not necessarily tethered to beings for whom things can be good or bad, but on the contrary that it was some thing that just resides in sentient beings but can be independently observed, compared & summed up, well, then I might even agree with it.
