
This version of the essay has been lightly edited. You can find the original here.


One theme of our work is trying to help populations that many people don't feel are worth helping at all.

We've seen major opportunities to improve the welfare of factory-farmed animals, because so few others are trying to do it. When working on immigration reform, we've seen big debates about how immigration affects wages for people already in the U.S., and much less discussion of how it affects immigrants. Even our interest in global health and development is fairly unusual: Many Americans may agree that charitable dollars go further overseas, but prefer to give domestically because they so strongly prioritize people in their own country compared to people in the rest of the world.[1]

The question "Who deserves empathy and moral concern?" is central for us. We think it's one of the most important questions for effective giving.

Unfortunately, we don't think we can trust conventional wisdom and intuition on the matter: History has too many cases where entire populations were dismissed, mistreated, and deprived of basic rights for reasons that fit the conventional wisdom of the time but today look indefensible. Instead, we aspire to radical empathy: working hard to extend empathy to everyone it should be extended to, even when it is unusual or seems strange to do so.

To clarify the choice of terminology:

  • "Radical" is intended as the opposite of "traditional" or "conventional." It doesn't necessarily mean "extreme" or "all-inclusive"; we don't extend empathy to everyone and everything (this would leave us essentially no basis for making decisions about morality). It refers to working hard to make the best choices we can, without anchoring to convention.

  • "Empathy" is intended to capture the idea that one could imagine oneself in another's position, and recognizes the other as having experiences that are worthy of consideration. It does not refer to literally feeling what another feels, and is therefore distinct from the "empathy" critiqued in Against Empathy (a book that acknowledges the multiple meanings of the term and explicitly focuses on one).

Conventional wisdom and intuition aren't good enough

In The Expanding Circle, Peter Singer discusses how, over the course of history, "The circle of altruism has broadened from the family and tribe to the nation and race ... to all human beings" (and adds that "the process should not stop there"). [2] By today's standards, the earliest cases he describes are striking:

At first, [the] insider/outsider distinction applied even between the citizens of neighboring Greek city-states; thus there is a tombstone of the mid-fifth century B.C. which reads:

This memorial is set over the body of a very good man. Pythion, from Megara, slew seven men and broke off seven spear points in their bodies ... This man, who saved three Athenian regiments ... having brought sorrow to no one among all men who dwell on earth, went down to the underworld felicitated in the eyes of all.

This is quite consistent with the comic way in which Aristophanes treats the starvation of the Greek enemies of the Athenians, starvation which resulted from the devastation the Athenians had themselves inflicted. Plato, however, suggested an advance on this morality: he argued that Greeks should not, in war, enslave other Greeks, lay waste their lands or raze their houses; they should do these things only to non-Greeks. These examples could be multiplied almost indefinitely. The ancient Assyrian kings boastfully recorded in stone how they had tortured their non-Assyrian enemies and covered the valleys and mountains with their corpses. Romans looked on barbarians as beings who could be captured like animals for use as slaves or made to entertain the crowds by killing each other in the Colosseum. In modern times Europeans have stopped treating each other in this way, but less than two hundred years ago some still regarded Africans as outside the bounds of ethics, and therefore a resource which should be harvested and put to useful work. Similarly Australian aborigines were, to many early settlers from England, a kind of pest, to be hunted and killed whenever they proved troublesome. [3]

The end of the quote transitions to more recent, familiar failures of morality. In recent centuries, extreme racism, sexism, and other forms of bigotry ⁠— including slavery ⁠— have been practiced explicitly and without apology, and often widely accepted by the most respected people in society.

From today's vantage point, these seem like extraordinarily shameful behaviors, and people who were early to reject them ⁠— such as early abolitionists and early feminists ⁠— look to have done extraordinary amounts of good. But at the time, looking to conventional wisdom and intuition wouldn't necessarily have helped people avoid the shameful behaviors or seek out the helpful ones.

Today's norms seem superior in some respects. For example, racism is much more rarely explicitly advocated (which is not to say that it is rarely practiced). However, we think today's norms are still fundamentally inadequate for the question of who deserves empathy and moral concern. One sign of this is the discourse in the U.S. around immigrants, which tends to avoid explicit racism but often embraces nationalism ⁠— excluding or downplaying the rights and concerns of people who aren't American citizens (and even more so, people who aren't in the U.S. but would like to be).

Intellect vs. emotion

I sometimes hear the sentiment that moral atrocities tend to come from thinking of morality abstractly, losing sight of the basic emotional basis for empathy, and distancing oneself from the people one's actions affect.

I think this is true in some cases, but importantly false in others.

People living peaceful lives are often squeamish about violence, but it seems that this squeamishness can be overcome disturbingly quickly with experience. There are ample examples throughout history of large numbers of "conventional" people casually, and even happily, practicing direct cruelty and violence toward those whose rights they didn't recognize. [4] Today, watching the casualness with which factory farm workers handle animals (as shown in this gruesome video), I doubt that people would eat much less meat if they had to kill animals themselves. I don't think the key is whether people see and feel the consequences of their actions. More important is whether they recognize those whom their actions affect as fellow persons, meriting moral consideration.

On the flip side, there seems to be at least some precedent for using logical reasoning to reach moral conclusions that look strikingly prescient in retrospect. For example, see Wikipedia on Jeremy Bentham, who is known for basing his morality on the straightforward, quantitative logic of utilitarianism:

He advocated individual and economic freedom, the separation of church and state, freedom of expression, equal rights for women, the right to divorce, and the decriminalising of homosexual acts. [My note: he lived from 1748-1832, well before most of these views were common.] He called for the abolition of slavery, the abolition of the death penalty, and the abolition of physical punishment, including that of children. He has also become known in recent years as an early advocate of animal rights.

Aspiring to radical empathy

Who deserves empathy and moral concern?

To the extent that we get this question wrong, we risk making atrocious choices. If we can get it right to an unusual degree, we might be able to do outsized amounts of good.

Unfortunately, we don't think it is necessarily easy to get it right, and we're far from confident that we are doing so. But here are a few principles we try to follow, in making our best attempt:

Acknowledging our uncertainty. For example, we're quite unsure of where animals should fit into our moral framework. My own reflections and reasoning about philosophy of mind have, so far, seemed to indicate against the idea that e.g. chickens merit moral concern. And my intuitions value humans astronomically more. However, I don't think either my reflections or my intuitions are highly reliable, especially given that many thoughtful people disagree. And if chickens do indeed merit moral concern, the amount and extent of their mistreatment is staggering. With worldview diversification in mind, I don't want us to pass up the potentially considerable opportunities to improve their welfare.

I think the uncertainty we have on this point warrants putting significant resources into farm animal welfare, as well as working to generally avoid language that implies that only humans are morally relevant.[5]

That said, I don't feel uncertain about all of our unusual choices. I'm confident that differences in geography, nationality, and race ought not affect moral concern, and our giving should reflect this.

Being extremely careful about too quickly dismissing "strange" arguments on this topic. Relatively small numbers of people argue that insects, and even some algorithms run on today's computers, merit moral concern. It's easy and intuitive to laugh off these viewpoints, since they seem so strange on their face and have such radical implications. But as argued above, I think we should be highly suspicious of our instincts to dismiss unusual viewpoints on who merits moral concern. And the stakes could certainly be high if these viewpoints turn out to be more reasonable than they appear at first.

So far I remain unconvinced that insects, or any algorithms run on today's computers, are strong candidates for meriting moral concern. But I think it's important to keep an open mind.

Exploring the idea of supporting deeper analysis. Luke Muehlhauser is currently exploring [6] the current state of research and argumentation on the question of who merits moral concern (which he calls the question of moral patienthood). It's possible that if we identify gaps in the literature and opportunities to become better informed, we'll recommend funding further work. In the near future, work along these lines could affect our priorities within farm animal welfare ⁠— for example, it could affect how we prioritize work focused on improving the treatment of fish. Ideally, our views on moral patienthood would be informed by an extensive literature drawing on as much deep reflection, empirical investigation, and principled argumentation as possible.

Not limiting ourselves to the "frontier," because widely recognized problems still do a great deal of damage. In our work, we often find ourselves focusing on unconventional targets for charitable giving, such as farm animal welfare and potential risks from advanced artificial intelligence. This is because we often find that opportunities to do disproportionate amounts of good are in areas that have been, in our view, relatively neglected by others. However, our goal is to do the most good we can, not to seek out and support those causes which are most "radical" in our present society. When we see great opportunities to play a role in addressing harms in more widely-acknowledged areas ⁠— for example, in the U.S. criminal justice system ⁠— we take them.

This work is licensed under a Creative Commons Attribution 4.0 International License.


  1. For example, according to data from Giving USA, only approximately 4% of US giving in 2015 was focused on international aid. (Reported by Charity Navigator here.) ↩︎

  2. Page 120. ↩︎

  3. Pages 112-113. ↩︎

  4. Many examples available in the first chapter of The Better Angels of Our Nature. ↩︎

  5. As a side note, it is often tricky to avoid such language. We generally use the term "persons" when we want to refer to beings that merit moral concern, without pre-judging whether such beings are human and also without causing too much distraction for casual readers. A more precise term is "moral patients." ↩︎

  6. You can find his final report here. ↩︎

Comments

This article was the "Classic Forum post" in the EA Forum Digest today. An excellent choice. Though an old post (in EA terms, 2017 is ancient history!), it asks a question that is fundamental to EA. If we want to measure and compare the impact (effectiveness) of two interventions quantitatively, we must multiply the objective impact measured for a group by some factor quantifying that group's relative moral value - be it insects or future generations or chickens.

Since joining EA, I've been impressed by how many people in this community do this, and how often it leads to surprising conclusions, for example in longtermism or animal rights. 

At the same time, I would hazard that the vast majority of people in the world today would essentially give "humans who are alive today" an infinitely larger value than animals or future generations. They wouldn't use those words, but that's how they'd view it. As in, they may be all in favour of animal rights, but would they be willing to sacrifice one human life to save one million cows? Most would not. Would they agree to sacrifice 100 people today to save 100 billion people who will live in the 24th century? Many would not.

I struggle with questions like this - it seems to require a massive amount of confidence that I'm right, and I'm not sure I have that. 

So it's great that we look for opportunities (reducing x-risks, alternative protein, biosecurity, ...) which are win/win, but sometimes we'll be forced to choose. When I think of radical empathy, I don't just think of the "easy" part where we recognise the potential for suffering and the importance of quality of life, but also of the difficult part where we may have to make choices where one side of the balance has the lives of real, living human beings and the other side does not. 

Why not start from the other end and work backwards? Why wouldn't we treasure every living being and non-living thing?

Aren't insects (just to react to the article) worthy of protecting as an important part of the food chain (from a utilitarian standpoint), for biodiversity (resilience of the biosphere) or even just simply being? After all, there are numerous articles and studies about their numbers and species declining precipitously, see for example: https://www.theguardian.com/environment/2019/feb/10/plummeting-insect-numbers-threaten-collapse-of-nature

But let's stretch ourselves a bit further! What about non-living things? Why not give a bit more respect to objects, as a start by reducing waste? If we take a longtermist view, there will absolutely not be enough raw materials for people for even 100-200 more years – let alone 800,000 – with our current (and increasing) global rates of resource extraction.

I'm not saying these should be immediate priorities over human beings, but I really miss these considerations from the article.

I feel the same way and want to point out a less utilitarian and more selfish effect. I think it's not hard for people to envision a world where people truly cherish: cherish themselves, each other, their possessions and everything around them. I'm assuming all of us (except maybe some neurodivergent beings) have had moments where we can see the beauty of everything. Walks in nature can easily trigger this effect. This sort of appreciation for everything seems to be the basis of almost every religion, and can still easily be found today in Stoicism and Buddhism, for instance. The teachings there suggest that one meditates for those around them and in return gains clarity and love. We gain something by giving.

I think it’s powerful to point towards those selfish effects if we want to redirect people’s beliefs and intuitions to include everything as objects to cherish. Grappling with the unwholesome reasons of doing something wholesome is just another part of that practice.

At the basis of giving and helping is both the rational ethical choice and also the “warm fuzzies”. These warm fuzzies can be there whenever you’re ready and they’re completely free.

It's not clear to me what it would mean to "treasure a non-living thing" in the same way that we should "treasure a living [I'd add 'sentient'] being". When I treasure a sentient being, what I mean by this is that:

(1) I recognize that sentient being's capacity to feel positive and negative states of mind;

(2) I recognize that that sentient being has interests of their own; and 

(3) I take the previous two facts into consideration in my decision-making so that I don't, unnecessarily, make that sentient being feel negative states of mind, or deprive them of their interests.

However, in the case of non-living things, such as rocks, knives, toys, etc, facts (1) and (2) are absent, and therefore I cannot treasure them in the same way I treasure sentient beings.

I can, of course, decide that some non-living thing has value (such as a potato), in so far as it can, for instance, satisfy the interest of a sentient being not to be hungry, and make that sentient being not experience the negative state of mind associated with hunger, but rather experience the positive state of mind associated with satiation, and the ripple effects of nutrition that flow from this.

In your example of reducing waste, who (or what), exactly, is being treasured? The waste, or the future sentient beings who, because of an environmentally friendly disposal of the waste, will have their interests satisfied by not living in a depleted Earth?

The difficulty I have with this argument is: where do you draw the line on sentience? And if there's a living thing just below the line, without "real feelings" or interests, but still able to experience pain or other sensations, would you not treasure it?

One issue with my post I realise is that maybe by definition you need a sentient being to feel real empathy with, but what I had in mind wasn't strictly just empathy, but caring for or treasuring things.

In a sense it's more of an invitation for a thought experiment to extend our circle of concern regardless of utility. So to answer your question, it's treasuring / appreciating / valuing / finding delight in anything really, just for the mere fact that it came together from cosmic dust. So even if something doesn't have utility for a sentient being, favouring not destroying or harming them.

That being said, of course I'm not saying we should care more about a tuft of grass over a goat for example (and prevent the goat from eating the grass out of concern for the grass's wellbeing) or to put more effort into preserving minerals than farm animal welfare, etc. Instead, as a concrete example, to consider the effects of our (over)consumption in increasing entropy and decreasing natural beauty, even if mining a bare hill without vegetation doesn't impact anything living.

As a newcomer to EA I am probably going to ask some banal questions so please bear with me :)

Surely 'optics' are important for any movement, so if it requires a paragraph to explain each word attached to an idea, then maybe those are ineffective words.

For example, Radical is explained above as moderately complex (for most people) and fundamentally in juxtaposition to other common and simple words.

It might seem like quibbling but I strongly believe in the power of neurolinguistics (warned you that I could be banal; did you believe me?) hence something like 'Unconventional' I think would be a more effective optic.

Not really expecting a response given how old this thread is... but would be keen to know how this idea has evolved since 2017?

In my experience, the term "radical empathy" isn't used very often when people explain these ideas to the public -- I more often see it used as shorthand within the community, as a quick way of referring to concepts that people are already familiar with.

In public communication, I see this kind of thing more often just called "empathy", or referred to in simple terms like "caring for everyone equally", "helping people no matter where they live", etc.

Footnote one is missing the link to the Charity Navigators report: https://web.archive.org/web/20200220220116/https://www.charitynavigator.org/index.cfm/bay/content.view/cpid/42

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

I like this overall (re-)framing of EA: I think that both words are things that are important to EA (and that we maybe want even more of on the margin).

This writing may be addressing the risk of rejection of EA due to its consideration of individuals. Prima facie, the post claims that “we don't extend empathy to everyone and everything,” but it implies that the opposite should be the case with one’s thinking development. The post seeks to gain authority by critiquing others. It does not invite collaborative critical thinking.

Thus, readers can aspire to critique others based on their ‘extent of empathy,’ which can mean the consideration of broad moral circles, without developing the skill of inviting critical thinking on various EA-related concepts.

Readers taking this approach could pose a reputational risk to the EA community among creative problem solvers, while attracting hierarchically-minded persons who seek to gain status by reproducing received notions.

While inviting participants in hierarchical structures which do not celebrate critical thinking may be important for the EA community, I suggest that the critical thinkers in these systems are invited first. Thus, I recommend that a critical discussion is joined with this piece, in order to encourage critically thinking hierarchically-minded individuals to share EA-related ideas in their networks in a way favorable to these networks’ participants.