There seems to be a widespread but mistaken belief that the biggest problems in the world are caused by hatred. However, if we miraculously got rid of all the hatred in the world tomorrow, the worst problems would remain. [1]

All the major causes of suffering in the world seem to be the result of the absence of caring (i.e. indifference), rather than the presence of hatred:

1. Extreme poverty is not the result of the rich hating the poor; its continued existence is the result of the rich being largely indifferent towards the suffering associated with extreme poverty.

2. Factory farming is not the result of human hatred towards non-human animals; it is the result of human indifference towards the intense suffering of animals kept in extreme confinement.

3. Humanity's collective under-investment in the prevention of existential risks, which endanger the existence and flourishing of our future descendants, is not the result of the present generation hating future generations; it is the result of the present generation being largely indifferent towards the well-being of potential future lives, of which there could be an astronomical number.

4. The widespread and intense suffering of animals living in the wild is not the result of hatred; it is the result of a blind and indifferent optimization process (i.e. evolution through natural selection).

Indifference is the immediate result of our evolved human cognitive architecture, which is largely insensitive to the scope of moral problems and to the well-being of those who are different from ourselves, as demonstrated by the work of Paul Slovic.

The problem of indifference is not solvable merely by increasing emotions such as love or empathy, despite their importance in our everyday lives. These emotions are too narrow and parochial to reliably extend to all beings that deserve our moral concern. In Joshua Greene's words, our "emotions make us social animals, turning Me into Us. But they also make us tribal animals, turning Us against Them". Heightened empathy and love are important drivers of in-group altruism, but by themselves they are not sufficient to overcome our collective indifference—especially because they may in some cases actually increase bias and hostility towards the out-group. In the same vein, Paul Bloom points out that

"I actually feel a lot less empathy for people who aren’t in my culture, who don’t share my skin color, who don’t share my language. This is a terrible fact of human nature, and it operates at a subconscious level, but we know that it happens. (...) empathy often leads us to make stupid and unethical decisions. (...) when it comes to moral reasoning, empathy (...) just throws in bias and innumeracy and confusion."

The fact that to a human brain ten deaths feel only marginally less bad than one thousand deaths is merely a fact about our state of mind, not about the world itself. The badness of the suffering of others is not in the least alleviated by the fact that we are largely indifferent towards it, whether by our choice or by our nature. Joseph Stalin is reported to have said, "A single death is a tragedy; a million deaths is a statistic". Yet, by embracing our brain's capacity for deliberate reasoning, we can truly take a step towards overcoming our indifference—recognising that while every single death is indeed a tragedy, a million deaths really are a million tragedies.

In other words, what is needed to address the problem of indifference is, as Bloom puts it, that we should "in the moral domain, (...) become rational deliberators motivat[ed] by compassion and care for others." Relatedly, Robin Hanson comments that "if only we could be moved more by our heads than our hearts, we could do a lot more good." This is almost precisely how Peter Singer described effective altruism in its early days in his 2013 TED talk, stating

"[effective altruism is] important because it combines both the heart and the head. The heart, of course, you felt (...) the empathy for that child. But it's really important to use the head as well to make sure that what you do is effective and well-directed; and not only that, but also I think reason helps us to understand that other people, wherever they are, are like us, that they can suffer as we can, that parents grieve for the deaths of their children, as we do, and that just as our lives and our well-being matter to us, it matters just as much to all of these people".

To overcome our collective indifference towards the largest causes of suffering, Singer suggests that we need to use our reasoning faculties to accept a notion of impartiality as fundamental in our concern for others and to let this insight guide our altruistic decision-making and actions: the insight that the suffering and well-being of myself and my tribe is not intrinsically more important than that of yourself and your tribe.

Impartiality is most crucial when our emotional indifference is most inadequate—when we are dealing with the suffering of those most distant from ourselves. Yet, as Singer's idea of the expanding circle illustrates, moral impartiality can transcend geographic, temporal and even species-based boundaries. It was the application of this impartiality principle that put Jeremy Bentham so far ahead of his contemporaries when he wrote

"Why should the law refuse its protection to any sensitive being? (...) The time will come when humanity will extend its mantle over everything which breathes (...) when the rest of the animal creation may acquire those rights which never could have been withholden from them but by the hand of tyranny."

The ongoing moral catastrophes of our time—extreme poverty, factory farming, existential risks etc.—illustrate that even centuries after Bentham's death there remain numerous neglected opportunities to do an enormous amount of good. These are opportunities to, as Singer puts it, "prevent something bad from happening, without thereby sacrificing anything of comparable moral importance". Using evidence and reason, effective altruism is the intellectual pursuit and practical implementation of these opportunities.

Or, to put it differently: effective altruism is the serious attempt to overcome our collective indifference towards the major causes of suffering in the world.

In addition to providing benefits such as improved coordination, the effective altruism community serves this purpose in two important ways. First, it increases the altruistic motivation of its members over the long term by connecting them with other individuals, groups and organisations that share their goals and epistemic norms, thus overcoming feelings of isolation in their pursuit to do the most good. Second, the EA community establishes social incentives that reward altruistic actions aimed at benefiting others the most, as opposed to altruistic actions that mainly serve to foster our own (hidden) selfish motives. The 'warm glow' theory of giving in economics suggests that many people donate money or volunteer (in part) to reap the emotional satisfaction associated with helping others, often choosing emotionally salient causes over more effective ones. In contrast, the EA community increases the positive impact of its members by positively reinforcing effectively altruistic decisions (e.g. systematically comparing cause areas, choosing charities based on rigorous cost-effectiveness evaluations, etc.). On this point, Robin Hanson argues that

"to put ourselves in situations where our hidden motives better align with our ideal motives (...) we might join the effective altruism movement, in order to surround ourselves with people who will judge our charitable giving more by its effects than by superficial appearances. Incentives are like the wind: we can choose to row or tack against it, but it’s better if we can arrange to have the wind at our backs."

At the same time, the ideas and principles of effective altruism help to overcome indifference on an intellectual level. For instance, Nate Soares writes that

“if you choose to do so, you can still act like the world's problems are as big as they are. You can stop trusting the internal feelings to guide your actions and switch over to manual control. (…) addressing the major problems of our time isn't about feeling a strong compulsion to do so. It's about doing it anyway, even when internal compulsion utterly fails to capture the scope of the problems we face. (...) The closest we can get [to comprehending the scope of these problems] is doing the multiplication: finding something we care about, putting a number on it, and multiplying. And then trusting the numbers more than we trust our feelings."
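Soares's "doing the multiplication" can be illustrated with a toy calculation. The sketch below (in Python; the numbers and the logarithmic "felt" curve are purely illustrative assumptions, not empirical estimates of scope insensitivity) contrasts how little our intuitive reaction grows with the number of deaths against what explicit multiplication tells us:

```python
import math

def felt_badness(deaths: int) -> float:
    """Toy stand-in for scope-insensitive intuition: grows only logarithmically."""
    return math.log10(deaths + 1)

def multiplied_badness(deaths: int, value_per_death: float = 1.0) -> float:
    """Explicit multiplication: every death counts fully."""
    return deaths * value_per_death

for deaths in (1, 10, 1_000, 1_000_000):
    print(f"{deaths:>9,} deaths | felt: {felt_badness(deaths):5.2f}"
          f" | multiplied: {multiplied_badness(deaths):>12,.0f}")
```

Under these made-up assumptions, the "felt" column grows only about six-fold between ten deaths and a million, while the multiplied column grows a hundred-thousand-fold; that gap is precisely what deliberate reasoning is meant to close.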

For some people, the rational and systematic approach to doing good taken by effective altruism may at first feel cold and calculating. However, effective altruism really is warm and calculating. Behind every number there are individuals who matter. Prioritising who to help first is the warm-hearted response to the tragically indifferent world we live in; a world where the resources of those who care are far too limited to provide for everyone deserving of being cared for.

Making a similar point, Holly Elmore states that

“I fervently hope that one day we will be able to save everyone. In the meantime, it is irresponsible to pretend that we aren’t making life and death decisions with the allocation of our resources. Pretending there is no choice only makes our decisions worse. (...) I understand that it’s hard, that we will always instinctively care more for the people we see than those we don’t. (...) But there should be great shame in letting more people suffer and die than needed to because you can’t look past your own feelings."

Let me end with the words of someone who fought humanity's collective indifference for most of his professional career, the late statistician Hans Rosling:

“It is hard for people to talk about resources when it comes to saving lives, or prolonging or improving them. Doing so is often taken for heartlessness. Yet, so long as resources are not infinite—and they never are infinite—it is the most compassionate thing to do to use your brain and work out how to do the most good with what you have.”

I owe my gratitude to Chi Nguyen, Nadia Mir-Montazeri, Eli Nathan and Huw Thomas for providing me with valuable feedback on this post.


[1]: Stefan Schubert commented on an earlier version of this post: "I agree with much of this. However, [I] guess that morally motivated hatred presents large opportunity costs: people focus on hating their moral opponents, and on zero-sum conflicts with them, rather than on finding effective solutions. Hatred is also probably a reason why people don't engage more in moral trade and moral cooperation."

Comments (8)



I majored in cognitive science with a particular focus on decision-making and persuasion, and I love seeing posts that discuss framing and public messaging.

I think that your framing here describes the world pretty accurately. But when I actually talk about EA with non-EA people who I'd like to become interested, I don't use an "indifference" framework. That would certainly be better than a "hatred" framework, but I still want to avoid making people feel like I'm accusing them of indifference to suffering.

Instead, I use an "unawareness" framework. Rather than "most people are indifferent to these problems", I say something like "most people aren't fully aware of the extent of the problems, or do know about the problems but aren't sure how to address them; instead, they stick to working on things they feel they understand better".

In my experience, this naturally provokes questions about the extent of problems or how to address them, since "not being aware" is morally neutral and not shameful to alleviate (while "indifference" is a bad thing most people won't want to admit feeling).

I broadly agree, but maybe some further nuance could be added. One could oppose the "indifference framework" either because one thinks it's inaccurate, or because one thinks it's not a fruitful way of engaging with people who haven't bought into the ideas of effective altruism. One might interpret you as saying that the indifference framework is indeed accurate but that we still shouldn't use it, for the second, pragmatic reason. (I'm not sure if that's what you actually mean; in any case, it's an interesting interpretation which it's useful to comment on.) I think the indifference framework may not be quite accurate, however. Take, for instance, this sentence:

The problem of indifference is not solvable merely by increasing emotions such as love or empathy, despite their importance in our everyday lives.

The way these words are normally used, a person who is highly loving or empathetic is not indifferent. Therefore, again using these words in their normal senses, the problem of indifference is in a sense solvable by increasing emotions such as love or empathy.

I guess an objection here would be that though such a person wouldn't be generally indifferent, they would be indifferent towards the most important problems and the most important sources of suffering. If so, it may be a bit misleading to call them "indifferent". But also, it's not clear that they really are indifferent towards those problems. It may be that they don't know about them (cf. Aaron's comment), or that other problems are more available in their mind, or that they fail to see that any resources spent on less important problems leave fewer resources for the most important problems (i.e., a failure to fully grasp notions such as opportunity cost and prioritization).

Relatedly, many people who are not working towards solving the problems that effective altruists find the greatest put a lot of resources into projects which, from their point of view, are moral. This includes charitable giving, political engagement, etc. It doesn't seem quite correct to me to call them "indifferent".

To clarify my point (which did need clarification, thanks for pointing that out):

  • I disagree on (1) and think few people are truly indifferent to the suffering of other humans, even far-off strangers.
  • I somewhat agree on (2) and think many people are mostly or entirely indifferent to the suffering of farm animals.
  • I broadly agree on (3) and (4); I think most people are indifferent to "future people" (in the sense of "people who could be alive a thousand years from now", not "the children you may have someday"), as well as the suffering of wild animals, though the latter may be closer to "instinctive justification to avoid intrusive/horrible thoughts", which only masquerades as indifference.

I certainly wouldn't use "indifferent" as an indiscriminate term to describe the view of an average person about morality, especially since I'd guess that most people are involved in some form of altruistic activity that they care about.

I think the indifference framework can be useful when thinking on a population level (e.g. few voters care about issue X, so the electorate can be said to be "indifferent" in a practical sense), but not on an individual level (almost anyone is capable of caring about EA-aligned causes if they are presented in the right way to reach that specific person).

Instead, I use an "unawareness" framework. Rather than "most people are indifferent to these problems", I say something like "most people aren't fully aware of the extent of the problems, or do know about the problems but aren't sure how to address them; instead, they stick to working on things they feel they understand better".

I would guess that, similarly, this is why "woke" has caught on as a popular way of talking about those who "wake up" to the problems around them that they were previously ignorant of and "asleep to": it's a framing that lets you feel good about becoming aware of and doing more about various issues in the world without having to feel too bad about not having done things about them in the past, so you aren't as much on the defensive when someone tries to "shake you awake" to those problems.

"However, effective altruism really is warm and calculating."

I can't believe I've never thought of this! That's great :)

Great post, too. I think EA has a helpful message for most people who are drawn to it, and for many people that message is overcoming status quo indifference. However, I worry that caring too much, as in overidentifying with or feeling personally responsible for the suffering of the world, is also a major EA failure mode. I have observed that most people assume their natural tendency towards either indifference or overresponsibility is shared by basically everyone else, and this assumption determines what message they think the world needs to hear. For instance, I'm someone who's naturally overresponsible. I don't need EA to remind me to care. I need it to remind me that the indiscriminate fucks I'm giving are wasted, because they can take a huge toll on me and aren't particularly helping anyone else. Hence, I talk a lot about self-care and the pitfalls of trying to be too morally perfect within EA. When spreading the word about EA, I emphasize the moral value of prioritization and effectiveness because that's what was missing for me.

EA introduced me to many new things to care about, but I only didn't care about them before because I hadn't realized they were actionable. This might be quibbling, but I wouldn't say I was indifferent before-- I just had limiting assumptions about how I could help. I side more with Aaron's "unawareness" frame on this.

+6 on "warm and calculating"

I loved reading this post. 

Several years after it was written, it feels more relevant than ever. I see so much media coverage of Effective Altruism which is at worst negative - often presented as rich tech bros trying to ease their conscience about their rich lifestyle and high salaries, especially after SBF - and at best grudgingly positive, as for example in the article in Time this week. 

I'm relatively new to EA - about 2 years in the community, about 1 year actively involved. And what I've noticed is the dramatic contrast between how EA is often presented (cold, logical, even cynical), and the people within EA, who are passionate about doing good, caring, friendly and incredibly supportive and helpful. 

It's frustrating when something like SBF's implosion happens, and it does hurt the image of EA. But the EA community needs to keep pushing back on the narrative that we are cold and calculating. 

So, I really love this post because what it's saying is the opposite of the common misperception: In fact, in a world of cold indifference, the EA's are the group who are not indifferent, who care so much that they will work to help people they will never meet, animals they will never understand, future generations they won't live to see.


Nice thoughts, Denis!

the EA's are the group who are not indifferent,

This might be a little bit of a nitpick, but I would say "EA is one of the groups [rather than the group] who are not indifferent", because there are others.
