There seems to be a widespread but mistaken belief that the biggest problems in the world are caused by hatred. However, if we miraculously got rid of all the hatred in the world tomorrow, the worst problems would remain. [1]

All the major causes of suffering in the world seem to be the result of the absence of caring (i.e. indifference), rather than the presence of hatred:

1. Extreme poverty is not the result of the rich hating the poor; its continued existence is the result of the rich being largely indifferent towards the suffering it involves.

2. Factory farming is not the result of human hatred towards non-human animals; it is the result of human indifference towards the intense suffering of animals living in extreme confinement.

3. Humanity's collective under-investment in the prevention of existential risks, which endanger the existence and flourishing of our future descendants, is not the result of the present generation hating future generations; it is the result of the present generation being largely indifferent towards the well-being of potential future lives, of which there could be an astronomical number.

4. The widespread and intense suffering of animals living in the wild is not the result of hatred; it is the result of a blind and indifferent optimization process (i.e. evolution through natural selection).

Indifference is the immediate result of our evolved human cognitive architecture, which is largely insensitive to the scope of moral problems and to the well-being of those different from ourselves, as the work of Paul Slovic demonstrates.

The problem of indifference is not solvable merely by increasing emotions such as love or empathy, despite their importance in our everyday lives. These emotions are too narrow and parochial to reliably extend to all beings that deserve our moral concern. In Joshua Greene's words, our "emotions make us social animals, turning Me into Us. But they also make us tribal animals, turning Us against Them". Heightened empathy and love are important drivers of in-group altruism, but by themselves they are not sufficient to overcome our collective indifference, especially because they may in some cases actually increase bias and hostility towards the out-group. In the same vein, Paul Bloom points out that

"I actually feel a lot less empathy for people who aren’t in my culture, who don’t share my skin color, who don’t share my language. This is a terrible fact of human nature, and it operates at a subconscious level, but we know that it happens. (...) empathy often leads us to make stupid and unethical decisions. (...) when it comes to moral reasoning, empathy (...) just throws in bias and innumeracy and confusion."

The fact that to a human brain ten deaths feel only marginally less bad than one thousand deaths is merely a fact about our state of mind, not about the world itself. The badness of the suffering of others is not in the least alleviated by the fact that we are largely indifferent towards it, whether by our choice or by our nature. Joseph Stalin is reported to have said, "A single death is a tragedy; a million deaths is a statistic". Yet, by embracing our brain's capacity for deliberate reasoning, we can take a real step towards overcoming our indifference: recognising that while every single death is indeed a tragedy, a million deaths really are a million tragedies.

In other words, what is needed to address the problem of indifference is, as Bloom puts it, that we should "in the moral domain, (...) become rational deliberators motivat[ed] by compassion and care for others." Relatedly, Robin Hanson comments that "if only we could be moved more by our heads than our hearts, we could do a lot more good." This is almost precisely how Peter Singer described effective altruism in its early days in his 2013 TED talk, stating

"[effective altruism is] important because it combines both the heart and the head. The heart, of course, you felt (...) the empathy for that child. But it's really important to use the head as well to make sure that what you do is effective and well-directed; and not only that, but also I think reason helps us to understand that other people, wherever they are, are like us, that they can suffer as we can, that parents grieve for the deaths of their children, as we do, and that just as our lives and our well-being matter to us, it matters just as much to all of these people".

To overcome our collective indifference towards the largest causes of suffering, Singer suggests that we need to use our reasoning faculties to accept a notion of impartiality as fundamental in our concern for others and to let this insight guide our altruistic decision-making and actions: the insight that the suffering and well-being of myself and my tribe are not intrinsically more important than those of yourself and your tribe.

Impartiality is most crucial when our emotional indifference is most inadequate: when we are dealing with the suffering of those most distant from ourselves. Yet, as Singer's idea of the expanding circle illustrates, moral impartiality can transcend geographic, temporal and even species-based boundaries. It was the application of this impartiality principle that put Jeremy Bentham so far ahead of his contemporaries when he wrote

"Why should the law refuse its protection to any sensitive being? (...) The time will come when humanity will extend its mantle over everything which breathes (...) when the rest of the animal creation may acquire those rights which never could have been withholden from them but by the hand of tyranny."

The ongoing moral catastrophes of our time (extreme poverty, factory farming, existential risks, etc.) illustrate that nearly two centuries after Bentham's death there remain numerous neglected opportunities to do an enormous amount of good. These are opportunities to, as Singer puts it, "prevent something bad from happening, without thereby sacrificing anything of comparable moral importance". Using evidence and reason, effective altruism is the intellectual pursuit and practical implementation of these opportunities.

Or, to put it differently: effective altruism is the serious attempt to overcome our collective indifference towards the major causes of suffering in the world.

In addition to providing benefits such as improved coordination, the effective altruism community serves this purpose in two important ways. First, it increases the altruistic motivation of its members over the long term by connecting them with other individuals, groups and organisations with shared goals and epistemic norms, thus overcoming feelings of isolation in their pursuit of doing the most good. Second, the EA community establishes social incentives that reward altruistic actions aimed at benefiting others the most, as opposed to altruistic actions that mainly serve to foster our own (hidden) selfish motives. The 'warm glow' theory of giving in economics suggests that many people donate money or volunteer (in part) to reap the emotional satisfaction associated with helping others, often choosing emotionally salient causes over more effective ones. In contrast, the EA community increases the positive impact of its members by positively reinforcing effectively altruistic decisions (e.g. by systematically comparing cause areas and choosing charities based on rigorous cost-effectiveness evaluations). On this point, Robin Hanson argues that

"to put ourselves in situations where our hidden motives better align with our ideal motives (...) we might join the effective altruism movement, in order to surround ourselves with people who will judge our charitable giving more by its effects than by superficial appearances. Incentives are like the wind: we can choose to row or tack against it, but it’s better if we can arrange to have the wind at our backs."

At the same time, the ideas and principles of effective altruism help to overcome indifference on an intellectual level. For instance, Nate Soares writes that

“if you choose to do so, you can still act like the world's problems are as big as they are. You can stop trusting the internal feelings to guide your actions and switch over to manual control. (…) addressing the major problems of our time isn't about feeling a strong compulsion to do so. It's about doing it anyway, even when internal compulsion utterly fails to capture the scope of the problems we face. (...) The closest we can get [to comprehending the scope of these problems] is doing the multiplication: finding something we care about, putting a number on it, and multiplying. And then trusting the numbers more than we trust our feelings.”
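To make the "doing the multiplication" step concrete, here is a minimal back-of-the-envelope sketch in Python. The donation amount and cost-per-life figures are hypothetical placeholders chosen purely for illustration, not real charity evaluations:

```python
# A minimal sketch of "doing the multiplication": put a number on each
# option and compare, rather than trusting scope-insensitive feelings.
# All figures below are hypothetical placeholders, not real estimates.

donation_budget = 10_000  # USD available to give

# Hypothetical cost (USD) to save one life, per cause
cost_per_life_saved = {
    "emotionally salient cause": 500_000,
    "less salient but effective cause": 5_000,
}

for cause, cost in cost_per_life_saved.items():
    expected_lives_saved = donation_budget / cost
    print(f"{cause}: ~{expected_lives_saved:.2f} expected lives saved")
```

To our feelings the two options register as roughly similar; the multiplication exposes a hundredfold difference in expected impact under these assumed numbers.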

For some people, the rational and systematic approach to doing good taken by effective altruism may at first feel cold and calculating. However, effective altruism really is warm and calculating. Behind every number there are individuals who matter. Prioritising whom to help first is the warm-hearted response to the tragically indifferent world we live in; a world where the resources of those who care are far too limited to provide for everyone deserving of care.

Making a similar point, Holly Elmore states that

“I fervently hope that one day we will be able to save everyone. In the meantime, it is irresponsible to pretend that we aren’t making life and death decisions with the allocation of our resources. Pretending there is no choice only makes our decisions worse. (...) I understand that it’s hard, that we will always instinctively care more for the people we see than those we don’t. (...) But there should be great shame in letting more people suffer and die than needed to because you can’t look past your own feelings.”

Let me close with the words of someone who fought humanity's collective indifference for most of his professional career. The late statistician Hans Rosling wrote:

“It is hard for people to talk about resources when it comes to saving lives, or prolonging or improving them. Doing so is often taken for heartlessness. Yet, so long as resources are not infinite—and they never are infinite—it is the most compassionate thing to do to use your brain and work out how to do the most good with what you have.”

I owe my gratitude to Chi Nguyen, Nadia Mir-Montazeri, Eli Nathan and Huw Thomas for providing me with valuable feedback on this post.


[1]: Stefan Schubert commented on an earlier version of this post: "I agree with much of this. However, [I] guess that morally motivated hatred presents large opportunity costs: people focus on hating their moral opponents, and on zero-sum conflicts with them, rather than on finding effective solutions. Hatred is also probably a reason why people don't engage more in moral trade and moral cooperation."

Comments



I majored in cognitive science with a particular focus on decision-making and persuasion, and I love seeing posts that discuss framing and public messaging.

I think that your framing here describes the world pretty accurately. But when I actually talk about EA with non-EA people I'd like to get interested, I don't use an "indifference" framework. That would certainly be better than a "hatred" framework, but I still want to avoid making people feel like I'm accusing them of indifference to suffering.

Instead, I use an "unawareness" framework. Rather than "most people are indifferent to these problems", I say something like "most people aren't fully aware of the extent of the problems, or do know about the problems but aren't sure how to address them; instead, they stick to working on things they feel they understand better".

In my experience, this naturally provokes questions about the extent of problems or how to address them, since "not being aware" is morally neutral and not shameful to alleviate (while "indifference" is a bad thing most people won't want to admit feeling).

I broadly agree, but maybe some further nuance could be added. One could oppose the "indifference framework" either because one thinks it's inaccurate, or because one thinks it's not a fruitful way of engaging with people who haven't bought into the ideas of effective altruism. One might interpret you as saying that the indifference framework is indeed accurate but that we still shouldn't use it, for the second, pragmatic reason. (I'm not sure if that's what you actually mean; in any case, it's an interesting interpretation worth commenting on.) I think the indifference framework may not be quite accurate, however. Take, for instance, this sentence:

The problem of indifference is not solvable merely by increasing emotions such as love or empathy, despite their importance in our everyday lives.

The way these words are normally used, a person who is highly loving or empathetic is not indifferent. Therefore, again using these words in their normal senses, the problem of indifference is in a sense solvable by increasing emotions such as love or empathy.

I guess an objection here would be that though such a person wouldn't be generally indifferent, they would be indifferent towards the most important problems and the most important sources of suffering. If so, it may be a bit misleading to call them "indifferent". But also, it's not clear that they really are indifferent towards those problems. It may be that they don't know about them (cf. Aaron's comment), or that other problems are more available in their mind, or that they fail to see that any resources spent on less important problems will leave fewer resources for the most important problems (i.e., a failure to fully grasp notions such as opportunity cost and prioritization).

Relatedly, many people who are not working towards solving the problems that effective altruists consider most important put a lot of resources into projects which, from their point of view, are moral. This includes charitable giving, political engagement, etc. It doesn't seem quite correct to me to call them "indifferent".

To clarify my point (which did need clarification, thanks for pointing that out):

  • I disagree on (1) and think few people are truly indifferent to the suffering of other humans, even far-off strangers.
  • I somewhat agree on (2) and think many people are mostly or entirely indifferent to the suffering of farm animals.
  • I broadly agree on (3) and (4); I think most people are indifferent to "future people" (in the sense of "people who could be alive a thousand years from now", not "the children you may have someday"), as well as the suffering of wild animals, though the latter may be closer to "instinctive justification to avoid intrusive/horrible thoughts", which only masquerades as indifference.

I certainly wouldn't use "indifferent" as an indiscriminate term to describe the view of an average person about morality, especially since I'd guess that most people are involved in some form of altruistic activity that they care about.

I think the indifference framework can be useful when thinking on a population level (e.g. few voters care about issue X, so the electorate can be said to be "indifferent" in a practical sense), but not on an individual level (almost anyone is capable of caring about EA-aligned causes if they are presented in the right way to reach that specific person).

Instead, I use an "unawareness" framework. Rather than "most people are indifferent to these problems", I say something like "most people aren't fully aware of the extent of the problems, or do know about the problems but aren't sure how to address them; instead, they stick to working on things they feel they understand better".

I would guess that similarly this is why "woke" has caught on as a popular way of talking about those who "wake up" to the problems around them that they were previously ignorant of and "asleep to": it's a framing that lets you feel good about becoming aware of, and doing more about, various issues in the world without having to feel too bad about not having done things about them in the past, so you aren't as much on the defensive when someone tries to "shake you awake" to those problems.

"However, effective altruism really is warm and calculating."

I can't believe I've never thought of this! That's great :)

Great post, too. I think EA has a helpful message for most people who are drawn to it, and for many people that message is overcoming status quo indifference. However, I worry that caring too much, as in overidentifying with or feeling personally responsible for the suffering of the world, is also a major EA failure mode. I have observed that most people assume their natural tendency towards either indifference or overresponsibility is shared by basically everyone else, and this assumption determines what message they think the world needs to hear. For instance, I'm someone who's naturally overresponsible. I don't need EA to remind me to care. I need it to remind me that the indiscriminate fucks I'm giving are wasted, because they can take a huge toll on me and aren't particularly helping anyone else. Hence, I talk a lot about self-care and the pitfalls of trying to be too morally perfect within EA. When spreading the word about EA, I emphasize the moral value of prioritization and effectiveness because that's what was missing for me.

EA introduced me to many new things to care about, but I only didn't care about them before because I hadn't realized they were actionable. This might be quibbling, but I wouldn't say I was indifferent before; I just had limiting assumptions about how I could help. I side more with Aaron's "unawareness" frame on this.

+6 on "warm and calculating"

I loved reading this post. 

Several years after it was written, it feels more relevant than ever. I see so much media coverage of Effective Altruism that is at worst negative (often presenting EAs as rich tech bros trying to ease their consciences about their lifestyles and high salaries, especially after SBF) and at best grudgingly positive, as in the article in Time this week.

I'm relatively new to EA: about 2 years in the community, about 1 year actively involved. And what I've noticed is the dramatic contrast between how EA is often presented (cold, logical, even cynical) and the people within EA, who are passionate about doing good: caring, friendly, and incredibly supportive and helpful.

It's frustrating when something like SBF's implosion happens, and it does hurt the image of EA. But the EA community needs to keep pushing back on the narrative that we are cold and calculating. 

So, I really love this post because what it's saying is the opposite of the common misperception: in fact, in a world of cold indifference, EAs are the group who are not indifferent, who care so much that they will work to help people they will never meet, animals they will never understand, and future generations they won't live to see.

Nice thoughts, Denis!

EAs are the group who are not indifferent,

This might be a little bit of a nitpick, but I would say "EA is one of the groups [rather than the group] who are not indifferent", because there are others.
