
Context note: This is more of an emotional piece meant to capture feelings and reservations, rather than a logical piece meant to make a persuasive point. It is very personal, represents only my own views, and does not necessarily represent the views of anyone else or any institution I may represent.

~

It’s been a rough past five months for effective altruism.

Because of this, many people are understandably questioning their connection and commitment to the movement, and whether “effective altruism” is still a brand or set of ideas worth promoting. I’ve heard some suggest it might be better to promote other brands instead, and perhaps even abandon promotion of “effective altruism” altogether.

I could see ways in which this is a good move. Ultimately I want to do whatever is most impactful. However, I worry that moving away from effective altruism could make us lose some of what I think makes the ideas and community so special and what drew me to the community more than ten years ago.

Essentially, effective altruism contains three radical ideas that I don’t easily find in other communities. These are the ideas I want to protect.

 

Radical empathy

Humanity has long had a fairly narrow moral circle. Radical empathy is the idea that many groups of people, or other entities, are worthy of moral concern even if they don't look or act like us. Moreover, it’s important to deliberately identify all entities worthy of moral concern so that we can ensure they are protected. I find effective altruism to be unique in extending moral concern not just to traditionally neglected farm animals and future humans (very important), but also to invertebrates and potential digital minds. Effective altruists are also unique in trying to intentionally understand who might matter and why, and actually incorporating this into the process of discovering how best to help the world. "Who might matter that we currently neglect?" is a key question that is asked far too rarely.

We understand that while it’s ok to have special concern for family and friends, we should generally aim to make altruistic decisions based on impartiality, not weighing people differently just because they are geographically distant, temporally distant, members of a different species, or running cognition on a different substrate.

I worry that if we were to promote individual subcomponents of effective altruism, like pandemic preparedness or AI risk, we might not end up promoting radical empathy and we might end up missing entire classes of entities that matter. For example, I worry that one more subtle form of misaligned AI might be an AI that treats humans ok but adopts common human views on nonhuman animal welfare and perpetuates factory farming or abuse of a massive number of digital minds. The fact that effective altruism has somehow created a lot of AI developers that avoid eating meat and care about nonhuman animals is a big and fairly unexpected win. I think only some weird movement that somehow combined factory farming prevention with AI risk prevention could’ve created that.

 

Scope sensitivity

I also really like that EAs are willing to “shut up and multiply”. We’re scope sensitive. We’re cause neutral. Nearly everyone else in the world is not. Many people pick ways to improve the world based on vibes or personal experience, rather than through a systematic search for how they can best use their resources. Effective altruism understands that resources are limited and that we have to make hard choices between potential interventions, helping 100 people instead of 10 even if helping the 10 feels as satisfying or more so. We understand that “value per effort” and “bang for your buck” matter in philanthropy.

We’re also willing to think in bets and use expected value reasoning. When facing problems, we can think probabilistically rather than in black and white. We can properly handle uncertainty.

I worry that by not communicating scope sensitivity when promoting subcomponents of effective altruism, we might recruit people who are unprepared to prioritize well when tackling the relevant problems. It’s important that we empower people to be scope sensitive when deciding which risks to tackle and how to tackle them.

 

Scout mindset

The third and final factor I find really rare in the world is scout mindset: the view that we should be open, collaborative, and truth-seeking in our understanding of what to do. While rare elsewhere, it is abundant in effective altruism. We look for the truth rather than treating our arguments as soldiers or our beliefs as attire. We’re open to being wrong, even about very fundamental things like our preferred ways to help others. We practice and hone good epistemics. We hold each other to high standards of honesty, integrity, and friendliness.

I worry that by not communicating scout mindset when promoting components of effective altruism, we may recruit people who argue counterproductively, do not reason well, and ultimately hold us back as we try to collectively find the truth.

 

Why this matters to me

These three radical views – radical empathy, scope sensitivity, and scout mindset – are so rare in the world that I rarely find people with just one of them, let alone all three. I think the effective altruism community has been amazing at cultivating a large group of talented and compelling people who hold all three. I think that’s precious, and I want to protect it as much as I can.

It’s natural for social movements to go through trials and tribulations. I was a part of the New Atheist movement during the acrimonious Atheism Plus split. I was a part of the animal rights movement during the shockingly poor behavior and, more importantly, the terrible institutional handling of issues related to Wayne Pacelle, Nick Cooney, and others. While these issues are serious, it’s normal for social movements to go through crisis – what’s more important is how we respond to that crisis.

I’m definitely ok with using other brands if that’s what is most impactful. I don’t want to cling stubbornly to my movement – that wouldn’t be a good use of scout mindset. 

But I think I’m going to stick with effective altruism, at least as an internal motivation, as long as I think it effectively represents these three radical ideas and as long as no other movement does a better job at that.

Comments (16)

Peter - excellent short piece; I agree with all of it.

The three themes you mentioned -- radical empathy, scope-sensitivity, scout mindset -- are really the three key takeaways that I try to get my students to learn in my undergrad classes on EA. Even if they don't remember any of the details of global public health, AI X-risk, or factory farming, I hope they remember those principles.

Let me make a case that we should call it Radical Compassion instead of Radical Empathy. This is a very minor point of course, but then again, people have endlessly debated whether Effective Altruism is a sub-optimal label and what a better label would be. People clearly care about what important things are called (and maybe rightly so from a linguistic precision and marketing perspective).

You probably know this literature, but there's a lot of confusion around what empathy should be defined as. Sometimes, empathy refers to various perspective-taking processes, like feeling what another feels (let's call it Empathy 1). I think this is the most common lay definition. Sometimes, it refers to valuing others' welfare, also referred to as empathic concern or compassion (let's call it Empathy 2). Sometimes, definitions reference both processes (let's call it Empathy 3), which doesn't seem like the most helpful strategy to me.

Holden briefly points to the debate in his post which you link to, but it's not clear to me why he chose the empathy term despite this confusion and disagreement. In one place, he seems to endorse Empathy 3, but in another, he separates empathy from moral concern, which is inconsistent with Empathy 3.

I think most EAs want people to care about the welfare of others. It doesn't matter if people imagine what it feels like to be a chicken pecked to death in a factory farm (that's going to be near-impossible), or if they imagine how they would feel in factory farm conditions (again, very difficult to imagine). We just want them to care about the chicken's welfare. We therefore want to promote Empathy 2, not 1 or 3. Given the confusion around the empathy term, it seems better to stick with compassion. Lay definitions of compassion also align with the "just care about their welfare" view.

bxjaeger -- fair point. It's worth emphasizing Paul Bloom's distinction between rational compassion and emotional empathy, and the superiority of the former when thinking about evidence-based policies and interventions. 

Agreed - I think Paul Bloom's distinction makes a lot of sense. Many prominent empathy researchers have pushed back on this, mostly to argue for the Empathy 3 definition that I listed, but I don't see any benefit in conflating these very different processes under one umbrella term.

Yep -- I think Paul Bloom makes an important point in arguing that 'Empathy 2' (or 'rational compassion') is more consistent with EA-style scope-sensitivity, and less likely to lead to 'compassion fatigue', compared to 'Empathy 1' (feeling another's suffering as if it's one's own).

I don't think compassion is the right term descriptively for EA views, and it seems worse than empathy here. Compassion is (by the most common definitions, I think) a response to (ongoing) suffering (or misfortune).

Longtermism might not count as compassionate because it's more preventative than responsive, and the motivation to ensure future happy people come to exist probably isn't a matter of compassion, because it's not aimed at addressing suffering (or misfortune). But what Holden is referring to is meant to include those. I think what we're aiming for is counting all interests and anyone who has interests, as well as the equal consideration of interests.

Of course, acts that are supported by longtermism or that ensure future happy people come to exist can be compassionate, but maybe not for longtermist reasons and probably not because they ensure future happy people exist, and instead because they also address suffering (or misfortune). And longtermists and those focused on ensuring future happy people come to exist can still be compassionate in general, but those motivations (or at least ensuring future happy people come to exist) don't seem to be compassionate, i.e. they're just not aimed at ongoing suffering in particular.

You're right that both empathy and compassion are typically used to describe what determines people's motivation to relieve someone's suffering. Neither perfectly captures preventive thinking or consideration of interests (beyond welfare and suffering) that characterize longtermist thinking. I think you are right that compassion doesn't lead you to want future people to exist. But I do think that it leads you to want future people to have positive lives. This point is harder to make for empathy. Compassion often means caring for others because we value their welfare, so it can be easily applied to animals or future people. Empathy means caring for others because we (in some way) feel what it's like to be them or in their position. It seems like this is more difficult when we talk about animals and future people. 

I would argue that empathy, as it is typically described, is even more local and immediate, whereas compassion, again as typically described, gets somewhat closer to the idea of putting weight on others' welfare (in a potentially fully calculated, unemotional way), which I think is closer to EA thinking. This is also in line with how Paul Bloom frames it: empathy is the more emotional route to caring about others, whereas compassion is the more reflective/rational route. So I agree that neither label captures the breadth of EA thinking and motivations, especially not when considering longtermism. I am not even arguing very strongly for compassion as the label we should go with. My argument is more that empathy seems to be a particularly bad choice.

Great piece. Short and sweet. 

Given the stratospheric karma this post has reached, and the ensuing likelihood it becomes a referenced classic, I thought it'd be a good time to descend to some pedantry. 

"Scope sensitivity" as a phrase doesn't click with me. For some reason, it bounces off my brain. Please let me know if I seem alone in this regard. What scope are we sensitive to? The scope of impact? Also some of the related slogans "shut up and multiply" and "cause neutral" aren't much clearer. "Shut up and multiply" which seems slightly offputting / crass as a phrase stripped of context, gives no hint at what we're multiplying[1]. "Cause neutral" without elaboration, seems objectionable. We shouldn't be neutral about causes! We should prefer the ones that do the most good! They both require extra context and elaboration. If this is something that is used to introduce EA, which now seems likelier, I think this section confuses a bit. A good slogan should have a clear, and difficult to misinterpret meaning that requires little elaboration. "Radical compassion / empathy" does a good job of this. "Scout mindset" is slightly more in-groupy, but I don't think newbies would be surprised that thinking like a scout involves careful exploration of ideas and emphasizes the importance of reporting the truth of what you find. 

Some alternatives to "scope sensitivity" are: 

  • "Follow the numbers" / "crunch the numbers": we don't quite primarily "follow the data / evidence" anymore, but we certainly try to follow the numbers. 
  • "More is better" / "More-imization" okay, this is a bit silly, but I assume that Peter was intentionally avoiding saying something like "Maximization mindset" which is more intuitive than "scope sensitivity", but has probably fallen a bit out of vogue. We think that doing more good for the same cost is always better.
  • "Cost-effectiveness guided" while it sounds technocratic, that's kind of the point. Ultimately it all comes back to cost-effectiveness. Why not say so? 
  1. ^

    If I knew nothing else, I'd guess it's a suggestion of the profound implications of viewing probabilities as dependent (multiplicative) instead of independent (additive) and, consequently, support for complex systems approaches / GEM modelling instead of reductive OLSing with sparse interaction terms. /Joke

"Scale Matters" ?

Thanks for the piece. It brings me two conflicting emotions: warmth, and an unsettling sadness about "excluding" the rest of humanity. I don't think I need to explain the first one, but I want to explore the second.

I note two things which I can unite under the "honest EA" concept:
1) EA, as a concept, is a very human way of thinking. If you ask your average Joe if they want to do good, they'd likely say "yes," and if you asked them if they'd like to do it effectively, they'd probably also say "yes." So, I really believe that an honest version of EA is close to a universal morality (*might be too universal, honestly, but that is a problem with the word "good").
2) All three features you point out, as well as point 1) above, are not binary. For each individual, there is a distribution over the range of empathy; there is scope sensitivity (it just gets overridden by moral circle concerns); and surely there are environments and conditions in which almost any human can experience the scout mindset, it's just that few people bother to create those environments. Being effective in one's altruism is also a point on a distribution.

As you note, we appreciate that it's ok to care more about our family and friends, and in those moments we are not "absolute EAs" but very normal humans.

I believe that honestly appreciating that we are just points on the distribution who, thanks to a privilege of economic, intellectual, or emotional stability, sit at the "high" end can give us the humility to empathize with a fellow non-EA human being and recognize that the three features you've listed are, in fact, everywhere. It is just that it takes much more than these three qualities to make a human.

I believe this recognition is essential for the future of the community and the psychological health of the citizens of this forum. 
I want to talk to non-EAs not as people who don't share my values, but as people who weren't lucky enough to have a chance to make space for EA activities in their days, minds, and hearts, but who deep inside share the same honest EA idea: "we aim to do good effectively, while also doing those other mysterious things that make us into wholesome human beings."

*This might be the very same way you feel. But I still thought it was important to share.

I strongly agree with this, but worry that protecting those three accidentally sneaks in a lot of baggage. As I wrote at length, I think there are a lot of different pieces that are easily conflated, and I'm concerned that saying yes to the movement without being clear on which parts you disagree with is going to lead to bad epistemic practice. 

Given that, I think it's especially valuable to say what you disagree with EA consensus about, or which things you're not willing to keep as key values even if you think they are OK, as a way to keep a clearer scout mindset. (This is, of course, socially much harder, but it's also part of what keeps a cause from becoming a cult.)

Thank you, Peter. These are the things that initially attracted me to effective altruism, and I appreciate you articulating them so effectively. I will also say that these are ideas I admire you for obviously fostering, both through Rethink Priorities and your forecasting work.

Unfortunately, it seems to me that the first and third ideas are far less prominent features of EA than they used to be.

The first idea seems to me to be less prominent as a result of so many people believing in extremely high short-term catastrophic AI risk. It seems that this has encouraged an attitude of animal welfare being trivial by comparison and the welfare of humans in the far future being irrelevant (because if we don't solve it, humans will go extinct within decades). Attitudes about animal welfare seem, in my opinion, to be compounded by the increasing influence of Eliezer, who does not believe that nonhuman animals (with the possible exception of chimps) are sentient.

The third idea also seems to be declining as a result of hard feelings related to internal culture warring. In my view, bickering about the integrity of various prominent figures, about the appropriate reaction to SBF, about whose fault SBF was, about how prevalent sexual assault is in EA, about how to respond to sexual assault in EA, about whether those responses are cultish or at least bigoted, etc. etc. etc. has just made the general epistemics a lot worse. I see these internal culture wars bleeding into cause areas and other ostensibly unrelated topics. People are frustrated with the community and, regardless of whatever side of these culture wars they are on, they are annoyed about the existence of the other side and frustrated that these seemingly fundamental issues of common decency are even a discussion. It puts them in no mood to discuss malaria vaccines with curiosity.

I personally deactivated my real-name forum account and stopped participating in the in person community and talking to people about ea. I still really really value these three ideas and see pockets of the community that still embody them. I really hope the community once again embodies them like I think they used to.

While these issues are serious, it’s normal for social movements to go through crisis – what’s more important is how we respond to that crisis.

I like CEA's timely addition last summer of collaborative spirit to the other three values you have here (which they called impartial altruism, prioritization, and open truthseeking).

I will probably use this framing to communicate EA from now on.

+1. Most people I speak to who have 'only heard of EA' explain it as the idea that one should make more money to donate more (and more effectively). However, the principles described here are more encompassing.

one more subtle form of misaligned AI might be an AI that treats humans ok but adopts common human views on nonhuman animal welfare and perpetuates factory farming or abuse of a massive number of digital minds

This is unrelated to the core messages of the post, but I think there's an important point to consider. A sufficiently intelligent system could improve cultured meat technology or invent other technological innovations for producing meat without factory farms.
