I'd like to open a discussion on making Effective Altruism more emotionally appealing. I'm especially interested in this topic because of my broader project, Intentional Insights, which aims to spread rational thinking, including about Effective Altruism, to a wide audience. In the course of that work, I and other members of Intentional Insights engage with many people who are interested in EA when we present it to them, and who accept the premise of doing the most good for the greatest number, but who have trouble engaging with the movement because of the emotional challenges they experience. To be clear, this is in addition to the problematic signaling coming from the EA movement that Tom Davidson described well in his recent piece, "EA's Image Problem"; the issues I describe here are not explicitly due to the things outlined there, although those play an indirect role.

 

What I am talking about is people who are interested in effective giving optimized to save the most lives, but who then have trouble buying into the EA approach emotionally. They have trouble accepting that they have inherent cognitive biases that skew their intuitions about optimal giving. They struggle to let go of cached patterns of giving to previous causes and to accept that their previous giving was suboptimal. They experience guilt over their previous giving, or the lack thereof, which causes them to flinch away from the EA movement. They have difficulty connecting emotionally with the data presented by venues like GiveWell as evidence for optimal giving; it doesn't feel emotionally rewarding to them. Moreover, they face emotional challenges with the sheer amount they need to learn to "get" Effective Altruism - data analysis, utilitarian philosophy, rationality, etc. Many accept intellectually that an African life is inherently worth the same as an American life, but then have trouble emotionally enacting the implications of that intellectual recognition by optimizing their giving toward alleviating African rather than American suffering.

 

For instance, right now Intentional Insights is focusing on spreading rational thinking and effective altruism to the skeptic/secular movement. People I talk to who accept the premise of optimizing giving to do the most good try to rationalize their current giving to secular communities as optimal for saving lives by saying things like "well, if I give to my secular community, it will create a venue where other secular people will feel safe and respected, and then we can give later to save the lives of Africans." Now, this is an awfully convenient way of justifying current giving, and I suspect it does not actually optimize for saving lives but is instead an example of confirmation bias. Sure, I present data from GiveWell on the benefits of giving to developing countries, but they still have an out that lets them preserve their self-image as rational people, since the QALYs of giving to a secular community and then potentially giving together later are hard to quantify. Moreover, they often have trouble engaging with the dry data analysis; it just doesn't ring true to them emotionally.

 

This example illustrates some of the problems with accepting cognitive biases, letting go of cached patterns of giving, connecting emotionally with data, and enacting the implications of one's intellectual recognition - the gap at the heart of the drowning child problem. Now, you might find the stances I described above weird and counterintuitive. I hear you; my gut reaction also does not accept these stances. If I learn that something is true - for instance, that my current giving is not optimized for doing the most good - then it is relatively easy for me to let go of cached patterns and update my beliefs.

 

However, I think I, and the bulk of EAs in general, are much more analytical in our thinking than the baseline. If we want to expand the EA movement, we can't fall into the typical mind fallacy and assume that what worked to convince us will convince others who are less analytical and more emotionally oriented in their thinking. Too often, I have seen effective altruists try to convince others by waving data in their faces, and then calling them intellectually dishonest and inconsistent thinkers when those others did not change their perspective because of their internal emotional resistance. We need to develop new ways of addressing this emotional resistance, in a compassionate and generous way, if we want to grow the EA movement.

 

Something I have found works in our outreach efforts is to provide people interested in EA goals with emotional tools for addressing their internal emotional challenges. For instance, to address the guilt people experience over their previous giving, to let go of cached patterns, and to help people update their beliefs, it helps to use the CBT tool of reframing: encouraging people to distance their current self from their past self and to remember that they did not have this information about EA when they decided on their previous giving, making it okay to choose a new path now. Another approach I found helpful is to encourage people to think of themselves as starting at the ordinary human baseline and then orienting toward improvement, rather than seeing themselves as never able to achieve perfect rationality in their giving. To address guilt in particular, teaching non-judgment and self-compassion is really helpful. To help people connect emotionally with the hard data, we know what pulls at people's heartstrings: we should tell stories about the children saved from malaria, about the benefits people gained from GiveDirectly, and so on. Indeed, I found that telling stories and then supporting them with numbers and metrics works well. Likewise, it helps to have effective altruists share personal and moving stories of why they got into effective altruism in the first place and why they are passionate about it - stories that highlight their own strong feelings and go light on the data.

 

On an institutional level, I would suggest that EA groups focus more on being welcoming toward emotionally oriented thinkers. Perhaps they could assign specific mentors to new members, who could serve as guides for their intellectual and emotional development alike.

 

What are your thoughts on these ideas, and more broadly on strategies for overcoming emotional resistance to Effective Altruism? I'm also happy to discuss any collaboration on EA outreach; my email is gleb@intentionalinsights.org.

 

EDIT: Title edited based on comments

 

P.S. This article is part of the EA Marketing Resource Bank project led by Intentional Insights and the Local Effective Altruism Network, with support from The Life You Can Save.

Comments (21)



I'm one of those people who has trouble connecting with EA emotionally, even though I fully "get" it rationally. My field is cost-benefit analysis for public programs, so I fully understand the moral and statistical basis for giving to the mathematically "correct" charity. But I don't feel any particular personal connection to, say, Deworming the World, so I'm more apt to donate to something I do feel connected to.

In EA thinking, emotions and "warm fuzzy" feelings tend to be looked upon disparagingly. However, our emotions and passions are powerful and essential to our humanity, and I think that accomplishing what we want (driving more resources to the needy in the most effective way possible) requires understanding that we are humans, not GiveBots.

To me, one solution is to use the tools of behavioral psychology to encourage people to give more where we want. I'm talking about touching heartstrings, helping us see the actual people we are helping, and telling stories instead of just numbers.

Thanks for the post!

Sounds like we are thinking along the same lines.

I don't particularly object to the content of the post, but could you please consider rewriting the title?

"Overcoming emotional resistance" honestly sounds like something deeply unpleasant pick up artists write about coercing women into unwanted sex (https://en.m.wikipedia.org/wiki/Pickup_artist#Practices)

Thanks, appreciate the suggestion! I edited the title.

I was about to delete my post (thanks Gleb_T for the quick change of name) but noticed a downvote. Could that person come forward and explain why they thought my post was unhelpful?

I'd also like to know that. I think your point was right on and thanks for helping improve the title.

I'm so glad people are becoming more aware of this issue! Right now the only addition I can think of is the “You are already a tribe member” tactic by Tyler Alterman that I found here: https://docs.google.com/document/d/1vsQdWIcL1nWdTTdQtB4uH1f_rIjDo27-CwaZUnfqEG4/edit#

Nice, thanks for that idea!

Too many thoughts all jumbled up - I'll have to try to write more on this, but:

  1. Everyone is an emotional thinker. EAs have some strategies for avoiding the worst pitfalls, but it is still easy to fall into them, and many of us do; we have weaknesses as well, which harm our work.
  2. Different strategies are needed for raising the profile and funding of highly effective causes and evidence-based strategies than for creating more EAs - and we can do both.
  3. Some of these ideas sound good; some sound weird and creepy, as if we have all the answers and are concerned with manipulating people rather than being engaged in a dialogue where we can learn something valuable as well.
  4. Different approaches may work well or less well for different causes, and approaches that work for one may, in doing so, harm support for others.

Agreed on many of the points, except the weird and creepy - I'd like to understand more about that. More broadly, my point as I stated in the beginning of the piece was to open up a discussion, not give definitive answers. I'd like to hear many other people's thoughts on this.

Reading again, this was the bit I found weird/creepy: "For instance, to address the guilt people experience over their previous giving, to let go of cached patterns, and to help people update their beliefs, it helps to use the CBT tool of reframing: encouraging people to distance their current self from their past self and to remember that they did not have this information about EA when they decided on their previous giving, making it okay to choose a new path now. Another approach I found helpful is to encourage people to think of themselves as starting at the ordinary human baseline and then orienting toward improvement, rather than seeing themselves as never able to achieve perfect rationality in their giving."

But I actually don't think these ideas are bad; I just think the phrasing of them is off. The way you've written this makes it seem a bit like "oh, we are the enlightened ones, and here are our clever ways of manipulating you to join us" - but I appreciate that is not what you meant, which is much more about EA as a movement of people who want to "do good better" than about criticising people for not thinking the same way we do or manipulating them through guilt.

Thank you for clarifying what I actually meant to convey. I'll work on phrasing it more effectively in the future.

I very much welcome the opening of this discussion.

Many utilitarian EAs simultaneously claim that EA is "compatible" with most other forms of ethical thinking while also continuing to make their arguments very narrowly consequentialist.

I genuinely believe that most EA actions are actually required of people who subscribe to other ethical systems, and I try my best to adapt my language to the person I'm trying to convince based on what they care about.

One example: many left-leaning students talk a lot about "privilege". I tell them that if they are serious about finding it "problematic" that so many of us are overly privileged, the best thing they could do is give that privilege away!

Alternatively, people who care about justice are very receptive if you tell them that globalization means we now have reciprocal relationships with most of the world and that we elect governments that are utterly hypocritical on the issue of free trade, causing extreme poverty. Our riches often do in some sense come out of their poverty, and if one believes the global economy is in need of change they should refuse to submit to it by voting with their wallets as well as with their ballots.

Oh, I like that framing! Nice way of getting across to the audience that you are speaking to.

I absolutely agree; this is a crucial and ongoing challenge for EA. I am currently taking a course titled 'Civil Resistance and the Dynamics of Nonviolent Movements' online through the United States Institute of Peace. It offers a lot of takeaways on building movements that apply directly to EA: how to appeal to different audiences, how to scale up a movement (diversity of members is key, which requires a diverse range of activities people can participate in - for EA, not just analytical ones), and how to do strategic analysis. I'm thinking about the best ways to apply these through my chapter in Adelaide.

The link to the course is here; it's free for now. https://www.usipglobalcampus.org/training-overview/civilresistance/

Thank you for the resource, and glad you're doing this, Michael!

I'm new to EA, and my experience talking to people about it has been different from yours, Gleb: they're very pragmatic, and they ask an important question I don't have a good answer to.

Here's how my conversations usually go:

ME: I explain EA/Peter Singer.

THEM: "Yeah, but the ECONOMY! It would be bad for everyone, because the world economy is driven by consumption. If we stop that, it won't work."

ME: "When people get richer, they have more money to spend to buy things, which provides a larger market for the first world, and thus could improve the world economy, or at least dampen the effects of less consumption in the rich world."

THEM: "What about the people now who would lose their jobs? Like the factory worker who makes Ferraris, or the Starbucks coffee barista? If there were less frivolous spending, many of those people would be out of work."

At this point I get stuck. They make a fair point, don't they? If many people gave large amounts of their income to charity there would be some bad effects, undoubtedly - probably on the economy in the first world, at least temporarily, and on many people's livelihood in the first world. I have no doubt those negative effects would be much smaller than the positive effects that charity would have, but I don't have any proof.

Am I missing some logical counter-argument I could make here? Has an economist taken a stab at estimating the immediate and longer-term effects on the world economy if a segment of the first-world population, or all of it, gave large parts of their income to charity?

> If many people gave large amounts of their income to charity there would be some bad effects, undoubtedly - probably on the economy in the first world, at least temporarily, and on many people's livelihood in the first world.

The phrase you are looking for in the economics literature is 'pecuniary externality'. In general, there are good reasons not to care about them.

I'm not sure about an economist, but you can always make the counter-argument that spending on charity gets the economic wheels moving as well, and in a much better direction. For example, spending on AMF funds the production of malaria nets and their shipping overseas; a net then keeps a mosquito from biting a productive worker, so that worker doesn't get sick and lose work time, and so on.

Thanks, Gleb. Any suggestions for where else I could post, or who else I could ask? I'm sure someone's got to have put some numbers together!

Thanks! Ryan

Check out the Effective Altruism Facebook group.
