Thank you to Emma Abele, Lennart Justen, Lara Thurnherr, Nikola Jurković, Thomas Woodside, Aris Richardson, Jemima Jones, Harry Taussig, Lucas Moore, and James Aung for inspiration and feedback. All mistakes are mine.
- I don’t think powerful emotions related to the desire to do good in the world (e.g. feelings of awe, outrage, or fear) surface enough in EA relative to how important they can be for some people.
- I argue that we should give such feelings of ‘emotional altruism’ more consideration.
- Possible benefits of a greater emphasis on emotional altruism include: Improving internal community health; Accelerating EA engagement; Reaffirming healthy motivations; Creating an inspiring work setting; Conveying a more sympathetic community image; and Instilling personal well-being/purpose.
- Ways to affirm emotional altruism include: More within-community discussion; Talking more about beneficiaries in outreach; Crafting experiences that allow powerful emotions to bubble up; More powerful writing; More videos and virtual reality content; New creative artwork; and Deliberately having standout conversations.
Effective altruism grapples with profound themes. Themes like suffering at unimaginable scales, futures that transcend our notions of what “good” could look like, and the end of everything we’ve ever known do not occupy the realm of ordinary thoughts. And for many, myself included, these themes do not inspire ordinary emotions. Rather, they are entangled with deep, visceral emotions – emotions that can cast us into profound reflection, that can bring us to tears, and that may underlie the very reason we are in EA.
The feelings I’m gesturing at are delicate, and so is the art of conveying their immensity. Unfortunately, I’m not here to offer them the poetic justice they deserve. Rather, I’m here to claim that these emotions don’t get enough attention, and discuss why I think giving them more attention could benefit the EA community.
What emotions am I talking about?
I’ll give you a sense of what emotions I’m talking about when I use terms like “visceral motivation” and “emotional altruism” by sharing the thoughts that can, at times, evoke these feelings for me. But a caveat first: I’m not referring to any one emotion. I’m thinking about a constellation of different emotions, all sharing an ability to inspire good. Feelings that come to mind for me include:
- A sense of awe: Holy shit, so much is at stake. The lives of billions – wait, trillions – of beings may hang in the balance this century. My care-o-meter simply doesn’t go high enough to feel these numbers anywhere near where they should register. But, every once in a while, I get a glimpse of the moral vastness such a number demarcates – and it’s a vastness I can’t overlook.
- A sense of horror: What if we fail? What if we, humanity, fall short and the lives of billions are ended and foreclosed? It’s not written in the stars that humanity goes on and flourishes— what if we live in a world where that doesn’t happen?
- A sense of guilt: who am I to have been born so lucky? At no expense of my own, I’m among the most well-off human beings that have ever existed. What did I do to deserve this, and how can I possibly repay my good fortune?
- A sense of loving-kindness: Think of the billions of conscious creatures walking this world right now, at the whims of their environment. What a peculiar condition we find ourselves in, craving to make sense of the world, yet so often victims to it. If I could relieve the suffering of all beings, I would. May all find peace and joy in their cosmic blip.
- A sense of outrage: How can so many of us act like everything is fine? Can’t you see all the people dying only because people didn’t give a shit?? Can’t you see all the animals being mass-produced only to be slaughtered for our normalized cuisine?? Open your eyes!
- A sense of gratitude: Oh, how lucky I am to be alive; to be empowered to do something with my aliveness. What an incredible opportunity I have to impact the lives of so many around me; what an incredible group of people I’ve surrounded myself with.
- A sense of yearning: The story of humanity could just be getting started. We could do so, so much more. Creating a profoundly good world is actually possible, and I so desperately want to work towards that world.
- A sense of siblinghood: We are all in this together. We've all been dealt a really unfair hand, and we have to make the most of it, together. The problems tower over us. Alone we cannot survive or thrive, but together we stand a chance.
- A sense of ancestral duty: We stand on the shoulders of giants. People have been fighting to make their own microcosms better for millennia; the torch has been passed to us and it flickers in the wind. We owe it to every generation before us to pass the torch to the next generation.
Some people may relate to a few of these emotions, some none, and some might wish to add their own. (Please share what feelings resonate for you in the comments!) I don’t think the specifics matter for the argument I’m making. Crucially, there exist raw, deeply personal emotions that underpin one’s desire to do good, and I think many EAs feel such emotions in the face of suffering or the vision of a brighter future. This visceral response to a cruel world or a vast potential is honed by rational thought to land at effective altruism, but for many the motivation is emotional at its core.
This isn’t the case for everyone: some people may arrive at EA following a series of rational arguments devoid of strong emotional appeals. And that’s fine. I do not wish to prescribe a “correct” motivation to do good in the world; I just want to speak to the emotional one. Even if it is not present at all times or accessible in all settings, I think powerful emotions form the bedrock of many people’s connections to effective altruism.
Yet the emotional weightiness of EA can feel muffled, glossed over at times. This can be at a personal level: in moments where life is asking a lot, it can be difficult to find those pure wells of motivation. That seems normal. But I also sense this at a community level: aren’t we here for the same reason? Don’t so many of us feel the same yearnings?
There are fair reasons to quell strong emotional motivations. At times we need to dilute our emotions with a pragmatism that allows us to make progress on the very thing our emotions are telling us is important. And talking about these emotions can be scary. They’re personal, and disclosing them can leave one feeling vulnerable.
But I think the resonance we can feel with others who share our hopes for a better future is worth stepping out of our comfort zone for. What's more, if we gloss over the emotional weightiness of what's at stake and how it makes us feel, I think we undersell the EA project. We risk giving outside observers a tainted picture of why we do what we do, and we risk not inviting in people who also feel that the world is fucked up.
We should be mindful of muffling the emotional grandeur embedded in the problems we tackle. I’m not claiming EA needs a seismic shift in the extent to which it relies on emotional vs. rational appeal, or that all members should gush about their most personal motivations for doing what they do. But, for those among us who draw a lot from emotional reservoirs, a little more emotionality could go a long way.
Possible benefits of a greater emphasis on emotional motivation
- Improving internal community health: For better or worse, EAs judge one another. It can be easy to see others behaving in a way that seems utterly unrelatable and question their motivations. Anecdotally, friends have shared how unsettling they’ve found some encounters with EA ‘rationalist’ events, in part because of the absence of a sense that they were there for the same reason: helping others. This uncertainty leaves an avoidable uneasiness. I became more sympathetic to parts of the EA community I found foreign after a late-night conversation with an AI safety field builder about free-spending. “We’re sprinting,” I remember being told, “because we think we might fail. There’s a world in which we all die because we held back– because we didn’t do enough.” That hit. That, I could relate to.
- Accelerating EA engagement: EA outreach content solidly conveys EA ideas, but I don’t think it sincerely conveys how one can experience EA as something more than a cool idea. I, and quite a few others I’ve talked to, needed something like 500 Million, But Not a Single More or On Caring to really grasp onto EA. If it weren’t for those, we might not have stuck with EA or our engagement would have been much slower.
- Conveying a more sympathetic community image: People will judge anyone claiming to ‘do the most good’ critically. As a community of mostly consequentialists, the EA community has an uphill climb to be likable. We’re only making it harder for people to like us if we don’t radiate the genuine altruistic source of our actions.
- Reaffirming healthy motivation: I think that if those of us with strong emotional motivations talked about them more openly, we could collectively steer ourselves toward healthier, less guilt-based motivations.
- Creating an inspiring work setting: I relish working with and under people who have reflected on why they’re in EA. It makes me want to work my ass off. It doesn’t need to be much either: just a nod that our feelings are pointing us somewhere similar. I also feel this when I read posts from members of the community I’ve never met. “On Future People, Looking Back at 21st Century Longtermists” made me want to hug Joseph Carlsmith – and then dedicate myself to my work.
- Instilling personal well-being/ purpose: Cultivated correctly, I think emotional motivations to do good can be the foundation for a deep sense of well-being and purpose.
Ways to affirm emotional motivation in effective altruism
Below are ways I think the effective altruism community could better tap into people’s emotional motivations to do good in this world. I’m excited about these, but I think they need to be approached delicately. Conversations, content, or experiences in the wrong settings can come across as weird or culty. Some of the below recommendations need more careful consideration than I’m giving them here.
More within-community discussion
I wish it were more common to talk about our EA motivations with fellow community members. Why do you dedicate yourself to this? Such conversations should be approached delicately, but, if you ever really resonate with someone, I’d encourage you to go out of your comfort zone and steer the conversation to the feel-y realm.
Example: Weekly work check-ins to share motivations and try to cultivate a shared sense of purpose. Some people may also draw a lot from ‘metta,’ or loving-kindness, meditations.
Talk about beneficiaries in outreach
When we talk about EA with people unfamiliar with it, we should remind them of the end goal: helping sentient beings. Give others something to grasp onto. “Why are you going to this EA Global conference?” “I want to help people in our generation and future generations, and connecting with this community seems like one of the best ways I can do so.” EA is instrumental in helping others, but I’m worried our appeals often ascribe it terminal value (not that EA isn’t something to celebrate!).
Example: When talking about biosecurity, paint a picture of the millions of people who could die awful deaths if a virus were released, not just the different levels at which one could intervene.
Craft experiences that allow for powerful emotions
It’s hard to overestimate the importance of setting and vibes when conveying emotional motivations. The same words in a fluorescent-lit classroom and a moonlit field evoke very different feelings. With this in mind, I think smaller, more intimate retreats can often offer the most transformative experiences. They allow for a change in setting and a sense of shared purpose that even EAG events can’t match. (EAG after-parties or late-night walks, however, are a different story.)
Example: Organize events (e.g. small retreats) that take aspiring EAs out of their default settings. Be deliberate in designing programming (e.g. late-night walks) that allows for these conversations, but don’t force it.
More powerful writing
Powerful writing has already left a mark on many EAs. I and many others cite works like On Caring as formative in our EA engagement. I’d love to see more people try to capture their own motivations (or challenges) in personal pieces, or even fiction.
Examples: more writing like On Caring, 500 Million, But Not a Single More, The value of a life, and Killing the Ants. I’d love to see more thoughtful pieces detailing why you are involved in EA. I’d also like to see more people experiment with poetry, fiction, and a variety of different framings of EA concepts (these need not explicitly reference EA). For example, I’d love to see more explorations of:
- Our psychological pulls away from rational compassion, like scope insensitivity, an aversion to prioritization, and a narrow moral circle. (I’ve created this interactive module on the topic, which I hope to write about more soon).
- Empathizing with the human condition across time – in the past, present, and future.
- The possibility that the entire future of humanity might actually hinge on the next few centuries.
- The novelty of the altruistic situation we find ourselves in during the 21st century. Never before has it been so easy to be aware of suffering at a global scale and to help at a global scale.
- How good the future could be.
- The possibility that humanity could actually, really fail. There’s no second try.
More videos and virtual reality content
For some people, I think the best video could be better than the best piece of writing for conveying the ‘why’ of EA. And I think the best virtual reality content could be even better. 
Examples: Videos like The Egg, The Last Human, and There’s No Rule That Says We’ll Make It – as well as novel virtual reality content – have an enormous power to convey EA concepts and their corresponding emotional weight.
New creative artwork
Cold take: art in EA seems underexplored. Another cold take: Art evokes strong feelings in some people. This post does a nice job sourcing and categorizing some existing art connected to EA. Creating good art that overlaps with EA seems difficult, and my naive recommendation would be to focus more on conveying certain EA-related ideas or mindsets (e.g. coordination problems).
Example: Try to create a comic or graphic novel that conveys EA ideas.
Deliberately having standout conversations
Having standout conversations about people’s personal emotions and motivations toward EA feels like a skill. The best such conversations seem to be: consensual (people want to engage), sheltered (people understand that nothing they say will be held against them), curious (people try to genuinely understand where the other is coming from), and loving (radiating goodwill).
Example: I’d be excited about people thinking deliberately about how they could improve on such conversations and bringing that energy to EA spaces (e.g., fellowships, board meetings).
Other ideas
- Contests: Running a contest to source motivational content would be exciting.
- Interviews with prominent EAs: I like the idea of transcribing (or recording) a series of interviews with prominent EAs about what inspires them to take EA so seriously. It would be great to do this for people across a range of cause areas, especially those that are not the most ostensibly ‘emotional’ (e.g. AI safety, meta-EA). I think thought leaders have a lot of power to shift the tone here.
- Research: Researching how other altruistic movements in the past have inspired more action on altruistic principles, or whether there’s relevant wisdom in the psychology literature. For example, how do you get people to bridge the gap from recognizing something as probably true to really acting on it?
Here’s a metaphor I find fitting for EA’s project:
When it comes to doing good, let emotions be your gas pedal, and careful reasoning your steering wheel. 
We’re good at not forgetting how unreliable our feelings are as guides for how to help other beings. But sometimes I worry we're also good at forgetting the feelings themselves. Don’t, I say. Let’s repurpose those feelings to do exactly what we want them to do.
We don’t need to impose emotional weightiness on concepts like existential risk, animal suffering, or whatever else we’re dedicating ourselves to. Emotional weightiness is already embedded in the parts of reality these concepts point to. We can cultivate emotional motivation that stays true to the pursuit of doing the most good.
This post is written from a place of, and about, a visceral desire to improve the world. In pure EA fashion, I’ve gone and intellectualized that feeling. But I hope the sentiment still comes across. Promoting EA should stay closely coupled with promoting – or tapping into – the desire to improve the world. When we talk about effective altruism, we should allow it to be profound.
This could be its own post. I think there are growing rifts in our community, and I wish we focused more on what we have in common. The AI safety/rationalist communities also care a lot, even if they could convey this better. (AI safety/rationalist communities, please convey this better.)
Thomas Woodside’s speech to Yale EA captures a similar sentiment: “I don’t care about Yale EA”
See this recent publication on The Psychology of (In)Effective Altruism by Lucius Caviola, Stefan Schubert, and Joshua Greene.
I’d love to see people think more about what virtual reality could do here – and then do it. I remember being awestruck by the grandeur VR could convey the first time I put on a headset and floated through the free trial of the International Space Station.
I forget where I found this and who to credit. Can anyone help me out?