

“The Egg” takes ideas that are counterintuitive but seemingly true—such as impartiality—and makes them feel obvious. This allows people to feel the weight of these nuanced ideas, and be motivated by them, in a way that we rarely are. These kinds of stories can help people new to EA, and those already involved, become and stay committed to doing good. 

Epistemic Status

These are just my thoughts on what has motivated me within EA. I don't mean to use this as evidence that the principles of effective altruism are true, but to suggest that if they are true, stories and framings like “The Egg” are a great way to get into a mindset that makes the normally counterintuitive ideas of EA feel more obvious.

Effective Altruism is Inherently Unsatisfying

Effective altruists must face many inherently unsatisfying aspects of EA. Looking to do the most good, we look to find causes that are neglected and overlooked by others. As a result, the things we spend time on to help others can feel weird or unmeaningful, regardless of how much good we are attempting to do. 

But this is the same reason that EA exists: what feels good and what does good don't fully overlap. It never feels 1000 times better to help 1000 times more people. We are insensitive to the scale of problems, persuaded by stories more than facts, and care about those close to us more than those further away. This gives people who highly value reason and evidence a better chance to do good than those following our naturally flawed moral intuitions.

When I first learned about EA, I found it easy to agree with the idea of impartiality—that it’s no better to help someone closer to you in appearance, location, or time than someone further away. Although I understood impartiality rationally, learning about it demotivated me in some ways. While I recognized I had a greater chance to do good, I also realized that my best opportunities might be to help others I will never meet, or who may never live at the same time as me.

I understood impartiality, and it did guide some of my actions, such as donations and further involvement within EA, but it still didn’t move or motivate me emotionally.

We can't reprogram our intuitions to perfectly align what feels good and what does good, but stories allow us to temporarily change our mental framing and feel the weight of a new perspective—in this case the intuitiveness of impartiality. 

The Egg


If you haven't read “The Egg,” go read it. It takes 3 minutes and may stick with you for the rest of your life. If you’re still not going to, here is the most important part relating to this post (spoilers): 

“I’m every human being who ever lived?”

“Or who will ever live, yes.”

“I’m Abraham Lincoln?”

“And you’re John Wilkes Booth, too,” I added.

“I’m Hitler?” You said, appalled.

“And you’re the millions he killed.”

“I’m Jesus?”

“And you’re everyone who followed him.”

You fell silent.

“Every time you victimized someone,” I said, “you were victimizing yourself. Every act of kindness you’ve done, you’ve done to yourself. Every happy and sad moment ever experienced by any human was, or will be, experienced by you.”

“The Egg” takes everything initially unsatisfying and demotivating about impartiality, and turns it on its head to make it obvious and inspiring. It presents an emotionally salient method for appreciating the perspective of impartiality: Imagining that you will live every human life throughout all time. 

Applying impartiality to decisions can feel cold, removed, and too analytical for many people, especially the people I’ve interacted with who are new to EA. They rarely deny the premises behind it, but find the moral implications off-putting. Their intuitions don’t match up with the conclusions that treating others impartially leads to: Don't we have a responsibility to help those close to us?

From the perspective of all humanity as described in “The Egg,” helping others is the same as helping yourself. There are no more tradeoffs between those close to you and those further away—they are all you. Of course any suffering is equally bad regardless of who bears it, where they are, or how their suffering came about. These factors become irrelevant. You intuitively want to help yourself as much as you can, and can clearly see that you’ll probably want to prioritize removing the worst suffering from the world for the greatest number. This perspective makes it obvious that impartiality and attempting to measure how much good we are doing is actually compassionate. 

The story doesn’t make us feel anything close to our compassion or empathy for a close friend multiplied by billions, but it brings us closer to feeling emotionally connected to others far away in time and location. 

Moral obligation

Getting involved with EA can also bring up an unhelpful sense of guilt from failing to live up to our moral obligations or potential. With the immense privilege and relative wealth many of us have, there is almost no limit to what we could do to help others more. So, should we be doing more?

I think this question inspires a lot more resistance and guilt than actual action, and I don’t think people are consistently motivated by guilt or obligation to do good—guilt seems more likely to inspire a small one-off donation than a more important long-term change. 

But this sense of obligation or guilt can instead be framed as an opportunity to do good. 

We don't ask whether we are morally obligated to improve our own lives or health, we just ask how. And in “The Egg,” you are everyone, and there’s no need for moral obligation to decide to help yourself. Of course you want to help yourself more, and clearly there is no exact amount you “should” try to improve your life. Yet it’s still clear that helping yourself more is going in the right direction. From this perspective we can intuitively let go of guilt, and notice how we’d treat others if we truly did treat them as ourselves. From this framing, obligation isn't necessary to inspire motivation. 

Why This Might Matter

Facilitating Intro EA Discussions

In the EA Intro Fellowship I'm facilitating, one of our icebreakers was to have everyone decide between the following options:

          A.  Save 400 lives, with certainty.  

          B.  Take a 90% chance to save 500 lives, with a 10% chance of saving no lives.

Although you've probably seen a similar problem and recognize that the expected value of option B (450 lives) is higher than that of option A (400 lives), most people intuitively pick option A. Even after recognizing that B is the better expected option, the decision can still feel hard. Should we trust our intuition or reason? Is there something wrong about risking all those people’s lives on that 10% chance, even if it will usually turn out better? After calculating the expected value, many of the fellows in our group decided to stick with option A—it still felt more moral. Even as the facilitator, I was somewhat sympathetic to this view. It’s just more morally intuitive.

But let’s try using this perspective of all humanity as in “The Egg”. Imagine that you were actually one of the people in this group of 500 who will either die or be saved. Further, imagine that you will live every one of these 500 people's lives. Which option would you want? 

You'd want whichever option gives you the highest chance to survive—option B. In option A, you wouldn't know which 100 out of the 500 people would still die, so there’s a 20% chance you’d die in option A, but only a 10% chance in option B. From this perspective, the rational approach (option B) also becomes intuitive. Of course option B is worth the risk now that it is framed as a strictly higher chance for each person in this group to survive. 
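The arithmetic behind this reframing is simple enough to check directly. This is just a sketch of the numbers used above, assuming each of the 500 people is equally likely to be among those saved:

```python
# Option A: save exactly 400 of the 500 lives, with certainty.
ev_a = 400

# Option B: 90% chance all 500 are saved, 10% chance none are.
ev_b = 0.9 * 500 + 0.1 * 0  # expected lives saved = 450

# Per-person chance of dying, from the "you are one of the 500" view:
p_die_a = (500 - 400) / 500  # 100 of 500 die for certain -> 0.2
p_die_b = 0.1                # you die only in the 10% branch -> 0.1

print(ev_a, ev_b)        # 400 450.0
print(p_die_a, p_die_b)  # 0.2 0.1
```

So option B is better both in aggregate expected lives and for each individual's own survival odds, which is exactly why the perspective shift makes it feel intuitive.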

When imagining this scenario from only your individual perspective, there is a trade-off between what you know rationally to be best, what intuitively feels best, and how you will feel based on certain outcomes. Hitting that 10% chance of saving no one in option B would feel awful.

But these people’s lives are more important than how you’ll feel about making a decision regarding those lives. When you take this holistic perspective of each individual within the group, you can focus more directly on the actual impacts on people instead of how you'll feel about those impacts. What does good and what feels good more closely align, and option B becomes intuitive because it’s what each person within the group would prefer. 


Stories like “The Egg” motivate me in a way that reason or evidence alone doesn’t. Even as people committed to EA, we will only do as much good as we are motivated to do, and for most of us that motivation won’t come purely from an appeal to reason. It’s even less likely that reason alone will be enough to inspire others. 

Again, the fact that people are more persuaded by stories than facts is the reason people help individuals over the masses, and why EA should exist—to correct for that error. So there is a natural, even healthy, resistance within EA to taking advantage of emotional persuasion through stories. 

But stories don’t have to hide facts or nuance, and can actually be better at both conveying truth and inspiring than rational argument. “The Egg” does just this. I can recommend stories like “The Egg” to a much wider range of people than, say, The Precipice. I can get those people interested in these underlying ideas, and starting to understand them, without an initial commitment to doing good. I’ve had a lot less push-back when sharing “The Egg” with friends than when trying to introduce them to EA. It’s hard to say which will have more impact, but I’m fairly confident that “The Egg” will stick with them more. 

I think that this matters and is worth writing about because stories like “The Egg” are what’s motivated me in the long run within EA. When I was first introduced to EA I had a burst of energy and inspiration, so I was motivated to keep learning about it regardless of any extrinsically motivating factors. As I fell out of my EA honeymoon phase, I needed more ways to stay motivated; knowing and believing in the principles of EA wasn’t enough. Other stories that kept me motivated were the anecdotes about close calls in nuclear war and accidentally re-released pandemics, Ted Chiang’s short stories, and Max Tegmark’s prologue in Life 3.0.

Stories are not necessarily a particularly effective way to promote positive value change or growth within EA. But more stories promoting these positive values could be an important component of a world that has large numbers of people who subscribe to them. 

Stories are a possible path for letting people less familiar with these counterintuitive ideas feel the strength of their implications, while helping those already involved in EA to stay committed. 

Further Questions

These are some questions related to this topic that could be valuable to explore, and that I’d love to hear any of your thoughts on in the comments. 

- Does a story such as “The Egg” have a place in an EA Intro Fellowship/Course?

- Is some form of sharing stories an effective path to positive values change? 

- Can we systematically try to change the stories we tell ourselves, and therefore positively shift our culture’s moral intuitions?

- How can stories convey importance better than facts? When should we take advantage of that?

- Ask yourself: Is there a story here that will help people frame this correctly and intuitively?





Comments

You might be familiar with Bostrom's Fable of the Dragon Tyrant: https://www.nickbostrom.com/fable/dragon.html

And of course, Yudkowsky's fiction, while not exactly EA, was inspiring to many people.

In some ways, the EA creed requires being against empathy in an important way. We can't just care for those close to us, or those with sympathetic stories. But of course that kind of impartiality is also a story. So at the very least, fiction is useful as a kind of reverse mind-control or intuition pump.

For what it's worth, in this particular instance, I don't find "impartiality" to be a useful source of emotional motivation. Working on animal welfare for example, you might find it more helpful to develop selective empathy post-hoc.

That sounds silly, but it's basically just the reverse of what people typically do. Normally we form emotional judgements and then rationalize them after the fact; there's no reason you can't do the opposite.
