Aaron Gertler

I moderate the Forum, and I'd be happy to review your next post.

I'm a full-time content writer at CEA. I started Yale's student EA group, and I've also volunteered for CFAR and MIRI. I spend a few hours a month advising a small, un-Googleable private foundation that makes EA-adjacent donations. I also play Magic: the Gathering on a semi-professional level and donate half my winnings (more than $50k in 2020) to charity.

Before joining CEA, I was a tutor, a freelance writer, a tech support agent, and a music journalist. I blog, and keep a public list of my donations, at aarongertler.net.




What are some examples of successful social change?

While you could call the French Revolution "successful", in the dictionary sense of "accomplishing an aim or purpose", you certainly don't have to. That's a reasonable distinction to draw.

As I said, I wouldn't have put the list together the same way, and I'd also much prefer to learn from movements and groups that actually achieved things I value.


That said, I've seen a lot of people in rationalist spaces discuss the rise of certain religions as interesting phenomena worthy of study, at least in piecemeal ways. Even if religious rituals are used to bond small social groups together around shared belief in something false, one can still consider whether it's possible to copy the bonding elements without importing the false beliefs. On a larger scale, can we learn from people who successfully lobbied for bad policies if we want to lobby for good policies?

(Another spin on this is to find examples of groups that started with worthy goals, then lost sight of the goals as they grew more capable of changing the world. What happens to groups like that, and what makes them different from groups that keep hold of their goals? How can we keep our own groups in the second category rather than the first?)

An animated introduction to longtermism (feat. Robert Miles)

On (1), all fair questions.

  • I'm running off of my own experience here (talking about longtermism with many dozens of people), rather than survey data. In that experience, I've seen most people round off "one second saves billions of lives" to "okay, I acknowledge that given these assumptions it's important to advance technology and reduce risk". But a few people seem to be (mentally) rolling their eyes a bit, or finding the gigantic number of zeroes a bit absurd.
    • I think discussions of those numbers will eventually come up if people are serious about exploring the topic, but for first-time exposure, my impression is that people care more about getting a general sense of what's at stake ("if humanity goes to the stars, there could be trillions of us, living happily for thousands of generations") than getting the exact EV of X-risk work based on the size of the Virgo Supercluster.
    • Put another way: If the choice is between "you can enable a flourishing life in expectation for one one-trillionth of a penny" and "you can enable a flourishing life in expectation for a few cents", and the latter argument seems less suspicious to enough people that it's 10% more convincing overall, I'd favor that argument. It's hard for me to picture someone being compelled by the first and not the second.
      • Though "it's hard for me to picture" definitely doesn't mean those people don't exist. I'm just not sure I've met them.
  • Emulations actually seem like a good addition under this paradigm — they're a neat way of indicating that the future will be good in strange ways the viewer hasn't considered, and they give me a "space utopia" feel that long strings of zeroes don't.
  • I agree that you want to avoid being boring, and that you want to be personable. My issue is that I think many people find gigantic numbers kind of boring compared to more vivid explanations of what the future could look like. Scope insensitivity is a real thing.
    • Of course, this video has some good space utopia imagery and is generally solid on that front. I just found the astronomical waste calculations and the "definition of expected value" to be slower parts of the video; I think something like Will MacAskill's charts of the human future might have been more engaging (e.g. his cartoon representation of how many people have lived so far vs. might live in the future, or the timeline passing through "everyone is well-off" and "entirely new form of art?").

To sum it up, I'm critiquing this small section of an (overall quite good) video based on my guess that it wasn't great for engagement (compared to other options*), rather than because I think it was unreasonable. The "not great for engagement" is some combination of "people sometimes think gigantic numbers are sketchy" and "people sometimes think gigantic numbers are boring", alongside "more conservative numbers make your point just as well". 

*Of course, this is easy for me to say as a random critic and not the person who had to write a script about a fairly technical paper!

An animated introduction to longtermism (feat. Robert Miles)

I'm not excited about the astronomical waste argument as part of introductory longtermist content. I think much smaller/more conservative numbers get exactly the same point across without causing as many people to judge the content as hyperbolic, or as an attempt to pull one over on them.

That said, I was impressed by the quality level of the video, and the level of engagement you've achieved already! If every EA-related video or channel had to be perfect right away, no one would ever produce videos, and that's much worse than having channels that are trying to produce good content and seeking feedback as they go.


One really easy win for videos like this: I counted 12 distinct characters (not counting Bostrom, Ord, or the monk character who shows up in several videos). All of them were animated in the style I think of as "white by default", including the citizens of 10,000 BC (which may predate white skin entirely).

This isn't some kind of horrible mistake, but it seems suboptimal for trying to reach a broad audience (which I figure is a big reason to make animated video content vs. other types of content).

My favorite example of effortless diversity (color, age, gender, etc.) in rationalist-adjacent cartoon sci-fi is Saturday Morning Breakfast Cereal.

What are the 'PlayPumps' of cause prioritisation?

Not sure where this falls on the "cause vs. intervention" spectrum, but protests against nuclear energy are a clear example of people doing something net-negative according to their own values (by reducing the amount of safe, clean energy that the world can produce).

I'm confused by the fact that you want to show "prioritization between causes" but also that you want to show effective and ineffective examples of people working in a single cause area. If you're just looking for cause areas where some ideas were overhyped and effective ideas didn't get enough support, there are a lot of examples out there. A few that I had time to look up:

What are some examples of successful social change?

Trying to imagine Lutter's reply, I'd say the uprising "brought many people together to fight for the same thing, to an unusual degree," even if it didn't succeed.

Personally, I wouldn't include it on a list like this, and I think better examples for the list will involve more concrete change. (Though perhaps the longer-term history of China would have looked somewhat different without the uprising?) 

What are some examples of successful social change?

(Not defending any particular example on Lutter's list, which is clearly an early-stage project and needs some filtering.)

The modern environmental movement seems to have changed the course of history, using policies and positions supported by a majority of the movement's supporters. 

Whether the net effect of all this change was actually good, by the movement's own lights, may be in doubt, but it seems to have done much of what it set out to do, to a greater extent than many similar movements. (Similarly, the French Revolution could be called "successful" even if many of its own leaders died in the process and the average French person was harmed more than helped.)

EA intro videos for kids

Rather than educating children about specific cause areas, it seems better to teach them about basic conditions of the world (which are less likely to change by the time they're adults). 

I was a voracious reader in my youth, and what prepared me to be interested in things like EA was the books I read about what life was like in other countries — that's how I learned how immensely fortunate I was to be living in the U.S., and how much a little extra money could mean to people in other countries.

I'm not sure what the best books are for accomplishing that, but I'd be looking for books to impart lessons like the following:

  • People in other places are just like us in the most important ways, but they might have a lot less money/food/access to education. And there are ways we can help them. (And we, your parents, are already doing those things.)
  • Science and technology have made life a lot better for many people. They have some bad consequences, too, but overall, they've been an overwhelmingly good thing. And you can be someone who makes these things, if you want. (Books about cool scientists and inventors probably aren't hard to find.)
  • (If this fits your lifestyle.) Most animals on farms have hard lives. We try not to eat them. It's okay if you have to eat meat (e.g. at a friend's house), and other people aren't bad people for eating meat, but you should consider being mostly vegetarian.
  • It's really important for people to help each other. Kindness is one of the most important virtues. It's especially important to help when other people don't want to help, or when they don't notice a problem.
How do other EAs keep themselves motivated?

Scattered ideas:

  1. It's easy to subscribe to updates from most EA-aligned charities, and that makes for a motivating way to "procrastinate". (In cases where the charity is a grantmaker like Open Phil, I might also click through to the websites of charities/projects they've funded and explore that work.)
  2. I find an excuse to watch Life in a Day, my favorite EA-aligned movie, every few years.
  3. I look for EA-related elements in creative work I enjoy, and save them to folders that I occasionally leaf through. (These tend to come from manga, though I've also saved screenshots from TV shows, passages from books, etc.)

I work with a small foundation that makes much bigger donations than I do, focused on global health. One way I've tried to help them "feel" their impact is by reframing their work — instead of "you made cash grants to 1000 people", it's "every day for the last year, three times per day, you gave someone one of the best days of their life. If you could meet them, they'd shake your hand or hug you, and they might be crying from happiness. If you checked back in on them a few weeks or months later, they'd have a new roof or a motorcycle or better food, and they'd still be very happy to see you." 

There's no shame in telling stories like this to yourself (even writing them down!). These are the details that actually make up reality; the numbers and pictures we can see online are mere substitutes. If you take action to make reality better, you deserve to experience as realistic a version of that change as you can manage.

[Feedback Request] Hypertext Fiction Piece on Existential Hope

I don't have time for detailed feedback, but I will say that I enjoyed the way this was presented! Flowcharts are underrated; I'd love to see more people take advantage of tools like Clearer Thinking's GuidedTrack to develop similar experiences.

david_reinstein's Shortform

I spent a few minutes looking at the impact feature, and I... will also go with "not satisfied". 

From their review of Village Enterprise:

Impact & Results scores of livelihood support programs are based on income generated relative to cost. Programs receive an Impact & Results score of 100 if they increase income for a beneficiary by more than $1.50 for every $1 spent and a score of 75 if income increases by more than $0.85 for every $1 spent. If a nonprofit reports impact but doesn't meet the threshold for cost-effectiveness, it earns a score of 50.

My charitable interpretation is that the "$0.85" number is meant to represent one year's income, and to imply a higher number over time (e.g. you have new skills or a new business that boosts your income for years to come).

But I also think it's plausible that "$0.85" is meant to refer to the total increase, such that you could score "75" by running a program that, in your own estimation, helps people less than just giving them money. 

(The "lowest score is 50" element puzzled me at first, but this page clarifies that you score "0" if CN can't find enough information to estimate your impact in the first place.)
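For concreteness, the quoted rule can be sketched as a tiny function. Only the thresholds (and the "0 if impact can't be estimated" case) come from Charity Navigator's own description; the function name and structure are my own illustration:

```python
def impact_results_score(income_gain_per_dollar, impact_reported=True):
    """Hypothetical sketch of the quoted livelihood-program scoring rule.

    income_gain_per_dollar: beneficiary income generated per $1 spent.
    Thresholds are taken from the quoted review; everything else
    (name, signature) is illustrative, not Charity Navigator's code.
    """
    if not impact_reported:
        return 0    # CN can't find enough information to estimate impact
    if income_gain_per_dollar > 1.50:
        return 100  # clears the $1.50-per-$1 bar
    if income_gain_per_dollar > 0.85:
        return 75   # clears the $0.85-per-$1 bar
    return 50       # reports impact, but misses the cost-effectiveness bar
```

On the less charitable reading above, a program that raises incomes by $0.90 per $1 spent would score 75 even though, by its own estimate, it helps people less than simply handing them the cash.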


Still, this is much better than the original CN setup, and I hope this is an early beta version with many improvements on the way.
