Aaron Gertler

I ran the Forum for three years. I'm no longer an active moderator, but I still provide advice to the team in some cases.

I'm a Communications Officer at Open Philanthropy. Before that, I worked at CEA, on the Forum and other projects. I also started Yale's student EA group, and I spend a few hours a month advising a small, un-Googleable private foundation that makes EA-adjacent donations.

Outside of EA, I play Magic: The Gathering at a semi-professional level and donate half my winnings (more than $50k in 2020) to charity.

Before my first job in EA, I was a tutor, a freelance writer, a tech support agent, and a music journalist. I blog, and keep a public list of my donations, at aarongertler.net.

Sequences

Part 1: The Effectiveness Mindset
Part 2: Differences in Impact
Part 3: Expanding Our Compassion
Part 4: Longtermism
Part 5: Existential Risk
Part 6: Emerging Technologies
Part 7: What Might We Be Missing?
Part 8: Putting it into Practice
The Farm Animal Welfare Newsletter


Comments

Open Philanthropy's Cause Exploration Prizes: $120k for written work on global health and wellbeing

The flower was licensed from this site.

The designer saw and appreciated this comment, but asked not to be named on the Forum.

"Big tent" effective altruism is very important (particularly right now)

I didn't get that message at all. If someone tells me they downvoted something I wrote, my default takeaway is "oh, I could have been more clear" or "huh, maybe I need to add something that was missing" — not "yikes, I shouldn't have written this". *

I read Max's comment as "I thought this wasn't written very clearly/got some things wrong", not "I think you shouldn't have written this at all". The latter is, to me, almost the definition of a strong downvote.

If someone sees a post they think (a) points to important issues, and (b) gets important things wrong, any of upvote/downvote/decline-to-vote seems reasonable to me.

 

*This is partly because I've stopped feeling very nervous about Forum posts after years of experience. I know plenty of people who do have the "yikes" reaction. But that's where the users' identities and relationship come into play — I'd feel somewhat differently had Max said the same thing to a new poster.

EA is more than longtermism

I'll read any reply to this and make sure CEA sees it, but I don't plan to respond further myself, as I'm no longer working on this project. 

 

Thanks for the response. I agree with some of your points and disagree with others. 

To preface this, I wouldn't make a claim like "the 3rd edition was representative for X definition of the word" or "I was satisfied with the Handbook when we published it" (I left CEA with 19 pages of notes on changes I was considering). There's plenty of good criticism that one could make of it, from almost any perspective.

It’s pretty clear that the curriculum places way more weight on the content it frames as “essential” than on content linked at the bottom of the “further reading” section.

I agree.

But much, maybe most, of the "essential" reading in the first three sections isn’t really about neartermist (or longtermist) causes. For instance, “We are in triage every second of every day” is about… triage.  I’d also put  “On Fringe Ideas”, “Moral Progress and Cause X”, “Can one person make a difference?”, “Radical Empathy”, and “Prospecting for Gold” in this bucket.

Many of these have ideas that can be applied to either perspective. But the actual things they discuss are mostly near-term causes. 

  • "On Fringe Ideas" focuses on wild animal welfare.
  • "We are in triage" ends with a discussion of global development (an area where the triage metaphor makes far more intuitive sense than it does for longtermist areas).
  • "Radical Empathy" is almost entirely focused on specific neartermist causes.
  • "Can one person make a difference" features three people who made a big difference — two doctors and Petrov. Long-term impact gets a brief shout-out at the end, but the impact of each person is measured by how many lives they saved in their own time (or through to the present day).

This is different from e.g. detailed pieces describing causes like malaria prevention or vitamin supplementation. I think that's a real gap in the Handbook, and worth addressing.

But it seems to me like anyone who starts the Handbook will get a very strong impression in those first three sections that EA cares a lot about near-term causes, helping people today, helping animals, and tackling measurable problems. That impression matters more to me than cause-specific knowledge (though again, some of that would still be nice!).

However, I may be biased here by my teaching experience. In the two introductory fellowships I've facilitated, participants who read these essays spent their first three weeks discussing almost exclusively near-term causes and examples.

By contrast, the essential reading in the “Longtermism”, “Existential Risk”, and “Emerging technologies” sections is all highly focused on longtermist causes/worldview; it’s all stuff like “Reducing global catastrophic biological risks”, “The case for reducing existential risk”, and “The case for strong longtermism”.

I agree that the reading in these sections is more focused. Nonetheless, I still feel like there's a decent balance, for reasons that aren't obvious from the content alone:

  • Most people have a better intuitive sense for neartermist causes and ideas. I found that longtermism (and AI specifically) required more explanation and discussion before people understood them, relative to the causes and ideas mentioned in the first three weeks. Population ethics alone took up most of a week.
  • "Longtermist" causes sometimes aren't. I still don't quite understand how we decided to add pandemic prevention to the "longtermist" bucket. When that issue came up, people were intensely interested and found the subject relative to their own lives/the lives of people they knew. 
    • I wouldn't be surprised if many people in EA (including people in my intro fellowships) saw many of Toby Ord's "policy and research ideas" as competitive with AMF just for saving people alive today.
    • I assume there are also people who would see AMF as competitive with many longtermist orgs in terms of improving the future, but I'd guess they aren't nearly as common.

“Pascal’s mugging” is relevant to, but not specific to, longtermism

I don't think I've seen Pascal's Mugging discussed in any non-longtermist context, unless you count actual religion. Do you have an example on hand where people have applied the idea to a neartermist cause?

"The case of the missing cause prioritization research” doesn’t criticize longtermist ideas per se,  it more argues that the shift toward prioritizing longtermism hasn’t been informed by significant amounts of relevant research. 

I agree. I wouldn't think of that piece as critical of longtermism.

As far as I can tell, no content in this whole section addresses the most frequent and intuitive criticism of longtermism I’ve heard (that it’s really really hard to influence the far future so we should be skeptical of our ability to do so).

I haven't gone back to check all the material, but I assume you're correct. I think it would be useful to add more content on this point.

This is another case where my experience as a facilitator warps my perspective; I think both of my groups discussed this, so it didn't occur to me that it wasn't an "official" topic.

Process-wise, I don’t think the use of test readers was an effective way of making sure the handbook was representative. Each test reader only saw a fraction of the content, so they’d be in no position to comment on the handbook as a whole. 

I agree. That wasn't the purpose of selecting test readers; I mentioned them only because some of them happened to make useful suggestions on this front.

While I’m glad you approached members of the animal and global development communities for feedback, I think the fact that they didn’t respond is itself a form of (negative) feedback (which I would guess reflects the skepticism Michael expressed that his feedback would be incorporated). 

I wrote to four people, two of whom (including Michael) sent useful feedback. The other two also responded; one said they were busy, the other seemed excited/interested but never wound up sending anything.

A 50% useful-response rate isn't bad, and makes me wish I'd sent more of those emails. My excuse is the dumb-but-true "I was busy, and this was one project among many".

(As an aside, if someone wanted to draft a near-term-focused version of the Handbook, I think they'd have a very good shot at getting a grant.) 

I’d feel better about the process if, for example, you’d posted in poverty and animal focused Facebook groups and offered to pay people (like the test readers were paid) to weigh in on whether the handbook represented their cause appropriately. 

I'd probably have asked "what else should we include?" rather than "is this current stuff good?", but I agree with this in spirit.

(As another aside, if you specifically have ideas for material you'd like to see included, I'd be happy to pass them along to CEA — or you could contact someone like Max or Lizka.)

"Big tent" effective altruism is very important (particularly right now)

This is a minor point in some ways but I think explicitly stating "I downvoted this post" can say quite a lot (especially when coming from someone with a senior position in the community).

I ran the Forum for 3+ years (and, caveat, worked with Max). This is a complicated question.

Something I've seen many times: A post or comment is downvoted, and the author writes a comment asking why people downvoted (often seeming pretty confused/dispirited). 

Some people really hate anonymous downvotes. I've heard multiple suggestions that we remove anonymity from votes, or require people to input a reason before downvoting (which is then presumably sent to the author), or just establish an informal culture where downvotes are expected to come with comments.

So I don't think Max was necessarily being impolite here, especially since he and Luke are colleagues who know each other well.  Instead, he was doing something that some people want a lot more of and other people don't want at all. This seems like a matter of competing access needs (different people wanting different things from a shared resource).

In the end, I think it's down to individual users to take their best guess at whether saying "I downvoted" or "I upvoted" would be helpful in a given case. And I'm still not sure whether having more such comments would be a net positive — probably depends on circumstance.

***

Max having a senior position in the community is also a complicated thing. On the one hand, there's a risk that anything he says will be taken very seriously and lead to reactions he wouldn't want. On the other hand, it seems good for leaders to share their honest opinions on public platforms (rather than doing everything via DM or deliberately softening their views).

There are still ways to write better or worse comments, but I thought Max's was reasonable given the balancing act he's trying to do (and the massive support Luke's post had gotten already — I'd feel differently if Max had been joining a pile-on or something).

EA is more than longtermism

While at CEA, I was asked to take the curriculum for the Intro Fellowship and turn it into the Handbook, and I made a variety of changes (though there have been other changes to the Fellowship and the Handbook since then, making it hard to track exactly what I changed). The Intro Fellowship curriculum and the Handbook were never identical.

I exchanged emails with Michael Plant and Sella Nevo, and reached out to several other people in the global development/animal welfare communities who didn't reply. I also had my version reviewed by a dozen test readers (at least three readers for each section), who provided additional feedback on all of the material. 

I incorporated many of the suggestions I received, though at this point I don't remember which came from Michael, Sella, or other readers. I also made many changes on my own.

 

It's reasonable to argue that I should have reached out to even more people, or incorporated more of the feedback I received. But I (and the other people who worked on this at CEA) were very aware of representativeness concerns. And I think the 3rd edition was a lot more balanced than the 2nd edition. I'd break down the sections as follows:

  • "The Effectiveness Mindset", "Differences in Impact", and "Expanding Our Compassion" are about EA philosophy with a near-term focus (most of the pieces use examples from near-term causes, and the "More to Explore" sections share a bunch of material specifically focused on anima welfare and global development).
  • "Longtermism" and "Existential Risk" are about longtermism and X-risk in general.
  • "Emerging Technologies" covers AI and biorisk specifically.
    • These topics get more specific detail than animal welfare and global development do if you look at the required reading alone. This is a real imbalance, but seems minor compared to the imbalance in the 2nd edition. For example, the 3rd edition doesn't set aside a large chunk of the only global health + development essay for "why you might not want to work in this area".
  • "What might we be missing?" covers a range of critical arguments, including many against longtermism!
    • In his comment, Michael Plant seems not to have noticed these longtermism critiques, which include "Pascal's Mugging" in the "Essentials" section and a bunch of other relevant material in the "More to Explore" section.
  • "Putting it into practice" is focused on career choice and links mostly to 80K resources, which does give it a longtermist tilt. But it also links to a bunch of resources on finding careers in neartermist spaces, and if someone wanted to work on e.g. global health, I think they'd still find much to value among those links.
    • I wouldn't be surprised if this section became much more balanced over time as more material becomes available from Probably Good (and other career orgs focused on specific areas).

In the end, you have three "neartermist" sections, four "longtermist" sections (if you count career choice), and one "neutral" section (critiques and counter-critiques that span the gamut of common focus areas).

Bad Omens in Current Community Building

This is a tricky question to answer, and there's some validity to your perspective here. 

I was speaking too broadly when I said there were "rare exceptions" when epistemics weren't the top consideration.

Imagine three people applying to jobs:

  • Alice: 3/5 friendliness, 3/5 productivity, 5/5 epistemics
  • Bob: 5/5 friendliness, 3/5 productivity, 3/5 epistemics
  • Carol: 3/5 friendliness, 5/5 productivity, 3/5 epistemics

I could imagine Bob beating Alice for a "build a new group" role (though I think many CB people would prefer Alice), because friendliness is so crucial. 

I could imagine Carol beating Alice for an ops role.

But if I were applying to a wide range of positions in EA and had to pick one trait to max out on my character sheet, I'd choose "epistemics" if my goal were to stand out in a bunch of different interview processes and end up with at least one job.

 

One complicating factor is that there are only a few plausible candidates (sometimes only one) for a given group leadership position. Maybe the people most likely to actually want those roles are the ones who are really sociable and gung-ho about EA, while the people who aren't as sociable (but have great epistemics) go into other positions. This state of affairs allows for "EA leaders love epistemics" and "group leaders stand out for other traits" at the same time.

 

Finally, you mentioned "familiarity" as a separate trait from epistemics, but I see them as conceptually similar when it comes to thinking about group leaders.

Common questions I see about group leaders include "could this person explain these topics in a nuanced way?" and "could this person successfully lead a deep, thoughtful discussion on these topics?" These and other similar questions involve familiarity, but also the ability to look at something from multiple angles, engage seriously with questions (rather than just reciting a canned answer), and do other "good epistemics" things.

Aaron Gertler's Shortform

Memories from starting a college group in 2014

In August 2014, I co-founded Yale EA (alongside Tammy Pham). Things have changed a lot in community-building since then, and I figured it would be good to record my memories of that time before they drift away completely.

If you read this and have questions, please ask!

 

Timeline

I was a senior in 2014, and I'd been talking to friends about EA for years by then. Enough of them were interested (or just nice) that I got a good group together for an initial meeting, and a few agreed to stick around and help me recruit at our activities fair. One or two of them read LessWrong, and aside from those, no one had heard of effective altruism.

The group wound up composed largely of a few seniors and a bigger group of freshmen (who then had to take over the next year — not easy!). We had 8-10 people at an average meeting.

Events we ran that first year included:

  • A dinner with Shelly Kagan, one of the best-known academics on campus (among the undergrad population). He's apparently gotten more interested in EA since then, but during the dinner, he seemed a bit bemused and was doing his best to poke holes in utilitarianism (and his best was very good, because he's Shelly Kagan).
  • A virtual talk from Rob Mather, head of AMF. Kelsey Piper was visiting from Stanford and came to the event; she was the first EA celebrity I'd met and I felt a bit star-struck.
  • A live talk from Julia Wise and Jeff Kaufman (my second and third EA celebrities).  They brought Lily, who was a young toddler at the time. I think that saying "there will be a baby!" drew nearly as many people as trying to explain who Jeff and Julia were. This was our biggest event, maybe 40 people.
  • A lunch with Mercy for Animals — only three other people showed up.
  • A dinner with Leah Libresco, an atheist blogger and CFAR instructor who converted to Catholicism before it was cool. This was a weird mix of EA folks and arch-conservatives, and she did a great job of conveying EA's ideas in a way the conservatives found convincing.
  • A mixer open to any member of a nonprofit group on campus. (I was hoping to recruit their altruistic members to do more effective things — this sounds more sinister in retrospect than it did at the time.)
    • We gained zero recruits that day, but — wonder of wonders — someone's roommate showed up for the free alcohol and then went on to lead the group for multiple years before working full-time on a bunch of meta jobs. This was probably the most impactful thing I did all year, and I didn't know until years later.
  • A bunch of giving games, at activities fairs and in random dining halls. Lots of mailing-list signups, reasonably effective, and sponsored by The Life You Can Save — this was the only non-Yale funding we got all year, and I was ecstatic to receive their $300.
    • One student walked up, took the proffered dollar, and then walked away. I was shook.

We also ran some projects, most of which failed entirely:

  • Trying to write an intro EA website for high school students (never finished)
  • Calling important CSR staff at major corporations to see if they'd consider working with EA charities. It's easy to get on the phone when you're a Yale student, but it turns out that "you should start funding a strange charity no one's ever heard of" is not a compelling pitch to people whose jobs are fundamentally about marketing.
  • Asking Dean Karlan, development econ legend, if he had ideas for impactful student projects.
    • "I do!"
    • Awesome! What is it? 
    • "Can you help me figure out how to sell 200,000 handmade bags from Ghana?"
    • Um... thanks?
      • We had those bags all year and never even tried to sell them, but I think Dean was just happy to have them gone. No idea where they wound up.
  • Paraphrased ideas that we never tried:
    • See if Off! insect repellant (or other mosquito-fighting companies) would be interested in partnering with the Against Malaria Foundation?
    • Come up with a Christian-y framing of EA, go to the Knights of Columbus headquarters [in New Haven], and see if they'll support top charities?
    • Benefit concert with the steel drum band? [Co-president Pham was a member.]
    • Live Below the Line event? [Dodged a bullet.]
    • Write EA memes! [Would have been fun, oh well.]
    • The full idea document is a fun EA time capsule.
  • The only projects that achieved anything concrete were two fundraisers — one for the holidays, and one in memory of Luchang Wang, an active member (and fantastic person) whose death cast a shadow over the second half of the year. We raised $10-15k for development charities, of which maybe $5k was counterfactual (lots came from our members).
  • Our last meeting of the year was focused on criticism — what the group (and especially me) didn't do well, and how to improve things. I don't remember anything beyond that.
  • The main thing we accomplished was becoming friends. My happiest YEA-related journal entries all involve weird conversations at dinner or dorm-room movie nights. By the end of that year, I'd become very confident that social bonding was a better group strategy than direct action.

 

What it was like to run a group in 2014: Random notes

  • I prepared to launch by talking to 3-4 leaders at other college groups, including Ben Kuhn, Peter Wildeford, and the head of a Princeton group that (I think) went defunct almost immediately. Ben and Peter were great, but we were all flying by the seats of our pants to some degree.
  • While I kind of sucked at leading, EA itself was ridiculously compelling. Just walking through the basic ideas drove tons of people to attend a meeting/event (though few returned).
  • Aside from the TLYCS grant and some Yale activity funding, I paid for everything out of pocket — but this was just occasional food and maybe a couple of train tickets. I never even considered running a retreat (way too expensive).
  • Google Docs was still new and exciting back then. We didn't have Airtable, Notion, or Slack.
  • I never mention CEA in my journal. I don't think I'd really heard of them while I was running the group, and I'm not sure they had group resources back then anyway.
  • Our first academic advisor was Thomas Pogge, an early EA-adjacent philosopher who melted from public view after a major sexual harassment case. I don't think he ever responded to our very awkward "we won't be keeping you as an adviser" email.

 

But mostly, it was really hard

 The current intro fellowships aren't perfect, and the funding debate is real/important, but oh god things are so much better for group organizers than they were in 2014.

I had no idea what I was doing. 

There were no reading lists, no fellowship curricula, no facilitator guides, no nothing. I had a Google doc full of links to favorite articles and sometimes I asked people to read them.

I remember being deeply anxious before every meeting, event, and email send, because I was improvising everything and barely knew what we were supposed to be doing (direct impact? Securing pledges? Talking about cool blogs?).

Lots of people came to one or two meetings, saw how chaotic things were, and never came back. (I smile a bit when I see people complaining that modern groups come off as too polished and professional — that's not great, but it beats the alternative.)

I looked at my journal to see if the anxious memories were exaggerated. They were not. Just reading them makes me anxious all over again.

But that only makes it sweeter that Yale's group is now thriving, and that EA has outgrown the "students flailing around at random" model of community growth.

Some potential lessons from Carrick’s Congressional bid

I'd recommend cross-posting your critiques of the "especially useful" post onto that post — that will make it easier for anyone who studies this campaign later (I expect many people will) to learn from you.

Some potential lessons from Carrick’s Congressional bid

Thanks for sharing all of this!

I'm curious about your fear that these comments would negatively affect Carrick's chances. What was the mechanism you expected? The possibility of reduced donations/volunteering from people on the Forum? The media picking up on critical comments?

If "reduced donations" were a factor, would you also be concerned about posting criticism of other causes you thought were important for the same reason?  I'm still working out what makes this campaign different from other causes (or maybe there really are similar issues across a bunch of causes). 

 

One thing that comes to mind is time-sensitivity: if you rethink your views on a different cause later, you can encourage more donations to make up for a previous reduction. If you rethink views on a political campaign after Election Day, it's too late. 

If that played a role, I can think of other situations that might exert the same pressure — for example, organizations running out of runway having a strong fundraising advantage if people are worried about dooming them. Not sure what to do about that, and would love to hear ideas (from anyone, this isn't specifically aimed at Michael).

Some potential lessons from Carrick’s Congressional bid

I think that the principal problem pointed out by the recent "Bad Omens" post was peer pressure towards conformity in ways that lead to people acting like jerks, and I think that we're seeing that play out here as well, but involving central people in EA orgs pushing the dynamics, rather than local EA groups. And that seems far more worrying.

What are examples of "pressure toward conformity" or "acting like jerks" that you saw among "central people in EA orgs"? Are you counting the people running the campaign as “central”? (I do agree with some of Matthew’s points there.)

I guess you could say that public support for Carrick felt like "pressure". But there are many things in EA that have lots of support and also lots of pushback (e.g. community-building strategies, 80K career advice). Lots of people are excited about higher funding levels in EA; lots of people are worried about it; vigorous discussion follows. 

Did something about the campaign make it feel different? 

*****

Habryka expressed concern that negative evidence on the campaign would be "systematically filtered out". This kind of claim is really hard to disprove. If you don't see strong criticism of X from an EA perspective, this could mean any of:

  1. People are critical, but self-censor for the sake of their reputation or "the greater good"
  2. People are critical, but no one took the time to write up a strong critical case
  3. People aren't critical because they defer too much to non-critical people
  4. People aren't critical because they thought carefully about X and found the pro-X arguments compelling

I think that (2) and (4) are more common, and (1) less common, than many other people seem to think. I do think that (3) is common, and I wish it were less so, but I don't see that as "pressure".

 

If someone had published a post over the last few months titled "The case against donating to the Flynn campaign", and it was reasonably well-written, I think it would have gotten a ton of karma and positive comments — just like this post or this post or this post.

Why did no one write this?

Well, the author would need (a) the time to write a post, (b) good arguments against donating, (c) a motive (improving community epistemics, preventing low-impact donations, getting karma), and (d) comfort with publishing the post (that is, not enough self-censorship to override (c)). 

I read Habryka as believing that there are (many?) people who fulfill (a), (b), and (c) but are stopped by (d). My best guess is that for many issues, including the Flynn campaign, no one fulfilled all of (a), (b), and (c), which left (d) irrelevant. 

I'm not sure how to figure out which of us is closer to the truth. But I will note that writing a pseudonymous post mostly gets around (d), and lots of criticism is published that way.

(If you are someone who was stopped by (d), let me know! That's really important evidence. I'm also curious why you didn't write your post under a pseudonym.)*

I also hope the red-teaming contest will help us figure this out, by providing more people with a reason to conduct and publish critical research. If some major topic gets no entries, that seems like evidence for (b) or (d), though with the election over I don't expect anyone to write about the Flynn campaign anyway.

 

*I've now heard from one person who said that (d) was one factor in why they didn't leave comments — a mix of not wanting to make other commenters angry and not wanting to create community drama (the drama would happen even with a pseudonym).

Given that this response came in soon after I made my comment, I've updated moderately toward the importance of (d), though I'm still unsure what fraction of (d) is about actual Forum comments vs. the author's reputation/relationships outside of the Forum.
