Aaron Gertler

I moderate the Forum, and I'm happy to review your posts before they're published! See here for instructions:

https://forum.effectivealtruism.org/posts/ZeXqBEvABvrdyvMzf/editing-available-for-ea-forum-drafts

I'm a full-time content writer at CEA. I started Yale's student EA group, and I've also volunteered for CFAR and MIRI. I spend a few hours a month advising a small, un-Googleable private foundation that makes EA-adjacent donations.

Before joining CEA, I was a tutor, a freelance writer, a tech support agent, and a music journalist. I blog, and keep a public list of my donations, at aarongertler.net.

Aaron Gertler's Comments

The Effects of Animal-Free Food Technology Awareness on Animal Farming Opposition

Thanks for taking the time to post a summary, even if the full article didn't make it to the Forum!

For reasons related to this study's findings, I was very happy when GFI published its "cultivated meat" announcement. As a phrase, it doesn't sound wholly natural, but I do think it would sell better with a general audience than most other ways of talking about animal-free food technology (AFFT). (That said, I haven't seen any actual studies to this effect.)

Choosing the Zero Point

This essay aligns with my experience in trying to share effective altruism with other people. While I think that a "moral obligation" framing gets closer to my personal reasons for being altruistic, that's almost never how I frame EA in conversation nowadays.

If you like this essay, I also strongly recommend "Excited Altruism".

The case for building more and better epistemic institutions in the effective altruism community

This post was awarded an EA Forum Prize; see the prize announcement for more details.

My notes on what I liked about the post, from the announcement:

This post contains a well-structured argument for addressing a problem that could be dragging down the overall impact of EA work across many different areas. You could summarize the main point in a way that makes it seem obvious (“EA should try to figure things out in a better way than it does now”), but in doing so, you’d be ignoring the details that make the post great:

  • Pointing out examples of things the community has done that pushed EA in the right direction (e.g. influential criticism, expert surveys) in order to show that we could do even more work along the same lines.
  • Comparing one reasonable proposal (better institutions) to other reasonable proposals (better norms, other types of institution, focusing on growth over institution-building) without arguing too vociferously in favor of the first proposal. I liked the language “I sketch a few considerations,” where some posts might have used “I show how X is superior to Y and Z.”

If you read this post, I also strongly recommend reading the comments! (This applies to the post above as well.)

Effective Animal Advocacy Nonprofit Roles Spot-Check

This post was awarded an EA Forum Prize; see the prize announcement for more details.

My notes on what I liked about the post, from the announcement:

Many people have strong opinions on the state of the EA job market, but it can be difficult to find enough data to support any particular viewpoint. I appreciate AAC’s efforts to chase down facts, and to present its methodology and results very clearly. I don’t have much to say about the style or structure of this post; it’s just clear and thorough, and I’d be happy to hear about other researchers using it as a template for presenting their own work. 

(One detail: I like that the “limitations” section also includes suggestions for further research. Posts that show how others can build on them seem likely to encourage further intellectual progress.)

Effective Altruism and Free Riding

This post was awarded an EA Forum Prize; see the prize announcement for more details.

My notes on what I liked about the post, from the announcement:

This post describes issues that could apply to nearly every kind of EA work, with clear negative consequences for everyone involved. I especially liked the problem statement in this passage:

The key intuition is that in an uncooperative setting each altruist will donate to causes based on their own value system without considering how much other altruists value those causes. This leads to underinvestment in causes which many different value systems place positive weight on (causes with positive externalities for other value systems) and overinvestment in causes which many value systems view negatively (causes with negative externalities).

The post supports this point with a well-structured argument. Elements I especially liked:

  • The use of tables to demonstrate a simple example of the problem
  • References to criticism of EA from people outside the movement (showing that “free-riding” isn’t just a potential issue, but may be influencing how people perceive EA right now)
  • References to relevant work already happening within the movement (so that readers have a sense for existing work they could support, rather than feeling like they’d have to start from scratch in order to address the problem)
  • The author starting their “What should we do about this?” section by noting that they weren’t sure whether “defecting in prisoner’s dilemmas” was actually a bad thing for the EA community to do. It’s really good to distinguish between “behavior that might look bad” and “behavior that is actually so harmful that we should stop it.”

A Qualitative Analysis of Value Drift in EA

This post was awarded an EA Forum Prize; see the prize announcement for more details.

My notes on what I liked about the post, from the announcement:

Value drift isn’t discussed often on the Forum, but I’d like to see that change. 

I remember meeting quite a few people when I started to learn about EA (in 2013), and then realizing later on that I hadn’t heard from some of them in years — even though they were highly aligned and interested in EA work when I met them. 

If we can figure out how to make that sort of thing happen less often, we’ll have a better chance of keeping the movement strong over the long haul.

Marisa’s piece doesn’t try to draw any strong conclusions — which makes sense, given the sample size and the exploratory nature of the research. I appreciated its beautiful formatting, and I also like how she:

  • References non-EA research on social movements. (This is something the community as a whole may not be doing enough of.)
  • Includes a set of direct quotes from interviewees. (Actual human speech offers nuance and detail that are hard to match with a summary of multiple answers.)
  • Offers future research directions for people who see this post and want to work on similar issues.

Biases in our estimates of Scale, Neglectedness and Solvability?

This post was awarded an EA Forum Prize; see the prize announcement for more details.

My notes on what I liked about the post, from the announcement:

Cause prioritization is still a young field, and it’s great to see someone come in and apply a simple, reasonable critique that may improve many different research projects in a concrete way. 

It’s also great to check the comments and realize that Michael edited the post after publishing to improve it further — a practice I’d like to see more of!

Aside from that, this is just a lot of solid math applied to an important subject, with implications for anyone who wants to work on prioritization research. If we want to be effective, we need strong epistemic norms, and avoiding biased estimates is a key part of that.

My personal cruxes for working on AI safety

This post was awarded an EA Forum Prize; see the prize announcement for more details.

My notes on what I liked about the post, from the announcement:

“I edited [the transcript] for style and clarity, and also to occasionally have me say smarter things than I actually said.”

The “enhanced transcript” format seems very promising for other Forum content, and I hope to see more people try it out!

As for this enhanced transcript: here, Buck reasons through a difficult problem using techniques we encourage — laying out his “cruxes,” or points that would lead him to change his mind if he came to believe they were false. This practice encourages discussion, since it makes it easier for people to figure out where their views differ from yours and which points are most important to discuss. (You can see this both in the Q&A section of the transcript and in comments on the post itself.)

I also really appreciated Buck’s introduction to the talk, where he suggested to listeners how they might best learn from his work, as well as his concluding summary at the end of the post. 

Finally, I’ll quote one of the commenters on the post:

I think the part I like the most, even more than the awesome deconstruction of arguments and their underlying hypotheses, is the sheer number of times you said "I don't know" or "I'm not sure" or "this might be false".

Doing good is as good as it ever was

This post was awarded an EA Forum Prize; see the prize announcement for more details.

My notes on what I liked about the post, from the announcement:

In this piece, Denise argues that many people involved with EA are unsatisfied because they’ve developed high expectations about how much good they’ll be able to do — expectations which, when unmet, lead to a lack of motivation.

While the scope of this problem isn’t clear to me, I do think the essay was beautifully written, and it struck a chord with many readers. There are several lines I expect to reference well into the future:

“Maybe there are other people who are able to have an even higher impact than you. But that doesn’t change the amount of good you can do.”

“Participating in the EA community should make you feel more motivated about the amount of good you are able to do, not less. If it makes you feel less motivated on balance, then the EA community is doing something fundamentally wrong and everybody might be better off somewhere else until this is fixed.”

EAF’s ballot initiative doubled Zurich’s development aid

This post was awarded an EA Forum Prize; see the prize announcement for more details.

My notes on what I liked about the post, from the announcement:

A remarkable project and a remarkable writeup. There were many things I appreciated about this piece, including:

  • The use of photos and other visual aids
  • Links to original sources (e.g. meeting minutes, government websites)
  • Suggested projects for people who want to replicate the ballot initiative elsewhere
  • A collection of media coverage so that readers (at least those who speak German) could see how non-EA sources viewed the initiative

While this post provides an interesting history of the Zurich initiative, I’m more excited by the way it hints at being a “recipe” for this form of EA success: I can imagine many other such initiatives passing in the next decade.
