Aaron Gertler

I moderate the Forum, and I'm happy to review your posts before they're published! See here for instructions:

https://forum.effectivealtruism.org/posts/ZeXqBEvABvrdyvMzf/editing-available-for-ea-forum-drafts

I'm a full-time content writer at CEA. I started Yale's student EA group, and I've also volunteered for CFAR and MIRI. I spend a few hours a month advising a small, un-Googleable private foundation that makes EA-adjacent donations.

Before joining CEA, I was a tutor, a freelance writer, a tech support agent, and a music journalist. I blog, and keep a public list of my donations, at aarongertler.net.

Aaron Gertler's Comments

Is it possible to change user name?

I second this. Right now, we review all new users when they join the Forum, including their names. We'd also want to review all name changes if users could make them, which isn't too different from users asking us for name changes (though infrastructure allowing that would be nice to have someday).

For anyone who wants an example of how a username change could cause a problem: If you try to use "Will MacAskilI" (with a capital "I" instead of the second lowercase "l") as a username, you'll be caught before your account is approved. So we're also wary of someone changing their name to that and then pretending to be Will for a bit.

Slate Star Codex, EA, and self-reflection

The original post makes highly damaging claims, but it at least provides links to the sources that led the author to make said claims, allowing for in-depth engagement from commenters. One could argue that it breaks certain Forum rules (e.g. around accuracy), but I wouldn't call it "spam". 

This comment breaks Forum rules itself; it is unclear and unnecessarily rude. I appreciate that you feel strongly about the post's claims, but please refrain from referring to posts as "spam" or "trolling" unless you are at least willing to explain why you believe they are spammy or insincere.

Another way this could have been phrased: 

"I don't think the OP uses appropriate context when making serious, damaging claims about the motives and beliefs of another writer. (IDEALLY, MORE DETAIL AS TO WHY YOU THINK THE OP IS WRONG.) I don't think engaging with this author will be very productive."

Keeping conversation civil takes more time and effort, but it's really important if we want the Forum to avoid many of the standard pitfalls of online discourse.

Impacts of rational fiction?

I shared some thoughts on this topic on a similar thread posted last year. An excerpt: 

"The key is that you need to show people using an EA mindset (thinking about consequences and counterfactuals, remembering that everyone is valuable), even if they aren't working on EA causes. Show people characters who do incredible things and invite them to contemplate the virtues of those characters, and you don't need to hammer too hard on the philosophy."

...so I suppose I'd say that (1) is important, but mostly when blended with (2). Rational fiction isn't uniquely instructive; instead, it takes lessons a reader could learn in many different ways and drives them deeper into the reader's identity than other media might be able to. There's an element of "I didn't know people could be like this" and an element of "this is the kind of person I want to be." 

I'd guess the second element is more important, since most people have heard about actual moral heroes outside of fiction, but they may not have a sense of how such people think about/experience the world.

My amateur method for translations

Thanks for posting this resource! Questions:

  • Do you find DeepL to be better than Google Translate and other options overall, or just for English/Portuguese translation?
  • Would you be open to adding a bit of material about how your group has used translation and/or how you think it might be useful to other people doing EA work? Right now, this post just reads like a handy guide to a skill; before I move it out of the "personal blog" category, I'd want to see some note on its relevance to EA.
Slate Star Codex, EA, and self-reflection

Anonymous submitters on the EA Forum have supported ideas like racial IQ differences.

I found many responses to that survey odious for various reasons and share your concerns in that regard. It makes me uneasy to think that friends/fellow movement members may have said some of those things.

However, the post you linked features a survey that was reposted in quite a few different places. I wouldn't necessarily consider people who filled it out to be "submitters to the EA Forum." (For example, some of them seem to detest the EA movement in general, such that I hope they don't spend much time here for their own sake.) That said, it's impossible to tell for sure.

If the New York Times were to run a similar survey, I'd guess that many respondents would express similar views. But I don't think that would say much, if anything, about the community of people who regularly read the Times. I expect that people in the EA community overwhelmingly support racial equality and abhor white supremacy.

(Additional context: 75% of EA Survey respondents are on the political left or center-left; roughly 3% are right or center-right. That seems to make the community more politically left-leaning than the Yale student body, though the comparison is inexact.)

EA Forum feature suggestion thread

This post (which links to the calendar and other resources) has been pinned on the Community page for weeks. I could also pin it on the main page, but I have a much higher bar for that, because it means everyone will see it every time they come to the Forum (and it doesn't really fit the Frontpage category).

DontDoxScottAlexander.com - A Petition

I'll add some context to help readers see why this could be relevant:

Scott Alexander has done a huge amount of writing about effective altruism, including several posts that many would regard as "classic" (or at least I do).

His most recent reader survey found that 13% of his readers self-identified as "effective altruists" (this is from his summary of the survey; I don't know the original text of the question). That's about 1,600 people.

aarongertler's Shortform

Excerpt from a Twitter thread about the Scott Alexander doxxing situation, but also about the power of online intellectual communities in general:

I found SlateStarCodex in 2015. Immediately afterwards, I got involved in some of the little online splinter communities that had developed after LessWrong started to disperse. I don't think it's exaggerating to say it saved my life.

I may have found my way on my own eventually, but the path was eased immensely by LW/SSC. In 2015 I was coming out of my only serious suicidal episode; I was in an unhappy marriage, in a town where I knew hardly anyone; I had failed out of my engineering program six months prior.

I had been peripherally aware of LW through a few fanfic pieces, and was directed to SSC via the LessWrong comments section.

It was the most intimidating community of people I had ever encountered -- I didn't think I could keep up. 

But eventually, I realized that not only was this the first group of people who made me feel like I had come *home,* but that it was also one of the most welcoming places I'd ever been (IRL or virtual).

I joined a Slack, joined "rationalist" tumblr, and made a few comments on LW and SSC. Within a few months, I had *friends*, some of whom I would eventually count among those I love the most.

This is a community that takes ideas seriously (even when it would be better for their sanity to disengage).

This is a community that thinks everyone who can engage with them in sincere good faith might have something useful to say.

This is a community that saw someone writing long, in-depth critiques on the material produced on or adjacent to LW/SSC...and decided that meant he was a friend. 

I had no prestigious credentials to speak of. I had no connections, no high-paying job; I was a college dropout. I had no particular expertise, a lower-class background than many of the people I met, and a Red-Tribe-Evangelical upbringing. All I had to do to make these new friends was show up and join the conversation.

[...]

The "weakness" of the LessWrong/SSC community is also its strength: putting up with people they disagree with far longer than they have to. Of course terrible people slip through. They do in every group -- ours are just significantly more verbose.

But this is a community full of people who mostly just want to get things *right,* become *better people,* and turn over every single rock they see in the process of finding ways to be more correct -- not every person and not all the time, but more than I've seen everywhere else.

The transhumanist background that runs through the history of LW/SSC also means that trans people are more accepted here than anywhere else I've seen, because part of that ideological influence is the belief that everyone should be able to have the body they want.

It is not by accident that this loosely-associated cluster of bloggers, weird nerds, and Twitter shitposters was ahead of the game on coronavirus. It's because they were watching, and thinking, and paying attention and listening to things that sound crazy... just in case.

There is a two-part lesson this community held to, even while the rest of the world was forgetting it:

  • You can't prohibit dissent
  • It's sometimes worth it to engage someone when they have icky-sounding ideas

It was unpopular six months ago to think COVID might be a big deal; the SSC/LW diaspora paid attention anyway.

You can refuse to hang out with someone at a party. You can tell your friends they suck. But you can't prohibit them from speaking *merely because their ideas make you uncomfortable*, and there is value in engaging with dissent, with ideas that are taboo in Current Year.

(I'm not leaving a link or username, as this person's Tweets are protected.)

How should we run the EA Forum Prize?

No user on the Forum has a "normal" vote worth more than 2 karma. 

(The full karma system is written out in this post.)
