Aaron Gertler

16,684 karma · San Diego, CA, USA · Joined Oct 2014

Bio

I ran the Forum for three years. I'm no longer an active moderator, but I still provide advice to the team in some cases.

I'm a Communications Officer at Open Philanthropy. Before that, I worked at CEA, on the Forum and other projects. I also started Yale's student EA group, and I spend a few hours a month advising a small, un-Googleable private foundation that makes EA-adjacent donations.

Outside of EA, I play Magic: the Gathering on a semi-professional level and donate half my winnings (more than $50k in 2020) to charity.

Before my first job in EA, I was a tutor, a freelance writer, a tech support agent, and a music journalist. I blog, and keep a public list of my donations, at aarongertler.net.

Sequences (10)

Part 7: What Might We Be Missing?
The Farm Animal Welfare Newsletter
Part 8: Putting it into Practice
Part 6: Emerging Technologies
Part 5: Existential Risk
Part 4: Longtermism
Part 3: Expanding Our Compassion
Part 2: Differences in Impact
Part 1: The Effectiveness Mindset

Comments (1,805)

Topic Contributions (270)

It's OK not to go into AI (for students)

My guess is that the median person who filled out the EA survey isn't being consistent in this way. I expect that they could have a one-hour 1-1 with a top community-builder that makes them realize they could be doing something at least 10% better. This is a crux for me.

I agree with most of this. (I think that other people in EA usually think they're doing roughly the best thing for their skills/beliefs, but I don't think they're usually correct.)

I don't know about "top community builder", unless we tautologically define that as "person who's really good at giving career/trajectory advice". I think you could be great at building or running a group and also bad at giving advice. (There are several ways to be bad at giving advice — you might be ignorant of good options, bad at surfacing key features of a person's situation, bad at securing someone's trust, etc.)

Separately, I do feel a bit weird about making every conversation into a career advice conversation, but often this seems like the highest impact thing.

I'm thinking about conversations in the vein of an EAG speed meeting, where you're meeting a new person and learning about what they do for a few minutes. If someone comes to EAG and all their speed meetings turn into career advice with an overtone of "you're probably doing something wrong", that seems exhausting/dispiriting and unlikely to help (if they aren't looking for help). I've heard from a lot of people who had this experience at an event, and it often made them less interested in further engagement.

If I were going to have an hour-long, in-depth conversation with someone about their work, even if they weren't specifically asking for advice, I wouldn't be surprised if we eventually got into probing questions about how they made their choices (and I hope they'd challenge me about my choices, too!). But I wouldn't try to ask probing questions unprompted in a brief conversation unless someone said something that sounded very off-base to me.

It's OK not to go into AI (for students)

Upvoted for explaining your stance clearly, though I'm unclear on what you see as the further implications of:

Because there are good reasons to work on AI safety, you need to have a better reason not to.

This is true of many good things a person could do. Some people see AI safety as a special case because they think it's literally the thing that does the most good, but other people see other causes the same way — and I don't think we want to make any particular cause the default that everyone else has to justify deviating from.

(FWIW, I'm not sure you actually want AI to be this kind of default — you never say so — but that's the feeling I got from this comment.)

Note that there are many people who should not work on AI safety because they have >400x more traction on problems 400x smaller, or whatever.

When someone in EA tells me they work on X, my default assumption is that they think their (traction on X * assumed size of X) is higher than the same number would be for any other thing. Maybe I'm wrong, because they're in the process of retraining or got rejected from all the jobs in Y or something. But I don't see it as my job to make them explain to me why they did X instead of Y, unless they're asking me for career advice or something.
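A toy sketch of that heuristic (the numbers and the `expected_impact` function are invented for illustration, not drawn from the survey or the comment above): under a crude "impact is roughly traction times problem size" model, 400x the traction on a problem 400x smaller comes out about even.

```python
# Toy model of the heuristic "impact ~ traction on X * assumed size of X".
# All numbers here are invented for illustration.

def expected_impact(traction: float, problem_size: float) -> float:
    """Crude model: impact scales with traction times the assumed size of the problem."""
    return traction * problem_size

# Person A: ordinary traction on a very large problem.
# Person B: 400x the traction on a problem 400x smaller.
person_a = expected_impact(traction=1, problem_size=400)
person_b = expected_impact(traction=400, problem_size=1)

print(person_a, person_b)  # 400 400 -> roughly a wash under this crude model
```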

There may be exceptional cases where someone is working on something really unusual, but in those cases, I aim for a vibe of "curious and interested" rather than "expecting justification". At a recent San Diego meetup, I met a dentist and was interested to learn how he chose dentistry; as it turns out, his reasoning was excellent (and I learned a lot about the dental business).

Finding the arguments for AI risk unconvincing is not a reason to just not work on AI risk, because if the arguments are wrong, this implies lots of effort on alignment is wasted and we need to shift billions of dollars away from it (and if they have nonessential flaws this could change research directions within alignment), so you should write counterarguments up to allow the EA community to correctly allocate its resources.

This point carries over to global health, right? If someone finds EA strategy in that area unconvincing, do they need to justify why they aren't writing up their arguments?

In theory, maybe it applies more to global health, since the community spends much more money on global health than AI? (Possibly more effort, too, though I could see that going either way.)

It's OK not to go into AI (for students)

I've been running EA events in San Francisco every other month, and often I will meet a recent graduate, and as part of their introduction they will explain to me why they are or aren't working on AI stuff.

The other day, I had my first conversation ever where someone explained why they weren't sure about going into AI, unprompted. I said something like "no need to justify yourself, EA is a big tent", which felt like the obvious thing to say (given all my experiences in the movement, meeting people who work on a dozen different problems). If some groups have atmospheres where AI self-justification feels important, that seems bad.

(Though I think "explaining why you work on X" is very different from "explaining why you don't work on X"; the former seems fine/natural, the latter not so much.)

*****

Related: an old post of mine on why being world-class at some arbitrary thing could be more impactful than being just okay at a high-priority career.

That post is way too long, but in short, benefits to having a diverse set of world-class people in EA include:

  • Wide-ranging connections to many different groups of people (including skilled people who can contribute to valuable work and successful people who have strong networks/influence)
  • EA being a more interesting movement for the people in it + people who might join
Leaning into EA Disillusionment

When I see new people setting themselves up so they only spend time with other EAs, I feel worried.

When you see this happen, is it usually because EA fills up someone's limited social schedule (such that they regretfully have to miss other events), or because they actively drop other social things in favor of EA? I'm surprised to see the phrase "setting themselves up", because it implies the latter.

I also wonder how common this is. Even when I worked at CEA, it seemed like nearly all of my coworkers had active social lives/friend groups that weren't especially intertwined with EA. And none of us were in college (where I'd expect people to have much more active social lives).

Criticism of EA Criticism Contest

As an aside, I'm now curious about how well Eliezer's recent posts would have done in the contest — are those examples of content you'd expect to go unrewarded?

Criticism of EA Criticism Contest

Upvoted for:

  1. The interesting framework
  2. The choice of target (I think the contest is basically good and I don't share most of your critiques, but it's good that someone is red-teaming the red-teamers)
  3. The reminder of how important copyediting is (I think that some of the things that bothered you, like the unnecessary "just", would have been removed without complaint by some editors).

I hope this does well in the contest!

Most of the items on your "framework" list have been critiqued and debated on the Forum before, and I expect that almost any of them could inspire top contenders in the contest (the ones that seem toughest are "effectiveness" and "scope sensitivity", but that's only because I can't immediately picture what a winning critique of them would look like — which isn't the same thing as one being impossible).

A few titles of imaginary pieces that clearly seem like the kind of thing the contest is looking for:

  • We Owe The Future Nothing (addressing "Obligation")
  • EA Shouldn't Be Trying To Grow ("Evangelicalism")
  • EA Should Get Much Weirder ("Reputation")
  • EA Is Way Too Centralized ("Coordination")
  • We Need To Improve Existence Before We Worry About Existential Risk ("Existential Risk")
  • Most Grants Should Be Performance-Based, Not Application-Based ("Bureaucracy")
  • We Should Take Our Self-Professed Ideals More Seriously ("Grace")
  • Flying To EA Global Does More Harm Than Eating Six Metric Tons Of Cheese ("Veganism")

 

Question, if you have the time: What are titles for imaginary pieces that you think the criticism contest implicitly excludes, or would be very unlikely to reward based on the stated criteria?

Community Builders Spend Too Much Time Community Building

I was surprised to see that the word "class" appears nowhere in this post.

Once you've paid your tuition, college classes are free. And they teach a lot of useful skills if you pick the right ones. It's great to read articles and work on small projects and find other extracurricular ways to skill up. But I'd hope that anyone organizing an EA group is also choosing good classes to take.

Examples of classes I took in college that felt like "skilling up" (which, collectively, took much more time than founding Yale EA, even on a per-semester basis):

  • Several writing classes
  • A negotiation class (funnily enough, Ari Kagan was one of my classmates)
  • An entrepreneurship class focused on building and scoping a specific business idea
  • A class where I learned the R programming language
  • A class on marketing via behavioral economics

I also did a ton of extracurricular campus journalism, which has been exceedingly useful in my career despite being quite disconnected from EA-focused upskilling.

None of this was as time-efficient as targeted reading on EA topics would have been. But targeted reading doesn't come with certain benefits that classes offer (external project deadlines, free project review from experts, office hours with said experts). And because you have to take classes at college anyway, getting at least some value from them is a huge counterfactual win.

*****

It actually seems okay to me if most of organizers' "EA time" is spent on marketing-like activities, as long as they are learning and practicing useful skills in their classes and non-EA activities (and as long as group members can tell that their organizers have cool stuff going on outside of EA marketing).

Critiques of EA that I want to read

The fact that everyone in EA finds the work we do interesting and/or fun should be treated with more suspicion.

I know that "everyone" was an intentional exaggeration, but I'd be interested to see the actual baseline statistics on a question like "do you find EA content interesting, independent of its importance?"

Personally, I find "the work EA does" to be, on average... mildly interesting?

In college, even after I found EA, I was much more intellectually drawn to random topics in psychology and philosophy, as well as startup culture. When I read nonfiction books for fun, they are usually about psychology, business, gaming, or anthropology. Same goes for the Twitter feeds and blogs I follow. 

From what I've seen, a lot of people in EA have outside interests they enjoy somewhat more than the things they work on (even if the latter takes up much more of their time).

*****

Also, as often happens, I think that "EA culture" here may be describing "the culture of people who spend lots of time on EA Twitter or the Forum", rather than "the culture of people who spend a lot of their time on EA work". Members of the former group seem more likely to find their work interesting and/or fun; the people who feel more like I do probably spend their free time on other interests.

Announcing the launch of Open Phil's new website

Despite the real visual + other issues, I still think the website is very reasonable! 

The changes to make, including some to the grant page, are tiny relative to the overall size of the project. It seems very easy to find our grants and other content, and overall reception from key stakeholders has been highly positive. OP staff seem to like the changes, too (and we had tons of staff feedback at all points of the process).

If you have other specific feedback, I'm happy to hear it, but I don't know what e.g. "a little more focus and polish" means.

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

The 2019 'spike' you highlight doesn't represent higher overall spending — it's a quirk of how we record grants on the website.

Each program officer has an annual grantmaking "budget", which rolls over into the next year if it goes unspent. The CJR budget was a consistent ~$25 million/year from 2017 through 2021. If you subtract the Just Impact spin-out at the end of 2021, you'll see that the total grantmaking over that period matches the total budget.

So why does published grantmaking look higher in 2019?

The reason is that our published grants generally "frontload" payment amounts — if we're making three payments of $3 million in each of 2019, 2020, and 2021, that will appear as a $9 million grant published in 2019.
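To make that concrete, here's a minimal sketch (hypothetical numbers mirroring the example above, not our actual accounting system) of how frontloading makes published totals differ from spending against each year's budget:

```python
# Minimal sketch of the "frontloading" convention described above.
# The grant below is hypothetical: three $3M payments across 2019-2021,
# published as a single $9M grant in 2019.

from collections import defaultdict

# Each grant: (publication year, list of (payment year, amount)).
grants = [
    (2019, [(2019, 3_000_000), (2020, 3_000_000), (2021, 3_000_000)]),
]

published_totals = defaultdict(int)   # what the website shows, by publication year
budget_year_spend = defaultdict(int)  # what counts against each year's budget

for pub_year, payments in grants:
    published_totals[pub_year] += sum(amount for _, amount in payments)
    for pay_year, amount in payments:
        budget_year_spend[pay_year] += amount

print(dict(published_totals))   # {2019: 9000000}  -> looks like a 2019 "spike"
print(dict(budget_year_spend))  # {2019: 3000000, 2020: 3000000, 2021: 3000000}
```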

In the second half of 2019, the CJR team made a number of large, multi-year grants — but payments in future years still came out of their budget for those years, which is why the published totals look lower in 2020 and 2021 (minus Just Impact). Spending against the CJR budget in 2019 was $24 million — slightly under budget.

So the actual picture here is "CJR's budget was consistent from 2017-2021 until the spin-out", not "CJR's budget spiked in the second half of 2019".
