All of Aaron Gertler's Comments + Replies

Critiques of EA that I want to read

The fact that everyone in EA finds the work we do interesting and/or fun should be treated with more suspicion.

I know that "everyone" was an intentional exaggeration, but I'd be interested to see the actual baseline statistics on a question like "do you find EA content interesting, independent of its importance?"

Personally, I find "the work EA does" to be, on average... mildly interesting?

In college, even after I found EA, I was much more intellectually drawn to random topics in psychology and philosophy, as well as startup culture. When I read nonfiction ... (read more)

the culture of people who spend lots of time on EA Twitter or the Forum

there's an EA Twitter?

I think I agree with everything here, though I don't think the line is exactly people who spend lots of time on EA Twitter (I can think of several people who are pretty deep into EA research and don't use Twitter/aren't avid readers of the Forum). Maybe something like, people whose primary interest is research into EA topics? But it definitely isn't everyone, or the majority of people into EA.

quinn · 8d · 7 points
Yeah I'd be figuring out homotopy type theory and figuring out personal curiosities like pre-agriculture life or life in early cities, maybe also writing games. That's probably 15% of my list of things I'd do if it wasn't for all those pesky suffering lives or that annoying crap about the end of the world.
Announcing the launch of Open Phil's new website

Despite the real visual + other issues, I still think the website is very reasonable! 

The changes to make, including some to the grant page, are tiny relative to the overall size of the project. It seems very easy to find our grants and other content, and overall reception from key stakeholders has been highly positive. OP staff seem to like the changes, too (and we had tons of staff feedback at all points of the process).

If you have other specific feedback, I'm happy to hear it, but I don't know what e.g. "a little more focus and polish" means.

Charles He · 8d · 4 points
TL;DR: I spent more time looking over the website (particularly on mobile) and I think I'm mostly wrong/bad in my comment above. There might still be some value in a redo, and I guess it is 30% likely to be valuable. I am not a web designer, but I've interacted with several in the past. I guess my comments below are about "50% true".

Why I changed my mind from my message above:
* I was in the mindset of my initial impression and Habryka's comment, which seemed quite negative. I tried the website again, on mobile. The website "just works".
* The website seems much, much faster than I remember; it seems like someone tuned it or put some effort into fixing this.
* Your message that the key stakeholders like it is a big update.

Why I think there could be a redo:
* For redoing the grants page, I think that redesign work can be deceptively large and might involve touching many other places on the site (a similar filter menu appears on other pages).
* In some similar-seeming projects, an entire redo ultimately occurs, even when the original changes seemed small. This is because of the dependencies or costs of trying to accommodate existing content.
* The changes might be minor in some sense, but I think this still needs a very experienced designer to go over the details.
* There is some chance (30%) this assistance can be expensive or difficult to arrange, compared to just getting a new website from scratch.
* There are patterns of design being used that are different than ones I see (see comments below).

In some sense, Open Phil is a major expression of the heart and machinery of EA. So having a highly polished page that is highly professional is valuable. I think some viewers of the site will be really picky, especially newcomers and some kinds of talent (experienced professors). They might be judgmental and take on impressions, even if th
A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

The 2019 'spike' you highlight doesn't represent higher overall spending — it's a quirk of how we record grants on the website.

Each program officer has an annual grantmaking "budget", which rolls over into the next year if it goes unspent. The CJR budget was a consistent ~$25 million/year from 2017 through 2021. If you subtract the Just Impact spin-out at the end of 2021, you'll see that the total grantmaking over that period matches the total budget.

So why does published grantmaking look higher in 2019?

The reason is that our published grants generally "fr... (read more)

So this doesn't really dissolve my curiosity.

In dialog form, because otherwise this would have been a really long paragraph:

NS: I think that the spike in funding in 2019, right after the GiveWell's Top Charities Are (Increasingly) Hard to Beat blogpost, is suspicious.

AG: Ah, but it's not higher spending. Because of our accounting practices, it's rather an increase in future funding commitments. So your chart isn't about "spending"; it's about "locked-in spending commitments". And in fact, in the next few years, spending-as-recorded goes down because the lock... (read more)

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

(Writing from OP’s point of view here.)

We appreciate that Nuño reached out about an earlier draft of this piece and incorporated some of our feedback. Though we disagree with a number of his points, we welcome constructive criticism of our work and hope to see more of it.

We’ve left a few comments below.

*****

The importance of managed exits

We deliberately chose to spin off our CJR grantmaking in a careful, managed way. As a funder, we want to commit to the areas we enter and avoid sudden exits. This approach:

  1. Helps grantees feel comfortable starting and scali
... (read more)

If this dynamic leads you to put less “trust” in our decisions, I think that’s a good thing!


I will push back a bit on this as well. I think it's very healthy for the community to be skeptical of Open Philanthropy's reasoning ability, and to be vigilant about trying to point out errors.

On the other hand, I don't think it's great if we have a dynamic where the community is skeptical of Open Philanthropy's intentions. Basically, there's a big difference between "OP made a mistake because they over/underrated X" and "OP made a mistake because they were politically or PR motivated and intentionally made sub-optimal grants."

 

brb243 · 11d · 8 points
Did you also think [https://www.openphilanthropy.org/research/expert-philanthropy-vs-broad-philanthropy/] that breadth of cause exploration is important? It seems [https://www.openphilanthropy.org/research/update-on-open-philanthropy-project/] that you were conducting shallow and medium-depth investigations since late 2014. So, if there were some suboptimal commitments early on, these should have been shown up by alternatives that the staff would probably be excited about, since I assume that everyone aims for high impact, given specific expertise. So, it would depend on the nature of the commitments that earlier decisions created: if these were to create high impact within one's expertise, then that should be great, even if the expertise is US criminal justice reform, specifically.[1] If multiple such focused individuals exchange perspectives, a set of complementary[2] interventions that covers a wide cause landscape emerges.

If you think that not trusting you is good, because you are liable to certain suboptimal mechanisms established early on, then are you acknowledging that your recommendations are suboptimal? Where would you suggest that impact-focused donors in EA look? Are you sure that the counterfactual impact is positive, or more positive without your 'direct oversight'? For example, it could be that Just Impact donors would have otherwise donated to crime prevention abroad,[3] if another organization influenced them before they learned about Just Impact, which solicits a commitment. Or, it could be that US CJR donors would not have donated to other effective causes were they not first introduced to effective giving by Just Impact.

Further, do you think that Just Impact can take less advantage of communication with experts in other OPP cause areas (which could create important leverages) when it is an independent organization?

The importance of managed exits

So one of the things I'm still confused about is having two spikes in funding, one in 2019 and the other in 2021, both of which can be interpreted as parting grants:

[charts omitted from this excerpt]

So OP gave half of the funding to criminal justice reform ($100M out of $200M) after writing GiveWell’s Top Charities Are (Increasingly) Hard to Beat, and this makes me less likely to think about this in terms of exit grant and more in terms of, idk, some sort of nefariousness/shenanigans.

Where can I learn about how DALYs are calculated?

This is excellent, thanks!

These two papers, in particular, were what I was looking for. The corresponding information on QALYs was also great.

(For future readers of my post, the relevant info is under the "descriptive system" and "valuation methods" subheadings in Derek's post.)

Where can I learn about how DALYs are calculated?

Thanks! The correlation graphs were helpful to see, though I'm sad about the muddled results from the graph in the updated section.

Announcing the launch of Open Phil's new website

This is very good feedback — I'll look into making that change.

Announcing the launch of Open Phil's new website

Thanks for all of this feedback! Lots of good points to consider moving forward, and exactly the kind of thing I hoped to get from this post.

This website was a weird project — passed around between owners and developers over a period of ~2 years. I think there was a good amount of usability testing before my time, but I'm not sure how much of that was holistic and focused on the final design (vs. focused on specific elements). I agree with most of your points myself and also trust your experience in this area.

Charles He · 11d · 2 points
Uh... So I'm just going to say that, at this point, it seems like Open Phil should just consider a wholly new redesign: basically, make a new website.

The considerations for a "cost benefit analysis" of making a new website:
* Maybe a lifespan of 5 years?
* The time opportunity cost or value to current users.
* As one of many valuable things, the grant page is a big deal. It seems like a good design on just this feature would have a lot of value to many people.
* A nice "PWA feel" would be good for the grant page and others (I don't know that much about web design, but the idea is something like the UX of this [https://aisafetyideas.com/?sort=likes], which has instant response to user input).
* There are many other theories of change, like outreach or communicating to new donors or EAs, that would benefit from a good web page.
* I think each cause area could get a little more focus and polish (in addition to the polish described by Habryka, which seems to be a given).
* This requires attention from a dedicated professional or team.
* A good website is just good for morale of employees and EAs?

I think the budget justified by the above is large. Obviously, time cost or opportunity cost is a consideration. So, like, maybe they can make Aaron Gertler like a super project manager ("Director Gertler"? "Executive Vice President Gertler"?) and give him a big budget. Then he can get bids, hire an agency, get a service agreement, etc. This work isn't trivial, but presumably this should be possible for some up-front effort without further encumbering him too much.
Announcing the launch of Open Phil's new website

A couple of reports had their footnotes get jumbled — a fix is in progress. Thanks for the note!

Announcing the launch of Open Phil's new website

Thanks for this feedback. The horizontal scroll is a matter of having long email addresses on those pages, and I'll clean that up after checking with page owners.

Agree with info density dropping on the grants page — I think there's an easy improvement or two to be made here (e.g. removing the "Learn More" arrow), which I'll be aiming to make as the new site owner (with input from others at OP).

Where can I learn about how DALYs are calculated?

Thanks for the link! I was aware of the most recent study, but you prompted me to dig deep and see what they said about their survey methodology. 

The most relevant bits I found were sections 4.8 and 4.8.1 in this PDF, which describe multiple surveys done across a bunch of countries. 

I'm still not sure where to find actual response counts by country or demographic data on respondents — it's easy to find tons of data on how different health issues are ranked and how common they are, but not to find a full "factory tour" of how the estimates were pu... (read more)

brb243 · 17d · 1 point
Yes, for the YLL estimates they combined different datasets to find accurate causes of death disaggregated by age, sex, location, and year. There should be little bias, since the data is objective and 'cleaned' using relevant expert knowledge. The authors:
* Used vital registration (VR)[1] data and combined it with other sources if it was incomplete (2.2.1, p. 22 of the PDF [https://www.thelancet.com/cms/10.1016/S0140-6736(20)30925-9/attachment/deb36c39-0e91-4057-9594-cc60654cf57f/mmc1.pdf])[2]
* Disaggregated the data by "age, sex, location, year GBD cause" (p. 32 of the PDF), made various adjustments for mis-diagnoses and mis-classifications, noise, non-representative data, and shocks, and distributed the cause-of-death data where it made most sense to them, using different complex modeling methods (Section 2 of the PDF)
* Calculated YLL by summing the products of "estimated deaths by the standard life expectancy at age of death"[3]

For the YLD estimates, where subjectivity can have a larger influence on the results, the authors also compiled and cleaned data, then estimated incidence[4] and prevalence,[5] then severity, using disability weights (DWs) (Section 4 intro, p. 435 of the PDF):
* Used hospital visit data (disaggregated by "location, age group, year, and sex", p. 438) to get incidence and prevalence of diseases/disabilities. Comorbidity correction used a US dataset.
* 140 non-fatal causes were modeled (of which 11 (79–89) relate to mental health diagnoses) (pp. 478–482).
* For each of the causes, at a few different severity levels, sequelae were specified.[6]
* Disability weights were taken from a database (GBD 2019) and matched with the sequelae.
* [Section 4.8.1] "For GBD 2010[7] [disability weights] focused on measuring health loss rather than welfare loss" (p. 472). Data was collected in 5 countri
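The arithmetic described in the comment above (YLL from deaths and remaining life expectancy, YLD from prevalence and disability weights, DALY as their sum) can be sketched in a few lines. This is a minimal illustration with made-up numbers — not GBD data, and it omits all of the disaggregation, adjustment, and modeling steps the comment describes:

```python
def yll(deaths: float, life_expectancy_at_death: float) -> float:
    """Years of Life Lost for one cause/age group:
    deaths x standard life expectancy at the age of death."""
    return deaths * life_expectancy_at_death

def yld(prevalence: float, disability_weight: float) -> float:
    """Years Lived with Disability for one sequela:
    prevalent cases x disability weight (0 = full health, 1 = death)."""
    return prevalence * disability_weight

# Hypothetical cohort: 100 deaths at an age with 30 expected years remaining,
# plus 5,000 prevalent cases of a sequela with disability weight 0.05.
total_daly = yll(100, 30) + yld(5000, 0.05)
print(total_daly)  # 3250.0
```

The real GBD pipeline estimates these inputs per cause, age, sex, location, and year, and corrects for comorbidity before summing.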
lukeprog · 17d · 4 points
+1 to the question, I tried to figure this out a couple years ago and all the footnotes and citations kept bottoming out without much information having been provided.
Announcing the launch of Open Phil's new website

The license still applies! We'll have it back up on the footer soon.

Little (& effective) altruism

This was a nice little post!

One of the biggest draws to the EA community for me — and something that's kept me involved — is how much small-scale altruism goes on here. Unsurprisingly, a movement founded on practical altruism draws a lot of people who enjoy helping and actually care about providing good help. 

This manifests in a bunch of ways. Two that come to mind: EA Global participants swarming me to help carry heavy conference items through a shopping mall when I was at CEA, and a bunch of cases where someone in the community encountered a persona... (read more)

Parmest Roy · 25d · 1 point
Thank you for the supporting remarks! Glad you enjoyed the post.
"Big tent" effective altruism is very important (particularly right now)

I don't share your view about what a downvote means.

What does a downvote mean to you? If it means "you shouldn't have written this", what does a strong downvote mean to you? The same thing, but with more emphasis?

It'd be interesting to have some stats on how people on the forum interpret it.

Why not create a poll? I would, but I'm not sure exactly which question you'd want asked.

Most(?) readers won't know who either of them is, not to mention their relationship.

Which brings up another question — to what extent should a comment be written for an author vs. t... (read more)

Personally, I primarily downvote posts/comments where I generally think "reading this post/comment will on average make forum readers be worse at thinking about this problem than if they didn't read this post/comment, assuming that the time spent reading this post/comment is free."

I basically never strong downvote posts unless it's obvious spam or otherwise an extremely bad offender in the "worsens thinking" direction. 

Guy Raveh · 23d · 9 points
It's been over a week, so I guess I should answer even if I don't have time for a longer reply.

I think so, but I'm not very confident. I don't think private conversations can exist on a public platform. If it's not a DM, there's always an audience, and in most contexts, I'd expect much of a comment's impact to come from its effects on that audience.

The polls in that specific group look like they have a very small and probably unrepresentative sample size. Though I don't think we'll be able to get a much larger one on such a question, I guess.
Open Philanthropy's Cause Exploration Prizes: $120k for written work on global health and wellbeing

The flower was licensed from this site.

The designer saw and appreciated this comment, but asked not to be named on the Forum.

"Big tent" effective altruism is very important (particularly right now)

I didn't get that message at all. If someone tells me they downvoted something I wrote, my default takeaway is "oh, I could have been more clear" or "huh, maybe I need to add something that was missing" — not "yikes, I shouldn't have written this". *

I read Max's comment as "I thought this wasn't written very clearly/got some things wrong", not "I think you shouldn't have written this at all". The latter is, to me, almost the definition of a strong downvote.

If someone sees a post they think (a) points to important issues, and (b) gets important things wrong... (read more)

Guy Raveh · 1mo · 7 points
I don't share your view about what a downvote means. However, regardless of what I think, it doesn't actually have any fixed meaning beyond that which people assign to it - so it'd be interesting to have some stats on how people on the forum interpret it.

Most(?) readers won't know who either of them is, not to mention their relationship.
EA is more than longtermism

I'll read any reply to this and make sure CEA sees it, but I don't plan to respond further myself, as I'm no longer working on this project. 

 

Thanks for the response. I agree with some of your points and disagree with others. 

To preface this, I wouldn't make a claim like "the 3rd edition was representative for X definition of the word" or "I was satisfied with the Handbook when we published it" (I left CEA with 19 pages of notes on changes I was considering). There's plenty of good criticism that one could make of it, from almost any perspec... (read more)

AnonymousEAForumAccount · 1mo · 4 points
That’s helpful anecdata about your teaching experience. I’d love to see a more rigorous and thorough study of how participants respond to the fellowships, to see how representative your experience is.

I’m pretty sure I’ve heard it used in the context of a scenario questioning whether torture is justified to stop the threat of a dirty bomb that’s about to go off in a city.

That’s a good excuse :) I misinterpreted Michael’s previous comment as saying his feedback didn’t get incorporated at all. This process seems better than I’d realized (though still short of what I’d have liked to see after the negative reaction to the 2nd edition).

GiveWell’s Giving 101 [https://www.givewell.org/giving101] would be a great fit for global poverty. For animal welfare content, I’d suggest making the first chapter of Animal Liberation part of the essential content (or at least further reading), rather than part of the “more to explore” content. But my meta-suggestion would be to ask people who specialize in doing poverty/animal outreach for suggestions.
"Big tent" effective altruism is very important (particularly right now)

This is a minor point in some ways but I think explicitly stating "I downvoted this post" can say quite a lot (especially when coming from someone with a senior position in the community).

I ran the Forum for 3+ years (and, caveat, worked with Max). This is a complicated question.

Something I've seen many times: A post or comment is downvoted, and the author writes a comment asking why people downvoted (often seeming pretty confused/dispirited). 

Some people really hate anonymous downvotes. I've heard multiple suggestions that we remove anonymity from vo... (read more)

I think the problem isn't with saying you downvoted a post and why (I personally share the view that people should aim to explain their downvotes).

The problem is the actual reason:

I think you're pointing to some important issues... However, I worry that you're conflating a few pretty different dimensions, so I downvoted this post.

The message that, for me, stands out from this is "If you have an important idea but can't present it perfectly - it's better not to write at all." Which I think most of us would not endorse.

EA is more than longtermism

While at CEA, I was asked to take the curriculum for the Intro Fellowship and turn it into the Handbook, and I made a variety of changes (though there have been other changes to the Fellowship and the Handbook since then, making it hard to track exactly what I changed). The Intro Fellowship curriculum and the Handbook were never identical.

I exchanged emails with Michael Plant and Sella Nevo, and reached out to several other people in the global development/animal welfare communities who didn't reply. I also had my version reviewed by a dozen test readers (... (read more)

AnonymousEAForumAccount · 1mo · 8 points
Thanks for sharing this history and your perspective, Aaron. I agree that 1) the problems with the 3rd edition were less severe than those with the 2nd edition (though I’d say that’s a very low bar to clear) and 2) the 3rd edition looks more representative if you weigh the “more to explore” sections equally with “the essentials” (though IMO it’s pretty clear that the curriculum places way more weight on the content it frames as “essential” than on content linked at the bottom of the “further reading” section).

I disagree with your characterization of "The Effectiveness Mindset", "Differences in Impact", and "Expanding Our Compassion" as neartermist content in a way that’s comparable to how subsequent sections are longtermist content. The early sections include some content that is clearly neartermist (e.g. “The case against speciesism” and “The moral imperative toward cost-effectiveness in global health”). But much, maybe most, of the "essential" reading in the first three sections isn’t really about neartermist (or longtermist) causes. For instance, “We are in triage every second of every day” is about… triage. I’d also put “On Fringe Ideas”, “Moral Progress and Cause X”, “Can one person make a difference?”, “Radical Empathy”, and “Prospecting for Gold” in this bucket. By contrast, the essential reading in the “Longtermism”, “Existential Risk”, and “Emerging technologies” sections is all highly focused on longtermist causes/worldviews; it’s all stuff like “Reducing global catastrophic biological risks”, “The case for reducing existential risk”, and “The case for strong longtermism”.

I also disagree that the “What we may be missing?” section places much emphasis on longtermist critiques (outside of the “more to explore” section, which I don’t think carries much weight, as mentioned earlier). “Pascal’s mugging” is relevant to, but not specific to, longtermism, and “The case of the missing cause prioritization research” doesn’t criticize longtermist ideas per se, i
Bad Omens in Current Community Building

This is a tricky question to answer, and there's some validity to your perspective here. 

I was speaking too broadly when I said there were "rare exceptions" when epistemics weren't the top consideration.

Imagine three people applying to jobs:

  • Alice: 3/5 friendliness, 3/5 productivity, 5/5 epistemics
  • Bob: 5/5 friendliness, 3/5 productivity, 3/5 epistemics
  • Carol: 3/5 friendliness, 5/5 productivity, 3/5 epistemics

I could imagine Bob beating Alice for a "build a new group" role (though I think many CB people would prefer Alice), because friendliness is so cru... (read more)

Aaron Gertler's Shortform

Memories from starting a college group in 2014

In August 2014, I co-founded Yale EA (alongside Tammy Pham). Things have changed a lot in community-building since then, and I figured it would be good to record my memories of that time before they drift away completely.

If you read this and have questions, please ask!

 

Timeline

I was a senior in 2014, and I'd been talking to friends about EA for years by then. Enough of them were interested (or just nice) that I got a good group together for an initial meeting, and a few agreed to stick around and help me r... (read more)

Some potential lessons from Carrick’s Congressional bid

I'd recommend cross-posting your critiques of the "especially useful" post onto that post — will make it easier for anyone who studies this campaign later (I expect many people will) to learn from you.

mic · 1mo · 2 points
Thanks for the suggestion, just copied the critiques of the "especially useful" post over!
Some potential lessons from Carrick’s Congressional bid

Thanks for sharing all of this!

I'm curious about your fear that these comments would negatively affect Carrick's chances. What was the mechanism you expected? The possibility of reduced donations/volunteering from people on the Forum? The media picking up on critical comments?

If "reduced donations" were a factor, would you also be concerned about posting criticism of other causes you thought were important for the same reason?  I'm still working out what makes this campaign different from other causes (or maybe there really are similar issues across a... (read more)

I think I was primarily concerned that negative information about the campaign could get picked up by the media. Thinking it over now though, that motivation doesn't make sense for not posting about highly visible negative news coverage (which the media would have already been aware of) or not posting concerns on a less publicly visible EA platform, such as Slack. Other factors for why I didn't write up my concerns about Carrick's chances of being elected might have been that:

  • no other EAs seemed to be posting much negative information about the campaign, a
... (read more)
Some potential lessons from Carrick’s Congressional bid

I think that the principal problem pointed out by the recent "Bad Omens" post was peer pressure towards conformity in ways that lead to people acting like jerks, and I think that we're seeing that play out here as well, but involving central people in EA orgs pushing the dynamics, rather than local EA groups. And that seems far more worrying.

What are examples of "pressure toward conformity" or "acting like jerks" that you saw among "central people in EA orgs"? Are you counting the people running the campaign as “central”? (I do agree with some of Matthew’s... (read more)

Bluefalcon · 1mo · 3 points
The only EA who's ever been an asshole to me was an asshole because I supported Flynn, so I don't think there was some hidden anti-donations-to-Flynn movement that self-censored. EAs who opposed the idea were quite loud about it.

Overall, I agree with Habryka's comment that "negative evidence on the campaign would be 'systematically filtered out'". Although I maxed out donations to the primary campaign and phone banked a bit for the campaign, I had a number of concerns about the campaign that I never saw mentioned in EA spaces. However, I didn't want to raise these concerns for fear that this would negatively affect Carrick's chances of winning the election.

Now that Carrick's campaign is over, I feel more free to write my concerns. These included:

  • The vast majority of media coverage
... (read more)
Some potential lessons from Carrick’s Congressional bid

Here are some impressions of him from various influential Oregonians. No idea how these six were chosen from the "more than a dozen" originally interviewed.

Charles He · 1mo · 4 points
It seems reasonable that most of these people were selected because they were expected to be hostile and aligned with other candidates. I believe this because it is probably this sort of “local establishment” to whom the EA candidate had the weakest comparative ties (or whom he was even threatening), and also because the WW put out specific aggressive pieces of low substance (the “owl” quote).
Choosing causes re Flynn for Oregon

Thanks for writing this. While I don’t personally enjoy being featured, I appreciate the post as a Forum reader and former mod.

A few notes on my approach to donating, since I was quoted:

  • Before choosing to donate, I spoke with two members of Carrick's campaign team about their plans and what the early donations would go toward, did some background reading on the district and Oregon politics, and looked over Carrick's campaign website and work history.
  • My view wound up looking similar to Zach's (updated) view — Carrick's chances weren't great, but this still
... (read more)
Charles He · 1mo · 2 points
You don’t understand the premise of my comments in that thread (I am guessing) you reported, where I contravened _pk. The goal of my comments there is to mitigate issues of anonymous accounts that don’t contain new or verifiable information but are mainly emotional or appeal to norms or authority of various kinds (“I can see OR-6 from my house”). Anonymous comments like _pk’s (even if sweetly written and >70% likely to be authentic) are problematic when emotional and influencing behaviour. This is extremely so when it involves movement of money, which has no precedent on an Internet forum.

I distrust your judgement about the issue described by this paragraph. As part of a number of considerations, I take ownership of the repercussions by sandbagging this persona, which I believe mitigates issues. It is possible that inept contravening will simply cause me to relax this sandbag, to achieve the aims in the above paragraph, but the resulting envelope of achievable results will be much worse (e.g. if this establishes some wretched authority or something to this account, the issues of which are sort of being pointed at by the post and your own comment).

The forum is really complicated, and the norms under the previous moderator set things up for failure.
Zach Stein-Perlman · 1mo · 8 points
Thanks very much for your comments. I almost entirely approve/agree, and I think this is all useful. (And I'm sorry to quote you in particular, but that quote was one good example of the phenomenon.) I'd just add two things:

I wasn't surprised that Protect Our Future intervened (though I was surprised by how much it spent). Others with relevant knowledge might have been able to confidently predict that, or other relevant factors, in advance. I think donating was correct in your epistemic state. But in general, even if one believes donating to X is higher EV than donating to anything else, it doesn't imply that one should donate to X, if there's also the option to learn more first. Repeating something from my post: it's not worth spending so much time to guide a donation of $3K, but for a community donation of ~$900K it seems worth being more methodical.

I tentatively agree. But harm arises even if there's no social punishment, just from the fear of social punishment leading to self-censorship.
Leftism virtue cafe's Shortform

I found this post harder to understand than the rest of the series. The thing you're describing makes sense in theory, but I haven't seen it in practice and I'm not sure what it would look like.

 

What EA-related lifestyle changes would other people find alienating? Veganism? Not participating in especially expensive activities? Talking about EA?

I haven't found "talking about EA" to be a problem, as long as I'm not trying to sell my friends on it without their asking first. I don't think EA is unique in this way — I'd be annoyed if my relig... (read more)

Bad Omens in Current Community Building

This is a great post! Upvoted. I appreciate the exceptionally clear writing and the wealth of examples, even if I'm about 50/50 on agreeing with your specific points.

I haven't been involved in university community building for a long time, and don't have enough data on current strategies to respond comprehensively. Instead, a few scattered thoughts:

I was talking to a friend a little while ago who went to an EA intro talk and is now doing one of 80,000 Hours' recommended career paths, with a top score for direct impact. She’s also one of the most charismati

... (read more)
5nananana.nananana.heyhey.anon1mo
Appreciate your comments, Aaron. You say: But I am confident that leaders' true desire is "find people who have great epistemics [and are somewhat aligned]", not "find people who are extremely aligned [and have okay epistemics]". I think that's true for a lot of hires. But does that hold equally true when you think of hiring community builders specifically? In my experience (~5 people), leaders' epistemic criteria seem less stringent for community building. Familiarity with EA, friendliness, and productivity seemed more salient.

Minor elaboration on your last point: a piece of advice I got from someone who did psychological research on how to solicit criticism was to try to brainstorm someone's most likely criticism of you would be, and then offer that up when requesting criticism, as this is a credible indication that you're open to it. Examples:

  • "Hey, do you have any critical feedback on the last discussion I ran? I talked a lot about AI stuff, but I know that can be kind of alienating for people who have more interest in political action than technology development... Does th
... (read more)
Bad Omens in Current Community Building

Privately discussed info in a CRM seems like an invasion of privacy.

I've seen non-EA college groups do this kind of thing and it seems quite normal. Greek organizations track which people come to which pledge events, publications track whether students have hit their article quota to join staff, and so on.

Doesn't seem like an invasion of privacy for an org's leaders to have conversations like "this person needs to write one more article to join staff" or  "this person was hanging out alone for most of the last event, we should try and help them feel more comfortable next time".

I keep going back and forth on this.

My first reaction was "this is just basic best practice for any people-/relationship-focused role, obviously community builders should have CRMs".

Then I realised none of the leaders of the student group I was most active in had CRMs (to my knowledge) and I would have been maybe a bit creeped out if they had, which updated me in the other direction.

Then I thought about it more and realised that group was very far in the direction of "friends with a common interest hang out", and that for student groups that were less like... (read more)

Bad Omens in Current Community Building

I've seen people make these complaints about EA since it first came to exist. 

As EA becomes bigger and better-known, I expect to see a higher volume of complaints even if the average person's impression remains the same/gets a bit better (though I'm not confident that's the case either).

This includes groups with no prior EA contact learning about it and deciding they don't like it — but I think they'd have had the same reaction at any point in EA's history.

Are there notable people or groups whose liking/trust of EA has, in your view, gone down over time?

EA is more than longtermism

The 80K board is an understandable proxy for "jobs in EA". But that description can be limiting.

Many non-student EA Global attendees had jobs at organizations that most wouldn't label "EA orgs", and that nevertheless fit the topics of the conference. 

Examples:

  • The World Bank
  • Schmidt Futures
  • Youth for the Treaty on the Prohibition of Nuclear Weapons
  • UK Department of Health and Social Care
  • US Treasury Department
  • House of Commons
  • Development Innovation Lab
  • A bunch of think tanks

Some of these might have some of their jobs advertised by 80K, but there are also ton... (read more)

EA is more than longtermism

A count of topics at EAG and EAGx events from this year shows a roughly 3:1 AI/longtermist to anything else ratio

I'm not sure where to find agendas for past EAGx events I didn't attend. But looking at EAG London, I get a 4:3 ratio for LT/non-LT (not counting topics that fit neither "category", like founding startups):

LT

  • "Countering weapons of mass destruction"
  • "Acquiring and applying information security skills for long-term impact"
  • "How to contribute to the UN's 'Our Common Agenda' report" (maybe goes in neither category? Contributions from EA people so far have b
... (read more)
Aaron Gertler's Shortform

If you want recommendations, just take the first couple of items in each category. They are rated in order of how good I think they are. (That's if you trust my taste — I think most people are better off just skimming the story summaries and picking up whatever sounds interesting to them.)

1Rubi2mo
Cool, thanks!
You Should Write a Forum Bio

Huzzah! Hope you enjoy your time here.

NunoSempere's Shortform

We publish our giving to political causes just as we publish our other giving (e.g. this ballot initiative).

As with contractor agreements, we publish investments and include them in our total giving if they are conceptually similar to grants (meaning that investments aren't part of the gap James noted).  You can see a list of published investments by searching "investment" in our grants database.

NunoSempere's Shortform

We're still in the process of publishing our 2021 grants, so many of those aren't on the website yet. Most of the yet-to-be-published grants are from the tail end of the year — you may have noticed a lot more published grants from January than December, for example. 

That accounts for most of the gap. The gap also includes a few grants that are unusual for various reasons (e.g. a grant for which we've made the first of two payments already but will only publish once we've made the second payment a year from now). 

We only include contractor agreeme... (read more)

Concerns about AMF from GiveWell reading - Part 4

A belated thanks for this reply! I've reached the end of my knowledge/spare time for research at this point, but I'll keep an eye out for any future posts of yours on these topics.

Aaron Gertler's Shortform

The group was small and didn't accomplish much, and this was a long time ago. I don't think the post would be interesting to many people, but I'm glad you enjoyed reading it!

Aaron Gertler's Shortform

Memories from running a corporate EA group

From August 2015 to October 2016, I ran an effective altruism group at Epic, a large medical software corporation in Wisconsin. Things have changed a lot in community-building since then, but I figured it would be good to record my memories of that time, and what I learned.

If you read this and have questions, please ask!

Launching the group

  • I launched with two co-organizers, both of whom stayed involved with the group while they were at Epic (but who left the company after ~6 months and ~1 year, respectively, leaving
... (read more)
3Austin2mo
Thank you for writing this up! This is a kind of experience I haven't seen expressed much on the Forum, so I found it extra valuable to read about. Curious: any reason it's not a top-level post?
Aaron Gertler's Shortform

The book poses an interesting and difficult problem that characters try to solve in a variety of ways. The solution that actually works involves a bunch of plausible game theory and feels like it establishes a realistic theory of how a populous universe might work. The solutions that don't work are clever, but fail for realistic reasons. 

Aside from the puzzle element of the book, it's not all that close to ratfic, but the puzzle is what compelled me. Certainly arguable whether it belongs in this category.

Aaron Gertler's Shortform

I think that "intense, fanatical dedication to worldbuilding" + "tons of good problem-solving from our characters, which we can see from the inside" adds up to ratfic for me, or at least "close to ratfic". Worm delivers both.

1JasperGeh2mo
Ah, that makes sense. I absolutely adore Fine Structure and Ra, but never considered them ratfic (though I don't know whether Sam Hughes hangs out in rat circles)
2alexrjl2mo
Sounds right to me! I'm reading Worth the Candle at the moment :)
Aaron Gertler's Shortform

My ratings and reviews of rationalist fiction

I've dedicated far too much time to reading rationalist fiction. This is a list of stories I think are good enough to recommend.

Here's my entire rationalist fiction bookshelf — a mix of works written explicitly within the genre and other works that still seem to belong. (I've written reviews for some, but not all.)

Here are subcategories, with stories ranked in rough order from "incredible" to "good". The stories vary widely in scale, tone, etc., and you should probably just read whatever seems most interesting to... (read more)

1Rubi2mo
I like many books on the list, but I think you're doing a disservice by trying to recommend too many books at once. If you can cut it down to 2-3 in each category, that gives people a better starting point.
2EdoArad2mo
I also love Alexander Wales' ongoing This Used To Be About Dungeons [https://www.royalroad.com/fiction/45534/this-used-to-be-about-dungeons]
4alexrjl2mo
I'd be keen to hear how you're defining the genre, especially when the author isn't obviously a member of the community. I loved Worm and read it a couple of years ago, at least a year before I was aware rational fiction was a thing, and don't recall thinking "wow, this seems really rationalist" so much as just "this is fun, words go brrrrrrrr"
2Charles He2mo
Can you give your view why The Dark Forest is an example of near rationalist work? I guess it shows societal dysfunction, the (extreme) alienness or hostility of reality, and some intense applications of game theory. I think I want to understand “rationality” as much as the book.
How to use the Forum

The Forum doesn't have built-in support for internal links, in either editor. 

Internal links take the form of PostURL#subheading_title_with_spaces_represented_by_underscores, with punctuation and extra spaces taking the form of additional underscores. 

You can also right-click on a subheading and select "copy link address" to get the URL on your clipboard. Or just click the subheading and see what URL shows up in your address bar.
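To make the rule above concrete, here's a rough sketch in Python. The `subheading_anchor` helper is purely illustrative (it isn't a Forum feature), and the exact slugging logic is my assumption based on the description above — each space or punctuation character becomes one underscore:

```python
import re

def subheading_anchor(post_url: str, subheading: str) -> str:
    """Build an internal link by turning each space or punctuation
    character in the subheading into an underscore, then appending
    the result to the post URL as a fragment."""
    slug = re.sub(r"[^A-Za-z0-9]", "_", subheading)
    return f"{post_url}#{slug}"

print(subheading_anchor("https://forum.example.org/posts/abc123/my-post",
                        "Launching the group"))
# → https://forum.example.org/posts/abc123/my-post#Launching_the_group
```

In practice, the right-click method above is more reliable, since it gives you exactly the anchor the page generates.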

If you want to push for an internal links feature, use this thread (not sure whether someone else has suggested it already; you may want to look around a bit).

1david_reinstein2mo
Ok I added a +1 for an existing comment suggesting this
With how many EA professionals have you noticed some degree of dishonesty about how impactful it would be to work for them?

If your intention was to elicit stories rather than to get a sense for how common dishonesty was, your wording makes sense.

I had assumed you were trying to do the second thing, and my comment was honest. I try to be straightforward with all of my Forum comments.

1stag2mo
Ah I see -- sorry for not giving you the benefit of the doubt! That may have been due to the curse of knowledge -- it felt obvious to me what I was trying to do with posting this question. [ETA: Ah hm, when I go back and read the question, it does seem like your interpretation is more reasonable than I thought; I think I was trying to elicit stories, but the wording of the question doesn't match that intention (as you tried to tell me!)]
Aaron Gertler's Shortform

Advice on looking for a writing coach

I shared this with someone who asked for my advice on finding someone to help them improve their writing. It's brief, but I may add to it later.

I think you'll want someone who:

* Writes in a way that you want to imitate. Some very skilled editors/teachers can work in a variety of styles, but I expect that most will make your writing sound somewhat more like theirs when they provide feedback.

* Catches the kinds of things you wish you could catch in your writing. For example, if you want to try someone out, you could ask t

... (read more)
How to use the Forum

Depends on your AI timelines.

There's no limit to when you can edit a post.

Open Thread: Spring 2022

Ahead of the full post, I'd like to know what you think the most compelling evidence is for non-invasive brain stimulation actually working. This could be a paper, a blog post from some self-experimenter, or something else — whatever made you think this was important to study further.

(I know nothing about this topic at all, and don't even have a mental picture of what NIBS would physically look like.)

1Jake Toth3mo
Thanks Aaron, I will make sure to include this information, but hopefully this will help in the meantime. Non-invasive brain stimulation is any method of causing brain activity to change without surgery. This can include: using electrodes to apply a small amount of current to the scalp with a headset like this: https://www.neuroelectrics.com/solutions/starstim; creating a magnetic field in the brain with a device like this: https://www.healthline.com/health/tms-therapy#What-is-TMS-therapy?; or using ultrasound waves with a device that looks something like the image here: https://www.semanticscholar.org/paper/Technical-Review-and-Perspectives-of-Transcranial-Yoo/c26b8b3655561cfb24dfb262d4fbf5ad76bc6867. The electrical and magnetic stimulation methods are well established, with decades of research covering tens of thousands of participants and proven safety profiles. The magnetic method is too bulky for a consumer headset, and the electrical method has issues with reliability across subjects (my research plays a small part in helping to address this). The ultrasound method is newer, but with the promise of much more accurate stimulation. Without going too deep into the technical challenges that remain, I think an electrical-stimulation-based headset that increases intelligence significantly could be available to consumers within 5 years, with an ultrasound-based headset superseding that once the research is more firmly established.
Announcing Alvea—An EA COVID Vaccine Project

Thanks! This is exactly the kind of thing I was looking for.

Should We Studio – high-quality, EA-aligned video content

If you share info about this project elsewhere, I'd recommend mentioning your background with TED-Ed! I thought the videos linked from the SWS website were great, and people might be more interested in checking out the open roles if they know more about who they'll work with before they click through to the site.
