



Oh I see! Yeah, crazy stuff. I liked the attention it paid to the role of foundation funding. I've seen this critique of foundations included in some intro fellowships, so I wonder if it would resonate especially with leftists who, in light of the Intercept piece, are fed up with cancel culture.

I don't think anything here attempts a representation of "the situation in leftist orgs"? But yes lol same




This is a response to D0TheMath, quinn, and Larks, who all raise some version of this epistemic concern:

(1) Showing how EA is compatible with leftist principles requires being disingenuous about EA ideas → (2) people are recruited who join solely based on framing/language → (3) people join the community who don't really understand what EA is about → (4) confusion!

The reason I am not concerned about this line of argumentation is that I don't think it attends to the ways people decide whether to become more involved in EA.

(2) In my experience, people are most likely to drop out of the fellowship during the first few weeks, while they're figuring out their schedules for the term and weighing whether to make the program one of their commitments. During this period, I think newcomers are easily turned off by the emphasis on quantification and triage. The goal is to find common ground on ideas with less inferential distance so fellows persevere through this period of discomfort and uncertainty, and to earn some weirdness points that you can spend in the weeks to come, e.g. when introducing X risks. So people don't join solely based on framing/language; rather, these are techniques to extend a minimal degree of familiarity to smart and reasonable people who would otherwise fail to give the fellowship a chance.

(3) I think it's very difficult to maintain inaccurate beliefs about EA for long. These will be dispelled as the fellowship continues and students read more EA writing, as they continue on to an in-depth fellowship, as they begin their own exploration of the forum, and as they talk to other students who are deeper in the EA fold. Note that all of these generally occur before attending EAG or applying for an EA internship/job, so fellows who do hold on to inaccurate beliefs will likely drop out before triggering the harms of confusion in the broader community.

(I'm also not conceding (1), but it's not worth getting into here.)

Yeah, maybe if your fellows span a broad political spectrum, you risk alienating some of them and have to prioritize. But the way these conversations actually go in my experience is that one fellow raises an objection, e.g. "I don't trust charities to have the best interests of the people they serve at heart." And then it falls to the facilitator to respond to this objection, e.g. "yes, PlayPumps illustrates this exact problem, and EA is interested in improving these standards so charities are actually accountable to the people they serve," etc.

My sense is that the other fellows during this interaction will listen respectfully, but they will understand that the interaction is a response to one person's idiosyncratic qualms, and that the facilitator is tailoring their response to that person's perspective. The interaction is circumscribed by that context, and the other fellows don't come away with the impression that EA only cares about accountability. In other words, the burden of representation is suspended somewhat in these interactions.

If we were writing an Intro to EA Guide, for example, I think we would have to be much more careful about the political bent of our language because the genre would be different.

I agree with quinn. I'm not sure what the mechanism is by which we end up with lowered epistemic standards. If an intro fellow is the kind of person who weighs reparative obligations very heavily in their moral calculus, then deworming donations may very well satisfy this obligation for them. This is not an argument that motivates me very much, but it may still be a true argument. And making true arguments doesn't seem bad for epistemics? Especially at the point where you might be appealing to people who are already consequentialists, just consequentialists with a developed account of justice that attends to reparative obligations.

Thanks for the reply! I'm satisfied with your answer and appreciate the thought you've put into this area :) I do have a couple follow-ups if you have a chance to share further:

I expected to hear about the value of the connections made at EAG, but I'm not sure how to think about the counterfactual here. Surely some of the people who choose to meet up at EAG would, in the absence of the conference, have connected virtually anyway?

I also wonder about the cause areas of the EA-aligned orgs you cited. I.e., I could imagine longtermist orgs that are more talent-constrained estimating a higher dollar value for a connection than, say, a global health org that is more funding-constrained. So I think EAs with different priorities might have different bliss points for conference funding levels.

It also seems like there might be tension between veteran and newcomer EAs? E.g., people who have been in the fold for longer might prefer simpler arrangements. In particular, I worry about pandering to "potential donors." Who are these donors who are so unaligned that their conference experience will determine the size of their future donations? Even if they do exist, this seems like a reason to have a "VIP ticket" or something.

Ultimately, the conference budget is one lens on the question: who is EAG for? And I wonder if that question is currently being resolved in favor of longtermist orgs and new donors.
