
Alejandro Acelas

100 karma · Joined Dec 2019 · Pursuing an undergraduate degree · Bogotá, Colombia

Comments (13)

Oh, sorry. I'll expand the abbreviation in the original comment. It's 'Community Building resources'.

Thanks for putting numbers to my argument! I was expecting a greater proportion of left-leaning individuals among the college educated, so this was a useful update.

A reason why the political orientation gap might be less worrying than it appears at first sight is that it probably stems partly from EA's overwhelmingly young skew. Young people in many countries (and perhaps especially in the countries where EA has a greater presence) tend to be more left-leaning than the general population.

This might be another reason to onboard proportionally more older people into EA, but if you thought that would involve significant costs (e.g. attracting fewer young talented EAs because fewer community-building resources were directed towards that demographic), then perhaps in equilibrium we should expect a somewhat skewed distribution of political orientations.

Thanks for taking the time to respond! I find your point of view more plausible now that I understand it a little better (though I'm still not sure how convincing I find it overall).

I'm not sure if I understand where you're coming from, but I'd be curious to know: do you think similarly of EAs who are Superforecasters or have a robust forecasting record?

In my mind, updating may as well be a ritual, but if it's a ritual that helps us track reality better, then there's little to dislike about it. As an example of how precise numerical reasoning can help, the book Superforecasting describes how rounding superforecasters' predictions (e.g. interpreting a .67 probability that X happens as a .7 probability) increases the error of those predictions. The book also includes many other examples where I think numerical reasoning confers a sizable advantage on its user.
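To make that concrete, here's a minimal simulation (my own sketch, not taken from the book): it assumes a well-calibrated forecaster whose stated probabilities equal the true event probabilities, and shows that rounding those probabilities to the nearest .1 consistently worsens the Brier score (mean squared error) of the forecasts.

```python
import random

random.seed(0)

def brier(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Simulate a well-calibrated forecaster: their stated probability
# equals the true probability of each event occurring.
true_probs = [random.random() for _ in range(100_000)]
outcomes = [1 if random.random() < p else 0 for p in true_probs]

fine = true_probs                           # e.g. a forecast of 0.67
coarse = [round(p, 1) for p in true_probs]  # e.g. 0.67 rounded to 0.7

print(f"Brier score, fine-grained: {brier(fine, outcomes):.4f}")
print(f"Brier score, rounded:      {brier(coarse, outcomes):.4f}")
# The rounded forecasts score slightly but reliably worse, mirroring the
# book's finding that coarsening superforecasters' predictions adds error.
```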

What stops AI Safety orgs from just hiring ML talent outside EA for their junior/more generic roles?

Oh, I totally agree that giving people the epistemics is mostly preferable to handing them the bottom line. My doubts come more from my impression that forming good epistemics in a relatively unexplored environment (e.g. cause prioritization within Colombia) is probably harder than in other contexts.
I know that, at least in our group, the explicit aim was to exhibit the kind of patience and rigour you describe, and that I still ended up somewhat underwhelmed by the results. I initially wanted to try to parse out where our differing positions came from, but this comment eventually got a little long and rambling.
For now I'll limit myself to thanking you for making what I think is a good point.

Hi, thanks for the detailed reply. I mostly agree with the main thrust of your comment, but I feel less optimistic about what happens when you actually try to implement it.

Like, we've had discussions in my group about how to prioritize cause areas, and in principle everyone agrees that we should work on causes that are bigger, more neglected, and more tractable. But when it comes to specific causes, it turns out that the unmeasured effects are the most important thing, and the flow-through effects of the intervention I've always liked turn out to compensate for its lack of demonstrated impact.

I'm not saying it's impossible to have those discussions, just that for a group constrained on people who've engaged with EA cause prioritization arguments, being able to rely on the arguments that others have put forward (as we often can for international causes) makes the job much easier. However, I'm open to the possibility that the best compromise might simply be to let people focus on local causes and double down on cultivating better epistemics.

(P.S.: I now realize that answering every comment on your post might be quite demanding, so feel free not to answer. I'll make sure to still update on the fact that you weren't convinced by my initial comment. If anything, your comment is making me consider alternatives that don't restrict growth yet avoid the epistemic consequences I described. I'm not sure I'll find something that satisfies me, but I'll muse on it a little further.)

Hmm, it's funny, this post comes at a moment when I'm seriously considering moving in the opposite direction with my EA university group (towards being more selective and focused on core EA cause areas). I'd like to know what you think of my reasoning for doing so.

My main worry is that as the interests of EA members broaden (e.g. to include helping locally), the EA establishment will have fewer concrete recommendations to offer, and people will not have a chance to truly internalize some core EA principles (e.g. amounts matter, doubt in the absence of measurement).

That has been an especially salient problem for my group, given that we live in a middle-income country (Colombia) and many people feel most excited about helping within our own country. However, when I've heard them make plans for how they would help, I struggle to see what difference we made by presenting them with EA ideas. They tend to choose causes based on a previous emotional connection rather than on attributes that suggest a better opportunity to help (e.g. by using the SNT framework of scale, neglectedness, and tractability). My expectation is that if we put more emphasis on the distinctive aspects of EA (and the concrete recommendations they imply), people will have a better chance to update on the ways that mainstream EA differs from what they already believed, and we will have a better shot at producing some counterfactual impact.

(Though, as a caveat, it's possible that my group members' tendency not to notice when EA ideas differ from their own stems from my particular aversion to openly questioning or contradicting people, rather than from their interest in less-explored ways of helping.)

If I wanted to be charitable about their answer on the cost of saving a life, I'd point out that $5,000 is roughly the cost of saving a life reliably and at scale. If you relax either of those conditions, saving a life might be cheaper (e.g. GiveWell sometimes finances opportunities more cost-effective than AMF, or perhaps you're optimistic about some highly leveraged interventions like political advocacy). However, I wouldn't bet that this phenomenon explains a significant fraction of the divergence in their answers.
