Thanks for putting numbers to my argument! I was expecting a greater proportion of left-leaning individuals among the college educated, so this was a useful update.
A reason why the political orientation gap might be less worrying than it appears at first sight is that it probably stems partly from EA's overwhelmingly young skew. Young people in many countries (and perhaps especially in the countries where EA has a greater presence) tend to be more left-leaning than the general population.
This might be another reason to onboard relatively more older people into EA, but if you thought that would involve significant costs (e.g. attracting fewer talented young EAs because fewer community-building resources were directed at that demographic), then perhaps in equilibrium we should accept a somewhat skewed distribution of political orientations.
I agree this may stem partly from EA's very strong age skew, but I don't think this can explain a very large part of the difference.
Within the US, Gen Z are 17% Republican and 31% Democrat (52% Independent), while Millennials are 21% Republican and 27% Democrat (52% Independent). Even among the younger group, this is only a ~2:1 skew, whereas US EAs are 77% left-leaning and 2.1% right-leaning (a ~37:1 skew). Granted, the young Independents may also be mostly left-leaning, which would increase the disparity in the general population. Of course, this is loo...
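As a quick sanity check of the ratios quoted above (a toy calculation using only the figures already in the comment):

```python
# Skew ratios implied by the quoted figures.
gen_z_skew = 31 / 17   # Democrat : Republican among Gen Z
ea_skew = 77 / 2.1     # left-leaning : right-leaning among US EAs

print(round(gen_z_skew, 1))  # ~1.8, i.e. roughly 2:1
print(round(ea_skew, 1))     # ~36.7, i.e. roughly 37:1
```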
Thanks for taking the time to respond! I find your point of view more plausible now that I understand it a little better (though I'm still not sure how convincing I find it overall).
I'm not sure if I understand where you're coming from, but I'd be curious to know: do you think similarly of EAs who are Superforecasters or have a robust forecasting record?
In my mind, updating may well be a ritual, but if it's a ritual that allows us to better track reality then there's little to dislike about it. As an example of how precise numerical reasoning can help, the book Superforecasting describes how rounding superforecasters' predictions (e.g. interpreting a .67 probability of X happening as a .7 probability) increases their prediction error. The book also includes many other examples where I think numerical reasoning confers a sizable advantage on its user.
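A minimal sketch of why rounding hurts a well-calibrated forecaster (a toy simulation of the general phenomenon, not the book's actual analysis; the uniformly drawn true probabilities are my own assumption):

```python
import random

# If a forecast p equals the true probability of the event, any nudge
# away from p (such as rounding to the nearest 0.1) can only increase
# expected Brier score. We compare the two forecasts on the same
# simulated outcomes so the effect isn't drowned out by noise.
random.seed(0)

n = 100_000
brier_exact = brier_rounded = 0.0
for _ in range(n):
    p = random.random()                       # calibrated forecast
    outcome = 1.0 if random.random() < p else 0.0
    brier_exact += (p - outcome) ** 2
    brier_rounded += (round(p, 1) - outcome) ** 2

# The rounded forecasts come out slightly but consistently worse.
print(brier_exact / n, brier_rounded / n)
```

The gap is small (on the order of 0.001 in Brier score here), which matches the book's point: the information lost by rounding is subtle but real.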
What stops AI Safety orgs from just hiring ML talent outside EA for their junior/more generic roles?
Oh, I totally agree that giving people the epistemics is mostly preferable to handing them the bottom line. My doubts come more from my impression that forming good epistemics in a relatively unexplored environment (e.g. cause prioritization within Colombia) is probably harder than in other contexts.
I know that at least our explicit aim with the group was to exhibit the kind of patience and rigour you describe and that I ended up somewhat underwhelmed with the results. I initially wanted to try to parse out where our differing positions came from, but this comment eventually got a little long and rambling.
For now I'll limit myself to thanking you for making what I think is a good point.
Hi, thanks for the detailed reply. I mostly agree with the main thrust of your comment, but I think I feel less optimistic about what happens when you actually try to implement it.
Like, we've had discussions in my group about how to prioritize cause areas, and in principle everyone agrees that we should work on causes that are bigger, more neglected, and more tractable, but when it comes to specific causes it turns out that the unmeasured effects are the most important thing and the flow-through effects of the intervention I've always liked turn ...
You don't need to convince everyone of everything you think in a single event. 🙂 You probably didn't form your worldview in the space of two hours either. 😉
When someone says they think giving locally is better, ask them why. Point out exactly what you agree with (e.g. it is easier to have an in-depth understanding of your local context) and why you still hold your view (e.g. that there are such large wealth disparities between countries that there is some really low-hanging fruit, like basic preventative measures against diseases like malaria, that...
Hmm, it’s funny, this post comes at a moment when I’m heavily considering moving in the opposite direction with my EA university group (towards being more selective and focused on EA-core cause areas). I’d like to know what you think of my reason for doing so.
My main worry is that as the interests of EA members broaden (e.g. to include helping locally), the EA establishment will have fewer concrete recommendations to offer and people will not have a chance to truly internalize some core EA principles (e.g. amounts matter, doubt in the absence of ...
A few points. First, I think we need to be clear that effective altruism is a movement encouraging use of evidence to do as much good as we can - and choosing what to work on should happen after gathering evidence. Listening to what senior EA movement members have concluded is a shortcut, and in many cases an unfortunate one. So the thing I would focus on is not the EA recommendations, but the concept of changing your mind based on evidence. It's fine for people to decide to focus locally instead of internationally, or to do good, but not the utmost good -...
If I wanted to be charitable to their answer on the cost of saving a life, I'd point out that $5000 is roughly the cost of saving a life reliably and at scale. If you relax any of those conditions, saving a life might be cheaper (e.g. GiveWell sometimes finances opportunities more cost-effective than AMF, or perhaps you're optimistic about some highly leveraged interventions like political advocacy). However, I wouldn't bet that this phenomenon is behind a significant fraction of the divergence in their answers.
Thanks for the post, Jan! I follow AI Alignment debates only superficially, and I had heard of the continuity assumption as a big source of disagreement, but I didn't have a clear concept of where it stemmed from or what its practical implications were. I think your post does a very good job of grounding the concept and filling those gaps.
These are just the first questions that came to mind, and they may not necessarily overlap with Andreas' interests or knowledge:
Thank you Shen, this is wonderful! With my local group in Colombia we're getting ready to run a fellowship for the second time, and hearing about your experience gave me many ideas for things we might try to improve on.
Oh, sorry. I'll expand the abbreviation in the original comment. It's 'Community Building resources'.