All of Alejandro Acelas's Comments + Replies

Oh, sorry. I'll expand the abbreviation in the original comment. It's 'Community Building resources'.

2
Geoffrey Miller
10mo
OK! Thanks for the explanation.

Thanks for putting numbers to my argument! I was expecting a greater proportion of left-leaning individuals among the college educated, so this was a useful update.

One reason the political orientation gap might be less worrying than it appears at first sight is that it probably stems partly from EA's overwhelmingly young skew. Young people in many countries (and perhaps especially in the countries where EA has a greater presence) tend to be more left-leaning than the general population.

This might be another reason to onboard more older people to EA relative to the pool of new members, but if you thought that would involve significant costs (e.g. attracting fewer young, talented EAs because fewer community building resources were directed towards that demographic), then perhaps in equilibrium we should expect a somewhat skewed distribution of political orientations.

3
Geoffrey Miller
10mo
Alejandro - I think you're right that the leftward skew is partly explained by the youth-skew in the EA age distribution, plus the commonly observed correlation between age and conservatism. I also agree that more active recruitment of older people could help balance this out somewhat. (I've critiqued EA's implicit ageism a number of times in EA Forum comments.) What do you mean by 'CB resources' though? Not familiar with the term.

I agree this may stem partly from EA's very strong age skew, but I don't think this can explain a very large part of the difference. 

Within the US, Gen Z are 17% Republican and 31% Democrat (52% Independent), while Millennials are 21% Republican and 27% Democrat (52% Independent). Even among the younger group, this is only a ~2:1 skew, whereas US EAs are 77% left-leaning and 2.1% right-leaning (a ~37:1 skew). Granted, the young Independents may also be mostly left-leaning, which would increase the disparity in the general population. Of course, this is loo... (read more)
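
As a rough check of those ratios, here is a minimal sketch using only the percentages quoted above (the variable names are mine, just for illustration):

```python
# Skew ratios implied by the party-identification shares quoted above (in percent).
gen_z_dem, gen_z_rep = 31, 17
us_ea_left, us_ea_right = 77, 2.1

print(f"Gen Z Dem:Rep skew    ~ {gen_z_dem / gen_z_rep:.1f}:1")      # ~1.8:1, i.e. roughly 2:1
print(f"US EA left:right skew ~ {us_ea_left / us_ea_right:.1f}:1")   # ~36.7:1, i.e. roughly 37:1
```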

Thanks for taking the time to respond! I find your point of view more plausible now that I understand it a little bit better (though I'm still not sure of how convincing I find it overall).

1
Noah Scales
1y
Sure, and thank you for being interested in what I have written here. I didn't offer an argument meant to convince, more a listing of perspectives on what is actually happening around EA "updating". For example, to know that an EA is confusing judgments of feeling intensity and truth probability, I would have to have evidence that they are acting in good faith to "update" in the first place, rather than (unconsciously) pursuing some other agenda. As another example, to know that an EA has a betting problem, the psychological kind, I would have to see pathological betting behavior on their part (for example, behavior that ruined their finances or their relationships).

Different EAs are typically doing different things with their "updating" at different times. Some of them are sure to have a bit of risk-seeking that is unhealthy and distorts their perspective, others are doing their best to measure their feelings of certainty as a fractional number, and still others are being cynical with their offerings of numbers. If the superforecasters among the EAs are doing so well with their probability estimates, I wish they would offer some commentary on free will, game theory, and how they model indeterminism. I would learn something.

If there were a tight feedback loop about process, if everyone understood not only the math but also the evidence that moves your credence numbers, if there were widespread agreement that a new bit of evidence X should move the probability of credence Y by an amount Z, then I could believe that there was systematic care about epistemics in the community. I could believe that EA folks are always training to improve their credence probability assignments. I could believe that EA folks train their guts, hone their rationality, and sharpen their research skills, all at the same time. But what actually goes on in EA seems very subjective: anyone can make up any number, claim their evidence justifies it, and not really have to prove anything, and in

I'm not sure if I understand where you're coming from, but I'd be curious to know: do you think similarly of EAs who are Superforecasters or have a robust forecasting record?

In my mind, updating may as well be a ritual, but if it's a ritual that allows us to better track reality then there's little to dislike about it. As an example of how precise numerical reasoning could help, the book Superforecasting describes how rounding superforecasters' predictions (interpreting a .67 probability of X happening as a .7 probability) increases the error of the predictions. The book also includes many other examples where I think numerical reasoning confers a sizable advantage to its user.
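
As a rough illustration of that rounding effect, here is a minimal sketch (my own toy simulation with perfectly calibrated forecasts, not the book's data), scoring the same forecasts with the Brier score at full precision and after rounding to one decimal place:

```python
import random

random.seed(0)

# Toy model: forecasts are calibrated, i.e. an event forecast at probability p
# actually happens with probability p.
n = 100_000
forecasts = [random.random() for _ in range(n)]
outcomes = [1 if random.random() < p else 0 for p in forecasts]

def brier(probs, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Round each forecast to one decimal place, e.g. 0.67 -> 0.7.
rounded = [round(p, 1) for p in forecasts]

print(f"Brier score, full precision: {brier(forecasts, outcomes):.4f}")
print(f"Brier score, rounded to .1:  {brier(rounded, outcomes):.4f}")  # slightly but reliably worse
```

The gap is small in this toy setup, but for calibrated forecasts it only ever goes one way: throwing away granularity can't improve the expected score.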

2
Noah Scales
1y
Thank you for the question! In my understanding, superforecasters claim to do well at short-term predictions, in the range of several years, and with respect to a few domains. That is not me speaking from my own judgement; that's some discussion I read about them. They have no reason to update on their own forecasts outside a certain topic and time domain, so to speak. I can track down the references and offer them if you like, but I think the problem I'm referring to is well known. I want to learn more about the narrow context and math in which superforecasters are considered "accurate" versus in error, and why that is so.

Offering odds as predictions is not the same as offering a straightforward prediction, and interpreting and using odds as a prediction is not the same as acting on a prediction. I suspect that there's a mistaken analogy between what superforecasters actually do and what it means to assign subjective probabilities to credences in general. EAs offer probabilities for just about any credence, but especially credences whose truth is very hard to ever determine, such as their belief in an upcoming existential harm. Accordingly, I don't believe that the mystique superforecasters have can rub off on EAs, and certainly superforecaster success cannot.

Other approaches for different purposes, like Fermi estimates, where you attempt to estimate the size of something by breaking it down into components and multiplying, are good ways to get a better estimate of whatever is being estimated, but I don't consider that an attempt by an EA to assign a probability to a credence in a typical context, and that is all I was focused on with my critique. Statistical estimation is used in a lot of domains, but not in the domain of beliefs. If I were sampling runs in a factory, looking to estimate the number of widgets with construction errors coming off the line with some random sampling, I wouldn't be thinking as I do here about EA updating. EAs don

What stops AI Safety orgs from just hiring ML talent outside EA for their junior/more generic roles?

1
JakubK
1y
I'd love to see a detailed answer to this question. I think a key bottleneck for AI alignment at the moment is finding people who can identify research directions (and then lead relevant projects) that might actually reduce x-risk, so I'm also confused why some career guides include software and ML engineering as one of the best ways to contribute. I struggle to see how software and ML engineering could be a bottleneck given that there are so many talented software and ML engineers out there. Counterpoint: infohazards mean you can't just hire anyone.

Oh, I totally agree that giving people the epistemics is mostly preferable to handing them the bottom line. My doubts come more from my impression that forming good epistemics in a relatively unexplored environment (e.g. cause prioritization within Colombia) is probably harder than in other contexts.
I know that our explicit aim with the group was at least to exhibit the kind of patience and rigour you describe, and that I ended up somewhat underwhelmed with the results. I initially wanted to try to parse out where our differing positions came from, but this comment eventually got a little long and rambling.
For now I'll limit myself to thanking you for making what I think is a good point.

Hi, thanks for the detailed reply. I mostly agree with the main thrust of your comment, but I think I feel less optimistic about what happens when you actually try to implement it.   

Like, we've had discussions in my group about how to prioritize cause areas, and in principle everyone agrees that we should work on causes that are bigger, more neglected, and more tractable. But when it comes to specific causes, it turns out that the unmeasured effects are the most important thing and the flow-through effects of the intervention I've always liked turn ... (read more)

4
Karthik Tadepalli
2y
I don't know if this is what you mean by cultivating better epistemics, but it seems super plausible to me that the comparative advantage of a Colombian EA university group is to work towards effective solutions to problems in Colombia. If you think most of your members will continue to stay in Colombia, and some of them might go into careers that could potentially be high impact for solving Colombian issues, that seems like a much more compelling thing to do than be the Nth group talking about AI or which GiveWell charity is better.

You don't need to convince everyone of everything you think in a single event. 🙂 You probably didn't form your worldview in the space of two hours either. 😉

When someone says they think giving locally is better, ask them why. Point out exactly what you agree with (e.g. it is easier to have an in-depth understanding of your local context) and why you still hold your view (e.g. that there are such large wealth disparities between different countries that there are some really low hanging fruit, like basic preventative measures of diseases like malaria, that... (read more)

Hmm, it’s funny, this post comes at a moment when I’m heavily considering moving in the opposite direction with my EA university group (towards being more selective and focused on EA-core cause areas). I’d like to know what you think of my reason for doing so.

My main worry is that as the interests of EA members broaden (e.g. to include helping locally), the EA establishment will have fewer concrete recommendations to offer and people will not have a chance to truly internalize some core EA principles (e.g. amounts matter, doubt in the absence of ... (read more)

A few points. First, I think we need to be clear that effective altruism is a movement encouraging use of evidence to do as much good as we can - and choosing what to work on should happen after gathering evidence. Listening to what senior EA movement members have concluded is a shortcut, and in many cases an unfortunate one. So the thing I would focus on is not the EA recommendations, but the concept of changing your mind based on evidence. It's fine for people to decide to focus locally instead of internationally, or to do good, but not the utmost good -... (read more)

If I wanted to be charitable to their answers about the cost of saving a life, I'd point out that $5,000 is roughly the cost of saving a life reliably and at scale. If you relax either of those conditions, saving a life might be cheaper (e.g. GiveWell sometimes finances opportunities more cost-effective than AMF, or perhaps you're optimistic about some highly leveraged interventions like political advocacy). However, I wouldn't bet that this phenomenon is behind a significant fraction of the divergence in their answers.

2
ryancbriggs
2y
I think that's fair (see also footnote 2). FWIW this was the actual question: “Consider a charity whose programs are among the most cost-effective ways of saving the lives of children. In other words, thinking across all charities that currently exist, this one can save a child’s life for the smallest amount of money. Roughly what do you think is the minimum amount of money that you would have to donate to this charity in order to expect that your money has saved the life of one child?”

Thanks for the post, Jan! I follow AI Alignment debates only superficially, and I had heard of the continuity assumption as a big source of disagreement, but I didn't have a clear concept of where it stemmed from and what its practical implications were. I think your post does a very good job of grounding the concept and filling those gaps.

These are just the first questions that came to mind, but they may not necessarily overlap with Andreas's interests or knowledge:

  • Given his deontological leanings, is there something he would like to see people in the EA community doing less/more of?
  • What's the paper/line of investigation from GPI that has changed his view on practical priorities for EA the most?
  • How involved in philosophical discussions should the median EA be? (e.g. should we all read Parfit or just muddle through with what we hear from informal discussions of ethics within the community?)
  • Wh
... (read more)

Thank you Shen, this is wonderful! With my local group in Colombia we're getting ready to stage a fellowship for the second time and hearing about your experience gave me many ideas for things we may try to improve on.

3
Shen Javier
3y
This is good to hear! We will be running our second intro fellowship soon too and I'm wishing you the best of luck for yours.