Alejandro Acelas

Pursuing an undergraduate degree
Bogotá, Colombia · Joined Dec 2019


Thanks for taking the time to respond! I find your point of view more plausible now that I understand it a little bit better (though I'm still not sure of how convincing I find it overall).

I'm not sure if I understand where you're coming from, but I'd be curious to know: do you think similarly of EAs who are Superforecasters or have a robust forecasting record?

In my mind, updating may as well be a ritual, but if it's a ritual that allows us to better track reality, then there's little to dislike about it. As an example of how precise numerical reasoning can help, the book Superforecasting describes how rounding superforecasters' predictions (e.g. interpreting a .67 probability of X happening as a .7 probability) increases the error of those predictions. The book also includes many other examples where I think numerical reasoning confers a sizable advantage on its user.

What stops AI Safety orgs from just hiring ML talent outside EA for their junior/more generic roles?

Oh, I totally agree that giving people the epistemics is mostly preferable to handing them the bottom line. My doubts come more from my impression that forming good epistemics in a relatively unexplored environment (e.g. cause prioritization within Colombia) is probably harder than in other contexts.
I know that our explicit aim with the group was at least to exhibit the kind of patience and rigour you describe, and that I ended up somewhat underwhelmed with the results. I initially wanted to try to parse out where our differing positions came from, but this comment eventually got a little long and rambling.
For now I'll limit myself to thanking you for making what I think is a good point.

Hi, thanks for the detailed reply. I mostly agree with the main thrust of your comment, but I think I feel less optimistic about what happens when you actually try to implement it.   

Like, we've had discussions in my group about how to prioritize cause areas, and in principle everyone agrees that we should work on causes that are bigger, more neglected, and more tractable. But when it comes to specific causes, it turns out that the unmeasured effects are the most important thing, and the flow-through effects of the intervention I've always liked turn out to compensate for its lack of demonstrated impact.

I'm not saying it's impossible to have those discussions, just that for a group short on people who've engaged with EA cause prioritization arguments, being able to rely on the arguments that others have put forward (like we can often do for international causes) makes the job much easier. However, I'm open to the possibility that the best compromise might simply be to let people focus on local causes and double down on cultivating better epistemics.

(P.S.: I now realize that replying to every comment on your post might be quite demanding, so feel free not to answer. I'll make sure to still update on the fact that you weren't convinced by my initial comment. If anything, your comment is making me consider alternatives that don't restrict the group's growth yet avoid the epistemic consequences I outlined. I'm not sure if I'll find something that satisfies me, but I'll muse on it a little further.)

Hmm, it’s funny, this post comes at a moment when I’m heavily considering moving in the opposite direction with my EA university group (towards being more selective and focused on EA-core cause areas). I’d like to know what you think of my reasoning for doing so.

My main worry is that as the interests of EA members broaden (e.g. to include helping locally), the EA establishment will have fewer concrete recommendations to offer, and people will not have a chance to truly internalize some core EA principles (e.g. amounts matter, doubt in the absence of measurement).

That has been an especially salient problem for my group, given that we live in a middle-income country (Colombia) and many people feel most excited about helping within our country. However, when I’ve heard them make plans for how they would help, I struggle to see what difference we made by presenting EA ideas to them. They tend to choose causes more by their previous emotional connection than by attributes that suggest a better opportunity to help (e.g. by using the SNT framework). My expectation is that if we place more emphasis on the distinctive aspects of EA (and the concrete recommendations they imply), people will have a better chance to update on the ways that mainstream EA differs from what they already believed, and we will have a better shot at producing some counterfactual impact.

(Though, as a caveat, it’s possible that my group members' tendency not to notice when EA ideas differ from their own may come from my particular aversion to openly questioning or contradicting people, rather than from the members’ interest in less-explored areas for helping.)

If I wanted to be charitable to their answer on the cost of saving a life, I'd point out that $5,000 is roughly the cost of saving a life reliably and at scale. If you relax either of those conditions, saving a life might be cheaper (e.g. GiveWell sometimes finances opportunities more cost-effective than AMF, or perhaps you're optimistic about some highly leveraged interventions like political advocacy). However, I wouldn't bet that this phenomenon accounts for a significant fraction of the divergence in their answers.

Thanks for the post, Jan! I follow AI Alignment debates only superficially, and while I had heard of the continuity assumption as a big source of disagreement, I didn't have a clear concept of where it stemmed from or what its practical implications were. I think your post does a very good job of grounding the concept and filling those gaps.

These are just the first questions that came to mind, and they may not necessarily overlap with Andreas' interests or knowledge:

  • Given his deontological leanings, is there something he would like to see people in the EA community doing less/more of?
  • What's the paper/line of investigation from GPI that has changed his view on practical priorities for EA the most?
  • How involved in philosophical discussions should the median EA be? (e.g. should we all read Parfit, or just muddle through with what we hear from informal discussions of ethics within the community?)
  • What's the thrust of his argument in "Against Large Number Scepticism"? How would he characterize the way people who feel uncomfortable with arguments resting on large numbers think about the subject?
  • Where does the interest in Decision Theory among EAs come from? Is it because it could have practical implications, or something else entirely? What would change if we had an answer to the top open questions in Decision Theory?

Thank you, Shen, this is wonderful! My local group in Colombia is getting ready to run a fellowship for the second time, and hearing about your experience gave me many ideas for things we might try to improve on.
