Manuel Del Río Rodríguez

Satellite School Head of Studies - Noia (Spain) @ EOI Santiago (Official School of Languages, Santiago)
156 karma · Joined Dec 2022 · Working (6-15 years)


English teacher for adults and teacher trainer, a lover of many things (languages, literature, art, maths, physics, history) and people. Head of studies at the satellite school of Noia, Spain.

How others can help me

I am omnivorous in my interests, but from a work perspective, I am very interested in the confluence of new technologies and education. As for other things that could profit from assistance, I am trying to teach myself undergraduate-level math and to seriously explore and engage with the intellectual and moral foundations of EA.

How I can help others

Reach out to me if you have any questions about Teaching English as a Foreign Language, translation and, generally, anything Humanities-oriented. Also, anything you'd like to know about Spain in general and its northwestern corner, Galicia, in particular.


Just listened to a podcast interview of yours, Geoffrey Miller (Manifold, with Steve Hsu). Do you really believe it is viable to impose a very long pause (you mention 'just a few centuries')? The likelihood of such a thing taking place seems to me extremely remote, at least until we get a pragmatic example of the harm AI can do, a Trinity test of sorts.

Another probably very silly question: in what sense isn't AI alignment just plain inconceivable to begin with? I mean, given the premise that we could and did create a superintelligence many orders of magnitude superior to ourselves, how could it even make sense to have any type of fail-safe mechanism to 'enslave it' to our own values? A priori, it sounds like trying to put shackles on God. We can barely manage to align ourselves as a species.

Wonderful! This will make me feel (slightly) less stupid for asking very basic stuff. I actually had 3 or so in mind, so I might write a couple of comments.

Most pressing: what is the consensus on the tractability of the Alignment problem? Have there been any promising signs of progress? I've mostly just heard Yudkowsky portray the situation in terms so bleak that, even if one were to accept his arguments, the best thing to do would be nothing at all and just enjoy life while it lasts.

Thanks, Martijn. I would like to give it a go, even if I am rather busy with work, reading and studying at the moment.

Thanks, Alex! I quickly checked with the search engine whether there were any ongoing book clubs but didn't find yours.

Just joined the EA Anywhere Slack channel, and might join you for your book club, although I imagine you've already gone through the most obvious first choices.

Thanks for the other links too!

Well, that looks a bit like Twitter-level trolling and a textbook example of 'begging the question', doesn't it? But let me follow the guidelines...

I wouldn't say I am a 'convinced EA', nor do I consider correct the assumption that posting on the forum is a necessary and sufficient condition thereof. I am interested in EA, and feel that some degree of 'effective altruism' in lowercase is probably a valid moral obligation whatever your philosophical stance.

As for the books, I am a bit of a bookworm and appreciate being persuaded by detailed arguments, which I tend to find more in books, and they are less taxing on my eyes. And there are aspects of EA that I probably need to read solid arguments for, as they feel alien to some of my presuppositions (utilitarianism as a moral framework, rights of non-rational and non-moral creatures, etc.).

Hi there, and thanks for the post. I find myself agreeing a lot with what it says, so it has to be said that my biases are probably aligning with it. I am still trying to catch up with the main branches of ethical thought and to give them a fair chance, which I think utilitarianism deserves, even if it instinctively feels 'wrong' to me (by instinct and inclination, I am probably a very Kantian deontologist).

I haven't read enough on the topic yet, but my impression is that my train of belief would indeed be something somewhat like 'a contractualist who wants to maximize utility'.

Thanks a lot for this post. I have found it a superb piece, well worth meditating on, even if I have to say that I am probably biased because, a priori, I am not too inclined towards Utilitarianism in the first place. But I think the point you make is complex and not necessarily directed against consequentialism as such, and it would probably go some way toward accommodating the views of those who find much of it too alien and unpalatable.

Thanks for the advice! I have also discovered the 'block quote' and inserted it too.
