Luca Parodi

I am a management consultant with a background in Political Science and Philosophy (Bachelor's degree) and Cognitive Science and Decision Theory (Master's degree). I create rationalist/longtermist/EA-friendly content for my educational project on social media, aimed at the Italian general public (@school_of_thinking on Instagram).

Posts

Sorted by New

Wiki Contributions

Comments

FTX EA Fellowships

I am considering the option of working on an EA-related full-time project, starting in December, partly remotely as a digital nomad, especially in order to meet people from other EA communities, mainly around Europe.

Prioritization Research for Advancing Wisdom and Intelligence

Since I've been interested in these topics for years (and I almost started a PhD at Leiden University on this), I am pondering the possibility of writing something in the same cluster as this post but slightly different — e.g. "The case for cognitive enhancement as a priority cause", a reading list, or something like that.

But before that, I want to briefly tell you my story. I think it could be valuable to this conversation as something like a Minimum Viable Product for what you said here:

"...For example, we could improve at teaching rationality, or we could make progress on online education..."

Since July 2020 I have been running an educational project on Instagram (named @school_of_thinking) with the intention of teaching rationality to the general public (at the moment I have 12,500 followers). Not only rationality, actually, but also critical thinking, decision theory, strategic thinking, epistemology and effective communication. I've been a passive member of the rationalist community for several years, and I decided to start this project in part because in Italy (the project is run entirely in Italian) we lack the critical thinking culture that is present in Anglo-Saxon countries. One example: I haven't found a single serious, up-to-date textbook on critical thinking written in Italian on amazon.it. The project is entirely based on EA values.

I've had constant organic growth, a high and stable engagement rate (between 8% and 15%) and a decent amount of positive, unbiased, detailed feedback. It is all based on some informal pedagogical considerations I have in mind about how to teach things in general. My idea now is to expand this project onto other platforms, to create courses and books, and to start a rationalist podcast.

There is too much to say about my project here, but if anyone wants to ask me questions I am completely open. I also think I will write an entire post about it.

Prioritization Research for Advancing Wisdom and Intelligence

Your initial point reminds me in some sense of Nick Bostrom's orthogonality thesis, but applied to humans. Throughout history, high-IQ individuals have pursued completely different goals, so it's not automatic that improving humanity's intelligence as a whole would guarantee a better future for everyone.

At the same time, I think we can be fairly confident that a higher overall IQ level for humanity would at least enable more individuals to find solutions that minimize the risk of undesirable moral outcomes from the actions of highly intelligent but morally questionable individuals, while also working more efficiently on other, more pressing problems.