As a side and personal comment, I don't much like the tendency in EA to link to articles when trying to make a point. Years ago I hung out a bit with Objectivists, both in person and online. Something that frustrated me a lot was that, for every question I asked, they linked (if online) or referred me to (if offline) an article by Ayn Rand or Leonard Peikoff, saying "read this". Instead of linking articles, I think it's much better to try to explain ourselves in our own words.
P.S. I am referring to this passage:
...As a minor point, I want to push back a ti
I totally agree.
In my couple of years of experience as a fully committed EA, I've noticed that IQ signalling is too often valued more than trying to be clear and socially aware. I think EA tends to attract a certain type of person (we know the drill: neurodivergent, high IQ, introverted, socially awkward, upper-class, UK/US born), and that's great if this grouping tendency makes people comfortable being themselves. But the other side of the story is that a communication culture designed to favour a certain kind of person will become unwelc...
Thanks for the comment, Lorenzo. A few scattered responses to what you said, some positive and some critical.
1. On point one, I don't share your confidence at all; quite the opposite. I would like that to be true (i.e., that by asking for clarifications I'll look smarter), but my impression is that at a deep level, untrained by System 2, you will be perceived as the dumb foreigner who doesn't understand, for a long time: let's say for the first few years in which you interact daily with native English speakers. To find out who's right should we try to test ou...
Hi Serena. I am the CEO and co-founder of School of Thinking, a fully EA-aligned media project with 25,000 followers across three social media profiles (YouTube, TikTok, and Instagram) in two languages (Italian and English). Last month we had 700,000 views on the Italian IG profile alone. So I guess we already qualify as EA micro-influencers.
That's exactly what I need right now, since I am about to establish my company here in the UK. Can I contact you privately to ask more questions?
Hi Baptiste! Not even a single connection to the Continental tradition could be found in School of Thinking :D.
Actually, I have always been pretty clear in defining my approach as strongly analytical/Anglo-Saxon. So far, fortunately, I haven't met any particular resistance to it; the feedback has been mainly positive, probably because most of my followers are into STEM or already into analytic philosophy.
Hi Jack. I am really into cognitive enhancement. In 2020 (right before COVID) I did a two-month research period at Bernhard Hommel's cognitive enhancement lab in Leiden. While I was a Cognitive Science student in Milan, I took exams with Roberta Ferrucci and Alberto Priori, two prominent experts on tDCS as a cognitive enhancer. At the last EAGxOxford I spoke with Anders Sandberg about cognitive enhancement as an EA cause area. All this to say that I am interested in what you are doing and that it could be valuable to connect more people who are into "seriou...
I have hang-ups about money in general. For several years after university I lived on about $12k a year (which is low by UK standards, though high by world ones). It's pretty surreal to be able to even consider applying for say 5x this as a salary. It's like going to a fancy restaurant for the first time ("the waiters bring the food to the table?") I just can't shake how surreal this all is.
I grew up in a relatively poor family in Italy. I learned English from scratch at 21. I graduated at 26 (well past the typical age in the UK/US). I was able to survive at ...
It's a good question. The content will not be the same, but the strategy will. The idea is to enable different chapters to create content independently, following common guidelines and best practices (in terms of graphics, tone of voice, etc.).
I am considering working partly remotely, as a digital nomad, on a full-time EA-related project I will start in December, mainly in order to meet people from other EA communities around Europe.
Since I've been interested in these topics for years (I almost started a PhD on this at Leiden University), I am pondering the possibility of writing something in the same cluster as this post but slightly different, e.g. "The case for cognitive enhancement as a priority cause", a reading list, or something like that.
But before that, I want to briefly tell you my story. I think it could be valuable for this conversation as something like a Minimum Viable Product of what you said here:
"...For example, we could improve at teaching ra...
Your initial point reminds me in some sense of Nick Bostrom's orthogonality thesis, but applied to humans. Throughout history, high-IQ individuals have pursued completely different goals, so we can't automatically assume that improving humanity's intelligence as a whole would guarantee a better future for anyone.
At the same time, I think we can be fairly confident that a higher average IQ would at least enable more individuals to find solutions that minimize the risk of undesirable moral outcomes from the actions of highly intelligent but morally questionable individuals, while also working more efficiently on other, more pressing problems.
To be fair, my view on the link-to-articles tendency is not a well-formed opinion, just something I've felt during some online and offline conversations, especially with fellow rationalists who quote a Scott Alexander article or an obscure post from the Sequences when it isn't really needed.
By the way, I also think it's a bad idea to demand extra work from the people you are communicating with, like informally asking them to read a full article instead of explaining your point in plain terms.
Let's put it this way: we can have the pri...