Happy holidays to you too.
I think your comment largely addresses a version of the post that doesn't exist.
In brief:
I don't think I claimed novelty; the post is explicitly about existing concepts that seem obvious once you have them. I even used specific commonly known terms for them.
> Theory of mind, mentalization, cognitive empathy, and perspective taking are, of course, not actually "rare" but are what almost all people are doing almost all the time. The interesting question is what kinds of failures you think are common. The more opinionated you are about this, and the more you diverge from consensus opinions of experts such as psychologists and researchers in social work, the more likely you are to be wrong.
The post gave specific examples of people with the capacity for ToM nonetheless failing to consistently apply it to political outgroups, foreign adversaries, story characters, etc. Also, the specific wording I wrote was:
> The core idea is very simple: treat other agents as real. It sounds banal, until you realize how rare it can be, and how frequently people mess up.
You harp on the word "rare" but miss the surrounding context. You consistently make technically true but irrelevant points.
> so if the point is to understand world hunger or global poverty, it would be a better idea to just read an introductory text on international development than to think further about how the concept of net present value might or might not shed new light on global poverty.
Are you seriously implying that it takes less effort to read an entire textbook on development economics than to write a paragraph on a related question? Besides, that wasn't the point of the post anyway, which was more like "here's a specific conceptual error people make; NPV dissolves it."
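(For reference, and not necessarily the exact framing the original post used: net present value is just the discounted sum of a stream of cash flows,

$$\mathrm{NPV} = \sum_{t=0}^{T} \frac{CF_t}{(1+r)^t},$$

where $CF_t$ is the cash flow at time $t$ and $r$ is the discount rate. The conceptual point is that costs and benefits arriving at different times aren't directly comparable without discounting.)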
> I don't think anybody disagrees that ideas matter. I would say everyone agrees with that.
This blog post initially grew out of a conversation with a popular blogger about whether ideas actually matter. It's also commonly believed in Silicon Valley that ideas are almost irrelevant compared to execution.
> I personally don't find any value in Grice's maxims.
Clearly.
I had the same initial reaction! I'd guess others would have the same misreading too, so it's worth rewriting. fyi @Yulia Chekhovska
For Inkhaven, I wrote 30 posts in 30 days. Most of them are not particularly related to EA, though a few are. I recently wrote some reflections. @Vasco Grilo🔸 thought it might be a good idea to share them on the EA Forum; I don't want to be too self-promotional, so I'm splitting the difference and posting just a shortform link here:
https://linch.substack.com/p/30-posts-in-30-days
The most EA-relevant posts are probably
https://inchpin.substack.com/p/skip-phase-3
https://inchpin.substack.com/p/aging-has-no-root-cause
https://inchpin.substack.com/p/legible-ai-safety-problems-that-dont
There are a number of implicit concepts I have in my head that seem so obvious that I don't even bother verbalizing them. At least, until it's brought to my attention that other people don't share these concepts.
None of them felt like a big revelation at the time I learned them, just formalizations of something extremely obvious. And yet other people don't share those intuitions, so perhaps they're pretty non-obvious in reality.
Here’s a short, non-exhaustive list:
If you haven't heard of some of these ideas before, I highly recommend looking them up! Most *likely*, they will seem obvious to you. You might already know those concepts by a different name, or they may already be integrated into your worldview without a definitive name.
However, many people appear to lack some of these concepts, and it’s possible you’re one of them.
As a test: for every idea in the list above, can you think of a nontrivial real example of an intellectual disagreement where one or both parties likely failed to model the concept? If not, you might be missing something about that idea!
My overall objection/argument is that you appear to selectively present data points that support one side, and selectively dismiss data points that support the opposite view. This makes your bottom-line conclusion pretty suspicious.
I also think the rationalist community overreached, and their epistemics and speed in early COVID were worse than those of, say, internet people, government officials, and perhaps even the general public in Taiwan. But I don't think the case for them being slower than Western officials or the general public in either the US or Europe is credible, and your evidence here does not update me much.
See, e.g., traviswfisher's prediction on Jan 24:
https://x.com/metaculus/status/1248966351508692992
Or this post on this very forum from Jan 26:
I wrote this comment on Jan 27, indicating that it wasn't just a few people who were worried at the time. I think most "normal" people weren't tracking COVID in January.
I think the thing to realize, and that people easily forget, is that everything was really confusing and there was just a ton of contentious debate during the early months. So while there was apparently a fairly alarmed NYT report in early Feb, there were also many other reports in February that were less alarmed, many bad forecasts, etc.
I wrote a short intro to stealth (the radar evasion kind). I was irritated by how bad existing online introductions are, so I wrote my own!
I'm not going to pretend it has direct EA implications. But one thing I've updated towards in the last few years is how surprisingly limited and inefficient the information environment is. Obvious concepts known to humanity for decades or centuries don't have clear explanations online; obvious and very important trends have very few people drawing attention to them; you can just write the best book review of a popular book that's been around for decades; etc.
I suppose one obvious explanation here is that most people who can write stuff like this have more important and/or interesting things to do with their time. Which is true, but also kind of sad.
Thanks, the feeling is mutual.