
by chaosmage

Building on the recent SSC post Why Doctors Think They’re The Best...

| What it feels like for me | How I see others who feel the same |
| --- | --- |
| There is controversy on the subject, but there shouldn't be, because the side I am on is obviously right. | They have taken one side in a debate that is unresolved for good reasons that they are struggling to understand. |
| I have been studying this carefully. | They preferentially seek out confirming evidence. |
| The arguments for my side make obvious sense; they're almost boring. | They're very ready to accept any and all arguments for their side. |
| The arguments for the opposing side are contradictory, superficial, illogical or debunked. | They dismiss arguments for the opposing side at the earliest opportunity. |
| The people on the opposing side believe these arguments mostly because they are uninformed, have not thought about it enough or are being actively misled by people with bad motives. | The flawed way they perceive the opposing side makes them confused about how anyone could be on that side. They resolve that confusion by making strong assumptions that can approach conspiracy theories. |

The scientific term for this mismatch is: confirmation bias

| What it feels like for me | How I see others who feel the same |
| --- | --- |
| My customers/friends/relationships love me, so I am good for them, so I am probably just generally good. | They neglect the customers/friends/relationships that did not love them and have left, so they overestimate how good they are. |
| When customers/friends/relationships switch to me, they tell horror stories of who I'm replacing for them, so I'm better than those. | They don't see the people who are happy with who they have and therefore never become their customers/friends/relationships. |

The scientific term for this mismatch is: selection bias

| What it feels like for me | How I see others who feel the same |
| --- | --- |
| Although I am smart and friendly, people don't listen to me. | Although they are smart and friendly, they are hard to understand. |
| I have a deep understanding of the issue that people are too stupid or too disinterested to come to share. | They are failing to communicate their understanding, or to give unambiguous evidence if they even have it. |
| This lack of being listened to affects several areas of my life, but it is particularly jarring on topics that are very important to me. | This bad communication affects all areas of their life, but on the unimportant ones they don't even understand that others don't understand them. |

The scientific term for this mismatch is: illusion of transparency

| What it feels like for me | How I see others who feel the same |
| --- | --- |
| I knew at the time this would not go as planned. | They did not predict what was going to happen. |
| The plan was bad and we should have known it was bad. | They fail to appreciate how hard prediction is, so the mistake seems more obvious to them than it was. |
| I knew it was bad, I just didn't say it, for good reasons (e.g. out of politeness or too much trust in those who made the bad plan) or because it is not my responsibility or because nobody listens to me anyway. | In order to avoid blame for the seemingly obvious mistake, they are making up excuses. |

The scientific term for this mismatch is: hindsight bias

| What it feels like for me | How I see others who feel the same |
| --- | --- |
| I have a good intuition; even decisions I make based on insufficient information tend to turn out to be right. | They tend to recall their own successes and forget their own failures, leading to an inflated sense of past success. |
| I know early on how well certain projects are going to go or how well I will get along with certain people. | They make self-fulfilling prophecies that directly influence how much effort they put into a project or relationship. |
| Compared to others, I am unusually successful in my decisions. | They evaluate the decisions of others more level-headedly than their own. |
| I am therefore comfortable relying on my quick decisions. | They therefore overestimate the quality of their decisions. |
| This is more true for life decisions that are very important to me. | Yes, this is more true for life decisions that are very important to them. |

The scientific term for this mismatch is: optimism bias

Why this is better than how we usually talk about biases

Communication in abstracts is very hard. (See: Illusion of Transparency: Why No One Understands You) Therefore, it often fails. (See: Explainers Shoot High. Aim Low!) It is hard to even notice communication has failed. (See: Double Illusion of Transparency) Therefore it is hard to appreciate how rarely communication in abstracts actually succeeds.

Rationalists have noticed this. (Example) Scott Alexander uses a lot of concrete examples, and that is probably a major reason why he's our best communicator. Eliezer's Sequences work partly because he uses examples and even fiction to illustrate. But when the rest of us talk about rationality we still mostly talk in abstracts.

For example, this recent video was praised by many for being comparatively approachable. And it does do many things right, such as emphasizing and repeating that evidence alone should not generate probabilities, but should only ever update prior probabilities. But it still spends more than half of its runtime displaying mathematical notation that no more than 3% of the population can even read. For the vast majority of people, only the example it uses can possibly “stick”. Yet the video uses its single example as little more than a means of getting to the abstract explanation.

This is a mistake. I believe a video with three to five vivid examples of how to apply Bayes’ Theorem, preferably funny or sexy ones, would leave a much more lasting impression on most people.
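To practice what I preach, here is a minimal sketch of the kind of concrete example I mean, written as a few lines of Python. The base rate and test accuracies are made-up illustrative numbers, not taken from the video:

```python
# A concrete Bayes' Theorem example: how much should a positive test
# for a rare condition update your belief? (All numbers are illustrative.)

prior = 0.001               # P(condition): 1 in 1,000 people have it
sensitivity = 0.99          # P(positive test | condition)
false_positive_rate = 0.05  # P(positive test | no condition)

# Total probability of seeing a positive test at all
p_positive = sensitivity * prior + false_positive_rate * (1 - prior)

# Bayes' rule: the evidence updates the prior, it does not replace it
posterior = sensitivity * prior / p_positive

print(f"P(condition | positive test) = {posterior:.3f}")
# ~0.019 -- even a 99%-sensitive test leaves you at only about 2%,
# because the prior was so low.
```

The point is the same one the video makes in notation, but a reader who never parses the notation can still walk away with "rare condition plus a decent test is still probably a false positive".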

Our highly demanding style of communication correctly predicts that LessWrongians are, on average, much smarter, much more STEM-educated and much younger than the general population. You have to be that way to even be able to drink the Kool-Aid! This makes us homogeneous, which is probably a big part of what makes LW feel tribal, which is emotionally satisfying. But it leaves most of the world with their bad decisions. We need to be Raising the Sanity Waterline and we can’t do that by continuing to communicate largely in abstracts.

The tables above show one way to do better, which does the following:

  • It aims low: merely to help people notice the flaws in their thinking. It will not, and does not need to, enable readers to write scientific papers on the subject.
  • It reduces biases to mismatches between the Inside View and the Outside View. It lists concrete observations from both views and juxtaposes them.
  • These observations are written in a way that is hopefully general enough for most people to find they match their own experiences.
  • It trusts readers to infer from these juxtaposed observations their own understanding of the phenomena. After all, generalizing over particulars is much easier than integrating generalizations and applying them to particulars. The understanding gained this way will be imprecise, but it has the advantage of actually arriving inside the reader’s mind.
  • It is nearly jargon-free; it only names the biases for the benefit of the small minority who might want to learn more.

What do you think about this? Should we communicate more concretely? If so, should we do it in this way, or what would you do differently?

Would you like to correct these tables? Would you like to propose more analogous observations or other biases?

Thanks to Simon, miniBill and others for helping with the draft of this post.

This work is licensed under a Creative Commons Attribution 4.0 International License.

Comments (6)



I feel I tend towards self-deprecation, which is a bias not considered here, but that is just because I have depression and am abnormal in this respect compared to most (this statement is supposed to be funny in its irony - I apologise). I genuinely identify with a lot of these self-inflating and other-deprecating tendencies, and this was very enlightening, as I will now have in mind to identify them and the extent to which they are flawed.

I believe that the 3Blue1Brown video is a bad example for the point you are trying to make. His videos are made for an audience who already have some kind of background in abstract mathematics, so his communication is in fact comparatively approachable for its target audience.

Overall I agree with this article, but found this example quite ironic.

(The post was not written by Eliezer, but by chaosmage).

I love this table for its relatability. However, I felt lost in the text putting the table into context. Specifically, this sentence:

"But when the rest of us talk about rationality we still mostly talk in abstracts."

What is actually meant by this? Both what is meant by "rationality" in general and what examples should be thought of for "abstractions" are lost on me. Does it apply to everyday talk with friends, and what are abstractions there? Does it apply to truth-seeking communities like this forum, and what abstractions are used here? Does it apply to scientific communication, other than when using formulas instead of text to carry the concept to the audience?

I feel like what it means is that even when you think you're speaking rationally and logically, your words may still be misunderstood. The message may be ambiguous and perceived incorrectly by the receiver. Since communication is so complex, it stresses our need to be conscious of our message, to be concrete (with evidence and examples if applicable), to clarify instead of making assumptions, and to incorporate active listening (attending to tone, body language, and content holistically).

There's a typo: 

They are failing to communicate their understanding, or to give unambiguous evidence they even have it.


 IF they even*
