Epistemic status: casual observation written as first draft but published as-is because it was unlikely to ever actually get edited. 

There's a common failure mode I see among EAs, and particularly rationalists, where they fail to translate their thoughts back into plain language when communicating. This post explains what I mean by this. 

Visual Summary

[Diagram made with Excalidraw]

Verbal thought

Many people think verbally. Some people think pictorially or in some other fashion, but most people, most of the time, construct their thoughts as something like an ongoing sentence made up of words. 

For most people, this is a really useful feature! When you want to share things with somebody, the level of 'translation' from your thoughts to language is very low. Everybody I know who struggles with verbal speech thinks in a non-verbal format (which could of course cut both ways causally). 

Less-verbal thought

There are three key problems introduced by rationalism to the usual process above: jargon, the rationalist desire for intense precision, and thinking truly weird thoughts in general.

The first, jargon, is less specific to rationalism/EA: many people think in words which other people don't know. Even when they try to translate their jargon into something more understandable, they underestimate the inferential distance, or they map words poorly (where an equivalent doesn't exist in 'normal English') such that most of the meaning is lost. 

When this is combined with the rationalist desire for extreme precision in speech and thought, it contributes to unpleasant listening and bad communication. If people can get a 1-sentence summary which means 90% the same thing as what you're thinking, this is (usually!) better than people getting an hour-long lecture on your specific subdomain of AI alignment just to understand a peripheral point. It's ok if people don't know your precise epistemics for a passing comment. No really, I promise.

This is worsened further by the fact that a key recommendation for rationalists is 'thinking weird thoughts'. Rationality often involves inspecting the terms you're using, defining them as narrowly as possible, and trying to actually use them in this way. This is good for thought, but bad for communication. If you're planning to use a word a lot, or if your definition varies a lot from the standard one, it's worth defining it, but track this actively! Some of the time it's ok to be misunderstood a little to avoid being unlistened to a lot. 

There's always a relevant XKCD


Remember to interpret

So, if you're a rationalist who thinks very strange thoughts and/or has invented a whole bunch of concepts for which only you have verbal handles, remember to translate your thoughts back into English before saying them aloud. Similarly, if you seem to be miscommunicating about something important, try interpreting again into something closer to your true thoughts. 

Here's a worked example (which is by its nature hard to give, because the 'less-verbal' thought necessarily has to be verbalised to be written down):

Precise, but lengthy: Benzodiazepines are [unlikely to cause physiological withdrawal symptoms if stopped appropriately, compared to other drugs] but probably [cause people to narrow the level of anxiety they find tolerable] and [cause them to become less accustomed to using other coping mechanisms, such that people are practically worse off for a period after stopping benzodiazepines], and this is [likely a major contributor to the creation of guidelines which encourage doctors to limit the length of a course of benzodiazepines].

Less precise, but easier to read: Benzodiazepines are not very physiologically addictive (withdrawal symptoms are rare), but may be psychologically addictive (people get used to how they feel), and this is why doctors don't like to prescribe benzodiazepines.

Imprecise and very brief: Benzodiazepines are not very physiologically addictive, but may be psychologically addictive, so doctors try to limit their use.

Note I'm not claiming any of these is superior to the others! Each has its place, depending on audience and importance. The first one is closest to my thoughts, and is represented in a much more concise way inside my brain, such that it's hard to notice how compressed it is until I attempt to verbalise it. The square brackets in the first example are each roughly a single thing I can point at in my head easily, but which I doubt are that for most readers. 

Conclusion

All speech sits along an axis of thought-like-ness (thoughtiness?). Most of the time, thought-like-ness is a secondary concern to speed and immediate clarity, but not always. 

Actively tracking thought-like-ness, or having it come to mind when you find yourself confused about a disagreement or find yourself being misunderstood, may be a useful skill to improve your ability to communicate well with a wide variety of audiences. 


See also:  excessive nuance, inferential distance, illusion of transparency, disputing definitions, aim low

11 comments

Precise, but lengthy: Benzodiazepines are [unlikely to cause physiological withdrawal symptoms if stopped appropriately, compared to other drugs] but probably [cause people to narrow the level of anxiety they find tolerable] and [cause them to become less accustomed to using other coping mechanisms, such that people are practically worse off for a period after stopping benzodiazepines], and this is [likely a major contributor to the creation of guidelines which encourage doctors to limit the length of a course of benzodiazepines].

Less precise, but easier to read: Benzodiazepines are not very physiologically addictive (withdrawal symptoms are rare), but may be psychologically addictive (people get used to how they feel), and this is why doctors don't like to prescribe benzodiazepines.

Imprecise and very brief: Benzodiazepines are not very physiologically addictive, but may be psychologically addictive, so doctors try to limit their use.

Personally, if someone told me exactly that precise but lengthy thing, I would consider it one of the highest value-per-word things I had ever heard about a drug. It has so many useful gears in it!

On the other hand, if someone told me that a drug is "psychologically but not physiologically addictive" I'd assume that the claim is either total horseshit, or oversimplification to the point of uselessness. It gives me no useful gears, and tells me so little about what the actual gears-level model even is that I'm left unsure whether there's any underlying gears in the model at all.

Also, there's signal value: I generally expect that most people say the "less precise" things most of the time, not for communications' sake, but because they do not have the gears of the more precise thing in their own head. If someone has enough gears in their model to say the precise thing, then that is extremely important information for me to know; I'll update very differently on claims with a lot of gears attached.

Insofar as this example is representative, I would strongly prefer that people just do the precise but lengthy thing, the vast majority of the time.

I strongly agree with johnswentworth's point! I think my most productive discussions have come from a gears-level/first-example style of communication. 

What I'm arguing in this post is very much not that this communication style is bad. I'm arguing that many people will stop listening if you jump straight to this, and you should explicitly track this variable in your head when communicating. 

Obviously 'know your audience and adjust complexity appropriately' is quite a trivial point, but thinking about it with a 'thought-like-ness' frame helps me to actually implement it by asking "how much translating do I need to do for this audience?" 

Maybe I should rewrite the post as "Gears in Conversation" or so.

I guess brook took some time to write down the content of the square brackets, even if the thoughts themselves are clear (because they have a lot of gears that don't map one-to-one onto words). If you tried to say the square-bracket parts in a spoken conversation, it's quite likely you'd stumble and struggle for words, and the result would in practice be worse than option two. At least I notice this problem when I try to do option one on complex topics.

Sometimes, we can include both the expanded and compressed versions - as in this post. In a talk, we can provide digital handouts that expand or compress topics (well-constructed powerpoints do this, although they're often badly made). And in print, we can link to websites, perhaps via QR code, to make it more convenient to go deeper or shallower. These options seem massively underused compared to what would be optimal for efficient learning.

When I take notes, I like to make a multirow table with two columns. On the left, I put 1-2 word bolded summaries of each topic. On the right, I put detailed mechanistic information. I think there's a lot of room for improvement in using the flexibility computers offer in managing text to present summaries at varied levels of detail, permitting users to toggle between them as necessary.

In particular, I'd love to have a "3D text editor." This would give you more options for how to manage text. Some examples would include making it convenient to add various formats of hovertext, "click to expand/summarize" features that let you increase or decrease the complexity of the information presented, and more options for annotations (such as multimedia annotations that can be flexibly linked to individual multiscale chunks of text, but also to things like word groups any time they appear in the text).

This is good for some formats; I think in verbal communication I like to track this because the key variable I'm optimising on is listener attention/time; giving both loses a lot. I find it can be useful to save the gears-level stuff for the cruxes and try to keep the rest brief.

I mostly think the phrase "psychologically addictive" is way less clear than necessary to communicate to me.

I think I would write the paragraph as something vaguely like:

"The physiological withdrawal symptoms of benzodiazepines can be avoided, but often people have a bad time coming off benzodiazepines because they start relying on them over other coping mechanisms. So doctors try to avoid them."

It seems possible to come up with something that is both succinct and actually communicates the gears.

What exactly is "speech of appropriate thought-like-ness"? It sounds from the rest of the article like moderately precise speech, or something along those lines, but not quite. Perhaps fittingly, there seems to be an inferential gap here. Also seconding johnswentworth here: your precise but lengthy take on benzodiazepines is easy to understand and extremely valuable, at least to someone curious about the subject.

I think "speech of appropriate thought-like-ness" is, unfortunately, wildly contextual. I would have predicted that the precise lengthy take would go down well on LW and especially with ACX readers. This specific causal gears-level type of explanation is common and accepted here, but for audiences that aren't expecting it, it can be jarring and derail a discussion. 

Similarly, many audiences are not curious about the subject! Appropriate is the operative word. Sometimes it will be appropriate to gloss over details either because the person is not likely to be interested (and will tune out lengthy sentences about causal models of how doctors behave), or because it's non-central to the discussion at hand. 

For instance, if I was chatting to a friend with a medical (but non-rationalist) background about marijuana legalisation, the lengthy take is probably unwise; benzodiazepines are only peripherally relevant to the discussion, and the gears-level take easily leads us into one of several rabbit holes (Are they actually unlikely to cause withdrawal symptoms? What do you mean by unlikely? Does psychological addiction mean precisely that? Is that why those guidelines exist? Why are you modelling doctors in this way at all, and is that useful? Should I be using gears-level models?).

Any of these questions can lead to a fruitful discussion (especially the last few!), but if you have specific reason to keep discussions on track I would save your gears-explanations for cruxes and similar. 

I mean, what is the concept “speech of appropriate thoughtness”? Perhaps which speech fits that concept is highly contextual, but what is the concept that you are checking that speech against? Your last comment makes it sound like appropriate level of detail; are you simply using thoughtness here as a synonym for detail (perhaps to indicate the fact that nonverbal thoughts are often extremely highly detailed?), or is there an additional subtlety here? If I say “alright, I’ll try to use appropriate levels of detail when communicating”, is your response “good, you understand my point” or “that’s a start, but you’d do better still if you considered X”?

Even more brief: "Benzodiazepines work. That's why doctors don't like to prescribe them."

To make a "precise but lengthy" statement shorter, make it "precise and brief", not "vague and brief". It is like making a line drawing from a photograph, conveying the essentials with pinpoint clarity, rather than just blurring it out.