Written by an anonymous LW user.
This is part of LessWrong for EA, a LessWrong repost & low-commitment discussion group (inspired by this comment). Each week I will revive a highly upvoted, EA-relevant post from the LessWrong Archives, more or less at random.
Excerpt from the post:
Last summer I was talking to my sister about something. I don't remember the details, but I invoked the concept of "truth", or "reality" or some such. She immediately spit out a cached reply along the lines of "But how can you really say what's true?".
Of course I'd learned some great replies to that sort of question right here on LW, so I did my best to sort her out, but everything I said invoked more confused slogans and cached thoughts. I realized the battle was lost. Worse, I realized she'd stopped thinking. Later, I realized I'd stopped thinking too.
I went away and formulated the concept of a "Philosophical Landmine".
I used to occasionally remark that if you care about what happens, you should think about what will happen as a result of possible actions. This is basically a slam dunk in everyday practical rationality, except that I would sometimes describe it as "consequentialism".
The predictable consequence of this sort of statement is that someone starts going off about hospitals and terrorists and organs and moral philosophy and consent and rights and so on. This may be controversial, but I would say that causing this tangent constitutes a failure to communicate the point. Instead of prompting someone to think, I invoked some irrelevant philosophical cruft. The discussion is now about Consequentialism, the Capitalized Moral Theory, instead of the simple idea of thinking through consequences as an everyday heuristic.
It's not even that my statement relied on a misused term or something; it's that an unimportant choice of terminology dragged the whole conversation in an irrelevant and useless direction.
That is, "consequentialism" was a Philosophical Landmine. (Full Post on LW)
Please feel free to:
- Discuss in the comments
- Subscribe to the LessWrong for EA tag to be notified of future posts
- Tag other LessWrong reposts with LessWrong for EA
- Recommend additional posts
As someone interested in messaging, I liked this! Carefully choosing one's words - and being aware of how someone might perceive a word you use, like "consequentialism" or "moral" - can be important in ensuring a conversation goes well.