How I've been using it:
If you're not feeling sad about some tradeoffs/facts about the world (or if you notice that someone else doesn't seem to be), then you might not be tracking something important (you might be biased, etc.). The “missing mood” is a signal.
Note: I’m sharing this short post with some thoughts to hear disagreements, get other examples, and add nuance to my understanding of what’s going on. I might not be able to respond to all comments.
1. Immigration restrictions
An example from the linked essay: immigration restrictions are sometimes justified. But "the reasonable restrictionist mood is anguish that a tremendous opportunity to enrich mankind and end poverty must go to waste." You might think that restricting immigration is sometimes the lesser evil, but if you don't have this mood, you're probably just ~xenophobic.
2. Long content
The example from Ben — a simplified sketch of our conversation:
- Me: How seriously do you hold your belief that “more people should have short attention spans”? And that long content is bad?
- Ben: I think I mostly just mean that there’s a missing mood: it’s ok to create long content, but you should be sad that you’re failing to communicate those ideas more concisely. I don’t think people are. (And content consumers should signal that they’d prefer shorter content.)
(Related: Distillation and research debt, apparently Ben had written a shortform about this a year ago, and Using the “executive summary” style: writing that respects your reader’s time)
3-6. Selective spaces, transparency, cause prioritization, and slowing AI
I had been trying to (re)invent the phrase for situations like the following, where I want to see people acknowledging tradeoffs:
- Some spaces and events have restricted access. I think this is the right decision in many cases. But we should notice that it's sad to reject people from things, and there are negative effects from the fact that some people/groups can make those decisions.
- I want some groups of people to be more transparent and more widely accountable (and I frequently want to prioritize transparency-motivated projects on my team, and am sad when we drop them). In some cases, it's just true that I think transparency (or accountability) is more valuable than the other person does. But as I learn more about or start getting involved in any given situation, I usually notice that there are real tradeoffs; transparency has costs like time, risks, etc. There are two ways missing moods pop up in this case:
- When I'm just ~rallying for transparency, I'm missing a mood of "yes, it's costly in many ways, and it's awful that prioritizing transparency might mean that some good things don’t happen, but I still want more of it." If I don't have this mood, I might be biased by a vibe of “transparency good.” When I start thinking more about the tradeoffs, I sometimes entirely change my opinion to agree with the prioritization of whoever it is I’m disagreeing with. Alternatively, my position becomes closer to: "Ok, I don't really know what tradeoffs you're making, and you might be making the right ones. I'm sad that you don't seem to be valuing transparency that much. Or I just wish that you were transparent — I don't actually know how much you're valuing transparency."
- The people I’m disagreeing with might also be missing a mood. They might just not care about transparency or acknowledge its benefits. There’s a big difference (to me) between someone deciding not to prioritize transparency because the costs are too high and someone not valuing it at all, and if I’m not sensing the mood, it might be the latter. (This is especially true if I don’t have a lot of trust in or familiarity with them and their thinking.) (Or an alternative framing: if I’m not sad about not prioritizing transparency when I decide not to go for it, I should worry that my mindset has turned into something like “why are people griping about transparency — this is my business.”)
- Cause prioritization. (If you're working on civilizational resiliency and you're not feeling at least a bit sad about the fact that you can't use that time to help people struggling today, then your reasons might not be what you think.)
- Slowing down AI — I really appreciated this recent post.
(H/t @Ben_West for using it in a way that made me actually pay attention to it as a useful phrase.)
This is my attempt at a sketch of this phrase, but I might actually be misusing it. Please feel free to clarify or disagree. I think I'm focusing on a narrow use case in this post; broader uses haven't properly clicked for me.
I appreciated Ruby's comment here: "I don't feel great about being the one to decide whether or not a person's post or comment or self belongs on LessWrong. I will make mistakes. But also – tradeoffs – I don't want LessWrong to get massively diluted because I wasn't willing to reject enough people."
(And sometimes it's the opposite.)
Trust/familiarity lets you have conversations that are higher context; you know that the person you’re talking to shares a lot of your values. (Beware inferential distances and illusions of transparency, though — I think it can be useful to make things explicit even when you think they might be obvious.)
In fact, when there’s some expectation of mutual trust, explicitly caveating or flagging tradeoffs might have a negative effect, too; it can make you appear defensive in a way that signals that you don’t expect the other person to trust you enough to know that you care about the relevant tradeoff. (Imagine my brother and I had an exchange where I said that I might not visit my family for my mom’s birthday, and I really stressed the fact that I care about my mom and wanted to see her. I expect that my brother would be confused that I was belaboring that point.) H/t @Clifford for this caveat.