Subtitle: Costly virtue signaling is an irreplaceable source of empirical information about character.

The following is cross-posted from my blog, which is written for a more general audience: https://hollyelmore.substack.com/p/acceptance-traps?s=w


We all hate virtue signaling, right? Even “virtue” itself has taken on a negative connotation. When we’re too preoccupied with how we appear to others, or even too preoccupied with being virtuous, it makes us inflexible and puts us out of touch with our real values and goals.

But I believe the pendulum has swung too far against virtue signaling. A quality virtue signal shows that a person follows through with their best understanding of the right thing to do, and is still one of the only insights we have into others’ characters and our own. I don’t care to defend empty “cheap talk” signals, but the best virtue signals offer some proof of their claim by being difficult to fake. Maybe, like being vegan, they take a great deal of forethought and awareness and require regular social sacrifices. Being vegan proves dedication to a cause like animal rights or environmentalism in proportion to the level of sacrifice required. The virtuous sacrifice of being vegan isn’t what makes veganism good for the animals or the environment, but it is a costly signal of the character traits associated with the ability to make such a sacrifice. So the virtue signal of veganism doesn’t mean you are necessarily having a positive impact or that veganism is the best choice, but it does show that you as a person are committed, conscientious, gentle, or deeply bought into the cause such that the sacrifice becomes easier for you than it would be for other people. It shows character and a commitment to acting out your values. Out of your commitment to doing the most good possible, you may notice that you start to think veganism isn’t actually the best way to help animals for a lot of people.1 I believe this represents a step forward for helping animals, but one problem is that it is now much easier to hide a lack of virtuous character traits from measurement.2 It’s harder to know where the lines are or how to track the character of the people you may one day have to decide whether to trust, it’s harder to support virtuous norms that make it easier for the community to act out its values, and it’s harder to be accountable to yourself.

Many will think that it is good when a person stops virtue signaling, or that ostentatiously refusing to virtue signal is a greater sign of virtue. But is it really better when we stop offering others proof of positive qualities that are otherwise hard to directly assess? Is it better to give others no reason to trust us? Virtue signals are a proxy for what actually matters— what we are likely to do and the goals that are likely to guide our behavior in the future. There is much fear about goodharting (when you take the proxy measure as an end in itself, rather than the thing it was imperfectly measuring) and losing track of what really matters, but we cannot throw out the baby with the bathwater. All measures are proxy measures, and using proxies is the only way to ask empirical questions. Goodharting is always a risk when you measure things, but that doesn't mean we shouldn't try to measure character.

The cost of virtue signals can be high, and sometimes not worth it, but I submit that most people undervalue quality virtue signals. Imagine if Nabisco took the stance that it didn’t have anything to prove about the safety and quality of its food, and that food safety testing is just a virtue signal that wastes a bunch of product. They could be sincere, and somehow keep product quality and safety acceptably high, but they would be taking away your way of knowing that. Quality control is a huge part of what it is to sell food, and monitoring your adherence to your values should be a huge part of your process of having a positive impact on the world.

Virtue signaling is bad when signaling virtue is confused with possessing virtue, or when possessing the signal of virtue is confused with having the desired effect upon the world. It is at its worst when all your energy goes to signaling virtue at the expense of improving the world. But signals of virtue, especially costly signals that are difficult to fake, are very useful tools. Even if I don’t agree with someone else’s principles, I trust them more when I see they are committed to living by the principles they believe in, and I trust them even more if they pay an ongoing tithe in time or effort or money that forces them to be very clear about their values. I also think that person should trust themselves more if they have a track record of good virtue signals. Trust, but verify.

The most common objections to the first version of this post were not actually objections to virtue signals per se, I claim, but disagreements about what signals are virtuous. My support of virtue signals requires some Theory of Mind— a quality virtue signal demonstrates character given that person’s beliefs about what is good. Say a person virtue signals mainly as a signal of group membership— I may still judge that to show positive character traits if they believe that taking cues from the group and repping the group are good. If someone uses “virtue signals” cynically to manipulate others, I do not think they have virtuous character. Might an unvirtuous person be able to fool me with their fake virtue signals? Sure, but that will be a lot harder than emitting a genuine virtue signal. Signals don’t have to be 100% reliable to be useful evidence.

Why care about virtue signals? Why not just look at what people do? Because we need to make educated guesses about cooperating with people in the future, especially ourselves. “Virtue” or “character” are names we give to our models of other people, and those models give us predictions about how they will act across a range of anticipated and unanticipated situations. (In our own case, watching our virtue metrics can be not only a way to assess whether we are falling into motivated reasoning or untrustworthiness, but also the metric we use to help us improve and become more aligned with our values.) Sometimes you can just look at results instead of evaluating the character of the people involved, but sometimes a person’s virtue is all we have to go on.

Take the lofty business of saving the world. It’s important to be sure that you are really trying to help the world and, for example, not just doing what makes you feel good about yourself or allows you to see the world in a way you like. Sometimes, we can track the impact of our actions and interventions well, and so it doesn’t matter if the people who implement them are virtuous or not as long as the job is getting done. But for the biggest scores, like steering the course of the longterm future, we’re operating in the dark. Someone can sketch out their logic for how a longtermist intervention should work, but there are thousands of judgment calls they will have to make, a million empirical unknowns as to how the plan will unfold over the years, and, if any of us somehow live long enough to see the result, it will be far too late to do anything about it. Beyond evaluating the idea itself, the only insight most of us realistically have into the likelihood of this plan’s success is the virtue of the person executing it. Indeed, if the person executing the plan doesn’t have any more insight into his own murky depths than the untested stories he tells, he probably just has blind confidence.

Quality virtue signals are better than nothing. We should not allow ourselves to be lulled into the false safety of dwelling in a place of moral ambiguities that doesn’t permit real measurements. It’s not good to goodhart, but we also can’t be afraid of approximation when that’s the best we have. Judging virtue and character gives us approximations that feed into our complex, proprietary models of other human beings, developed over millennia of evolution, and we need to avail ourselves of that information where little else is available.

I urge you to do the prosocial thing and develop and adopt more legible and meaningful virtue signals— for others and especially for yourself.

(This post was edited after publication, which is a common practice for me. See standard disclaimer.)

1. I’m not taking a position here. In fact, I think a mixed strategy with at least some people pushing the no-animals-as-food norm and others reducing animal consumption in various ways is best for the animals. At the time of writing I am in a moral trade that involves me eating dairy, i.e. no longer being vegan, and the loss of the clean virtue signal was one of the things that prompted me to write this post.

2. I discussed this example in depth with Jacob Peacock, which partly inspired the post.

Comments

I don’t care to defend empty “cheap talk” signals, but the best virtue signals offer some proof of their claim by being difficult to fake.

"Cheap talk" isn't the only kind of virtue signaling that can go bad. During the Cultural Revolution, "cheap talk" would have been to chant slogans with everyone else, whereas "real" virtue signals (that are difficult to fake) would have been to physically pick up a knife and stab the "reactionary" professor, or pick up a gun and shoot at the "enemies of the revolution" (e.g., other factions of Red Guards).

To me, the biggest problem with virtue signaling is the ever-present possibility of the underlying social dynamics (that drive people to virtue signal) spiraling out of control and causing more harm than good, sometimes horrendous amounts of harm as in cases like the Cultural Revolution. At the same time, I have to acknowledge (like I did in this post) that without virtue signaling, civilization itself probably wouldn't exist. I think ideally we'd study and succeed in understanding the dynamics and then use that understanding to keep an optimal balance where we can take advantage of the positive side effects of virtue signaling while keeping the harmful ones at bay.

Thanks for the post. However, I find it weird that this actually has to be written down and made explicit.

(or perhaps I spent too much time thinking about credit scoring)

I think this issue probably reflects the biggest cleavage between the rationalist community and the effective altruist community, to the extent the groups can really be separated. From a rationalist point of view, the truth is the most important thing, so virtue signaling is bad because it's (suspected to be) dishonest. From an EA point of view, doing the most good is the most important thing, so socially-motivated virtue signaling is defensible if it consequentially results in more good.

Obviously this is an extreme oversimplification and I'm sure there are people in both communities who wouldn't agree with the positions I've assigned them, but I would guess that as a general heuristic it is more accurate than not.

lol, see the version of this on less wrong to have your characterization of the rationalist community confirmed: https://www.lesswrong.com/posts/hpebyswwhiSA4u25A/virtue-signaling-is-sometimes-the-best-or-the-only-metric-we

 

From an EA point of view, doing the most good is the most important thing, so socially-motivated virtue signaling is defensible if it consequentially results in more good.

EAs may be more likely to think this, but this is not what I'm saying. I'm saying there is real information value in signals of genuine virtue and we can't afford to leave that information on the table.  I think it's prosocial to monitor your own virtue and offer proof of trustworthiness (and other specific virtues) to others, not because fake signals somehow add up to good social consequences, but because it helps people to be more virtuous. 

Rationalists are erring so far in the direction of avoiding false or manipulative signals that they are operating in the dark, when at the same time they are advocating more and more opaque and uncertain ways to have impact. I think that by ignoring virtue and rejecting virtue signals, rationalists are not treating the truth as "the most important thing". (In fact I think this whole orientation is a meta-virtue-signal that they don't need validation and they don't conform-- which is a real virtue, but I think is getting in the way of more important info.) It's contradicting our values of truth and evidence-seeking not to get what information we can about character, at least our own characters.

I just want to reiterate, I am not advocating doing something insincere for social benefit. I'm advocating getting and giving real data about character.

From a rationalist point of view, the truth is the most important thing, so virtue signaling is bad because it's (suspected to be) dishonest

It's a good way of framing it (if by "rationalist" you mean something like the average member of LW). I think the problem with this description is that we emphasize the need to be aware of one's own biases so much that we picture ourselves as "lonely reasoners" - neglecting, e.g., the frequent need to communicate that one is something like a reliable cooperator.

Yeah. Another piece of this that I didn't fully articulate before is that I think the "honesty" of virtue signaling is very often hard to pin down. I get why people have a visceral and negative reaction to virtue signaling when it's cynically and transparently being used as a justification or distraction for doing things that are not virtuous at all, and it's not hard to find examples of people doing this in practice. Even in that scenario, though, I think it's a mistake to focus on the virtue signaling itself rather than the not-virtuous actions/intentions as the main problem. Like, if you have an agent with few or no moral boundaries who wants to do a selfish thing, why should we be surprised that they're willing to be manipulative in the course of doing that?

I think cases like these are pretty exceptional though, as are cases when someone is using virtue signaling to express profound and stable convictions. I suspect it's much more often the case that virtue signaling occupies a sort of ambiguous space where it might not be completely authentic but does at least partly reflect some aspiration towards goodness, on the part of either the person doing it or the community they're a part of, that is authentic. And I think that aspiration is really important on a community level, or at least any community that I'd want to be a part of, and virtue signaling in practice plays an important role in keeping that alive.

Anyway, since "virtue" is in the eye of the beholder, it would be pretty easy to say that rationalists define "truth-seeking" as virtue and that there's a whole lot of virtue-signaling on LessWrong around that (see: epistemic status disclaimers, "I'm surprised to hear you say that," "I'd be happy to accept a bet on this at x:y odds," etc.)

Even in that scenario, though, I think it's a mistake to focus on the virtue signaling itself rather than the not-virtuous actions/intentions as the main problem. Like, if you have an agent with few or no moral boundaries who wants to do a selfish thing, why should we be surprised that they're willing to be manipulative in the course of doing that?

If you think of virtue signalling as a really important coordination mechanism, then abusing that system is additionally very bad on top of the object-level bad thing.

Excellent piece. I think I can only agree with this.

I agree somewhat, but I think this represents a real difference between rationalist communities like LessWrong and the EA community. Rationalist communities like LessWrong focus on truth; Effective Altruism is focused on goodness. Quite different goals when we get down to it.

While Effective Altruism uses a lot more facts than most moral communities, it is a community focused on morality, and its lens is essentially "weak utilitarianism." They don't accept the strongest conclusions of utilitarianism, but there are no "absolute dos or don'ts," unlike for deontologists.

The best example is: what if P=NP were proven true? It hasn't been, but I will use it as an illustration of the difference between rationalists and EAs. Rationalists would publish it for the world, focusing on the truth. EAs would not, because one of the problems we'd be able to solve efficiently is breaking encryption. Essentially this would deal a death blow to any sort of security on computers. It's a hacker's paradise. They would focus on how bad an information hazard it would be, and for the good of the world, they wouldn't publish it.

So what are all those words for? To illustrate the difference in point of view between rationalists like those on LessWrong and EAs on the question of prioritizing truth vs. goodness.

I disagree-- Rationalists (well, wherever you want to put Bostrom) invented the term infohazard. See Scott Alexander on the Virtue of Silence. They take the risks of information as power very seriously, and if knowledge of P equaling NP posed a threat to lots of beings and they thought the best thing was to suppress it, they would do it. In my experience, both EAs and rationalists are very respectful of the need for discretion.

I think I see the distinction you're making and I think the general idea is correct, but this specific example is wrong.
