
Meta: This is a draft of a paid book review commissioned by CEA. One purpose of the commission is to serve as a potential starting point for discussion on the new EA Forum.

The book being reviewed is Jonathan Haidt's The Righteous Mind, released in 2012. The review is split into two parts. The first part summarizes the book, and the second offers commentary.

Feedback for this draft is welcome, about either form or content.

Summary

In one sentence:

You might think that your moral judgments are reasoned, but they are really driven by the “elephant” of your moral intuitions, with reasons coming afterwards.

In one paragraph:

Haidt makes three claims, using three metaphors:

  1. The Elephant and the Rider: It feels like reason (the rider) is driving our moral judgments, but in fact, our reactions to moral questions seem to be determined by intuitive judgments (the elephant). Reason is mainly used to justify and support those intuitive judgments.
  2. Moral Taste Receptors: Haidt sets out 5 moral tastes (Care, Fairness, Loyalty, Authority, Sanctity). He claims that different people and groups have different “taste receptors”: for instance, that liberals care mostly about Care and Fairness, while conservatives have a more even split between all 5 taste receptors.
  3. 90% Chimp, 10% Bee: We are mostly individualistic chimps, but we also have some capacity to absorb ourselves into a collective project, or “hive”. Haidt claims that this capacity allows us to cooperate and avoid free rider problems. But our cooperative tendencies are often limited to some parochial hive, rather than the collective good: Haidt thinks that the best we can aspire to may be “parochial altruism” toward those we are close to, rather than universal cooperation.

In one page:

The Righteous Mind is organized into three parts.

In the first part, Haidt discusses the development of moral reasoning. On Haidt’s account, humans build morality from a tangle of intuition, rather than consciously assembling a moral edifice. Though parents play some role in teaching and reinforcing behaviors, children largely create their moralities in play with other children (p. 6), as they try things out and discover which conventions are enforced and how. Thus, children’s morality is influenced by what is allowed in the culture they grow up in. However, this influence is not absolute: some boundaries seem to be universal and innate. For example, children tend to think that harming people is wrong, even if the harm is allowed (p. 10).

On Haidt’s view, morality is partly cultural and partly innate, but above all intuitive; we figure out instantly if something seems morally off, then seek justification. In his lab, Haidt would start by asking subjects why incest is wrong. Then he would shift the ground of the question to undermine the reasons they gave: if they said that siblings shouldn’t have sex because of the risk of genetic abnormalities, he would ask whether it would still be wrong if the siblings were infertile. If they worried about power dynamics, he would ask them to assume that both siblings were enthusiastically consenting. But as subjects gradually ran out of reasons, their faith in incest’s immorality was usually not shaken. Intuition, rather than reason, was determining what they thought.

According to Haidt’s results, once the “elephant” of intuition has chosen its path, it is very difficult for the “rider” of rationality to change its course (p. 40). In other words, undermining someone’s explicit reasoning is very unlikely to change their mind on moral matters.

In the book’s second part, Haidt samples moral variety. He describes a taxonomy of “moral foundations”, which he likens to different sorts of taste receptors on the tongue. Different people, cultures, and political coalitions have different moral tastes, using some receptors more than others. Which of these flavors are more appealing to liberals, and which to conservatives?

  1. Care/harm: it is bad to cause harm, and good to take care of one’s charges, especially children (p. 131-134).
  2. Fairness/cheating: it is bad to lie, cheat, and exploit others, and good to make and honor mutually beneficial agreements (p. 134-138).
  3. Loyalty/betrayal: it is good to respect and positively view one’s own tribe/team, and not betray its interests to favor oneself or outgroups (p. 138-141).
  4. Authority/subversion: it is good to respect the social order, and bad to be insubordinate to justified authorities (p. 142-145).
  5. Sanctity/degradation: it is good to perform purifying rituals to avoid contaminants both physical and abstract, and bad to allow contamination to set in (p. 146-153).

The first two foundations are savored by liberals, to the (partial) exclusion of the other three. Conservatives, by contrast, tend to experience all five.

In a later chapter, Haidt also proposes a sixth, more speculative foundation: liberty/oppression. On his view, this foundation explains the moral fervor of revolutionaries, as well as the extreme preference for freedom shared by libertarians; it likely emerged as a coordination mechanism that let early humans take down bullies and keep them from using superior physical strength to dominate their tribes (p. 171-172). Counting this foundation as shared by liberals and conservatives, liberals are sensitive to three moral tastes, while conservatives respond to all six.
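To make the taxonomy concrete, here is a toy sketch of moral-foundation profiles represented as weight vectors. The foundation names follow Haidt, but the numeric weights and the `moral_salience` helper are invented for illustration; they are not taken from the book or its survey instruments.

```python
# Toy representation of moral-foundation profiles as weight vectors.
# Foundation names follow Haidt; the weights are invented for
# illustration, not taken from the book's survey data.

FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "sanctity", "liberty"]

# Hypothetical profiles: the liberal profile weights care, fairness,
# and liberty heavily; the conservative profile spreads weight evenly.
profiles = {
    "liberal":      {"care": 0.9, "fairness": 0.8, "loyalty": 0.2,
                     "authority": 0.1, "sanctity": 0.1, "liberty": 0.7},
    "conservative": {"care": 0.6, "fairness": 0.6, "loyalty": 0.6,
                     "authority": 0.6, "sanctity": 0.6, "liberty": 0.6},
}

def moral_salience(profile: dict[str, float], violations: dict[str, float]) -> float:
    """Weighted sum: how strongly a profile reacts to a described violation."""
    return sum(profile[f] * violations.get(f, 0.0) for f in FOUNDATIONS)

# A victimless sanctity violation registers far more strongly for the
# conservative profile than for the liberal one.
violation = {"sanctity": 1.0}
for name, profile in profiles.items():
    print(name, round(moral_salience(profile, violation), 2))
```

On this toy model, the same scenario produces very different reactions depending on which receptors a profile weights, which is the book's central claim about political disagreement.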

Not only do liberals treat a smaller set of behaviors as morally salient; in some cases they focus on just one favored taste. Haidt observes that in WEIRD cultures - Western, Educated, Industrialized, Rich, and Democratic ones - people tend to provide moral reasons exclusively in terms of care/harm: if something is intuitively bad but victimless, people in more individualistic WEIRD cultures may be forced to admit that nothing immoral happened (or else invent a victim). In less WEIRD cultures, it is more common for something to be declared wrong because it violates sanctity norms: morality is more than wellbeing (p. 95-98).

Haidt acknowledges that his grouping is just one way to organize our fluid moral impulses - these five tastes (later six) became salient to him during his research, but there might be other ways to understand moral flavors.

In the book’s final part, Haidt describes human beings as “90% chimp and 10% bee”. We are partly selfish, but under the right circumstances, we can melt into group identities and feel like a mere part of a greater whole. He believes that this hive capacity is largely what allows humans to devote themselves to the group and avoid free rider problems (p. 244-245). Haidt sees this ability to melt into group identities as a way to access transcendent happiness. At the same time, it makes us more parochial: we find it hard to be selfless toward the world in general, so instead we devote ourselves to some smaller group. Haidt believes that parochial altruism, rather than universal love, may be all we can aspire to (p. 244-245).

Commentary

Haidt’s claims about moral foundations theory are supported by his own research, as are his claims about how people respond to moral ideas intuitively first and rationally second. Many of his other claims, however, are more like grand narratives or surveys of historical thought. Haidt defends ideas as broad as group selection and as old as Glaucon’s case for appearances over moral substance in Plato’s Republic. His grand narratives are difficult to falsify, though it is possible to question their backing.

Given psychology’s replication crisis, and that The Righteous Mind was released in 2012, it is reasonable to check whether the book rests on shaky ground. The case is mixed: moral foundations theory has apparently replicated, while part two of this excellent review by Zach Jacobi suggests that the evidence for intuitionism is more equivocal. On the whole, The Righteous Mind does not seem to have fared too badly. Assuming there is something to Haidt’s combination of historical narrative, modern research survey, and original models, how can we respond to it?

Many people feel that they and their friends are mostly rational, while their ideological opponents are irrational. But doing the most good possible requires understanding the world, and that means being able to confront your own irrationality. Scope insensitivity, for instance, leads people to place a similar value on saving 2,000 birds and saving 20,000 birds, even though the latter saves ten times as many and so is plausibly about ten times as valuable. To save more birds, or to solve any of the problems we face effectively, we need to build models rather than being led only by our emotional reactions. Effective altruism is about trying to build a more rational approach to doing good, which requires fighting our biases.
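For a minimal illustration of what scope sensitivity demands, the sketch below compares flat, intuition-style valuations against a valuation that scales linearly with birds saved. The dollar figures are invented for illustration; only the shape of the comparison matters.

```python
# Scope insensitivity in miniature. The stated valuations are invented;
# the point is that they stay roughly flat while the stakes grow 100x.
birds_saved = [2_000, 20_000, 200_000]
stated_value = [80, 78, 88]  # roughly flat: intuition ignores scale

# A scope-sensitive valuation scales linearly with the number saved.
per_bird = stated_value[0] / birds_saved[0]
linear_value = [n * per_bird for n in birds_saved]

for n, stated, linear in zip(birds_saved, stated_value, linear_value):
    print(f"{n:>7} birds: stated ${stated:>3}, scope-sensitive ${linear:>6.0f}")
```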

But can we even fight our biases? The Righteous Mind makes a quite concerning claim: we would like to think that human beings are basically rational but prone to biases. On Haidt’s account, however, we are basically intuitive and use rationality only to justify our intuitions. So on Haidt’s model, we still can’t trust someone who is aware of many biases and statistical methods - we’d expect them to make isolated demands for rigor, create models that are unconsciously weighted to produce their desired result, and elevate the sorts of things their ingroups do to a special moral status.

This view of the world - that elephants rule and riders can only passively assist them - is pretty bleak, and itself unintuitive. Most people can remember changing their minds, and even simple tasks like budgeting or dieting involve using long-term planning to overcome intuitive desires (like buying whatever you want, or eating lots of sugary food). We won’t get anywhere seeing ourselves as helpless and unable to be reasonable at all. And besides, given that arguments do change our minds on some things sometimes, the idea that riders are powerless isn’t strictly true. So what can we do?

There are a few ways to respond to Haidt here, for someone interested in optimizing for metrics other than intuitive pull:

  1. Take intuitionism seriously, and work to compromise with and accommodate our intuitive apparatuses. This would look like taking time to figure out intuitive wants, offering small rewards for otherwise unappealing good habits (like eating a piece of candy every time after going to the gym), and being willing to take otherwise suboptimal courses of action for the sake of satisfying basic drives. Paul Christiano discusses this approach, among other things, in an excellent blog post.
  2. Try to use rationality to overcome irrationality, and find ways to correct for bias that co-opt intuition. For example, make reasonably high-stakes bets on claims (see the sketch after this list). If these bets are incorrect, then the material loss will provide negative feedback for the elephant, which creates an incentive for accuracy (rather than, for example, accidentally holding beliefs that are likely to make you look good).
  3. Respect the elephant. Even if we work hard to become more rational, we can be extra careful about the sorts of things that are likely to change our elephant’s direction. If certain beliefs are highly correlated with social status or material reward, for example, we should expect our biases to be much more powerful than usual. We should expect to need novel, elephant-tailored methods to defuse them, such as privately journaling about these ideas so that social feedback is less likely to be a consideration, or discussing them with people in a separate peer group that does not confer status for those beliefs.
  4. Explore morality collectively: Haidt’s recommendation is to engage robustly with the best peer groups we can find. Even if we can’t easily alter our own intuitions, the claims of people we trust and respect can modulate our views, and average group views may tend to be more accurate than idiosyncratic ones.
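As one concrete way to implement the betting idea in point 2, here is a minimal sketch that logs probabilistic predictions and scores them with a Brier score, giving the elephant unambiguous feedback. The claims, probabilities, and outcomes are invented for illustration.

```python
# A minimal prediction log scored with the Brier score.
# The claims, probabilities, and outcomes are invented.

def brier(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome.
    0.0 is perfect; 0.25 is what always guessing 50% earns."""
    return (forecast - outcome) ** 2

# (claim, probability assigned, what actually happened: 1 = true, 0 = false)
log = [
    ("Policy X passes this year",     0.9, 0),
    ("Project finishes under budget", 0.6, 1),
    ("Paper replicates",              0.7, 1),
]

score = sum(brier(p, o) for _, p, o in log) / len(log)
print(f"Mean Brier score: {score:.3f} (lower is better; 0.25 = chance)")
```

Tracked over time, a falling score is evidence that calibration is improving, while a persistently high one is exactly the kind of concrete loss an elephant can feel.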

A final implication of Haidt’s work is that we should learn to respect and talk to other people’s elephants. When an array of logical arguments fails, an intuitive (but still honest) reframing may help make data much more palatable.

The Righteous Mind is well worth reading, especially for people interested in group psychology or political intractability.

Comments

You did a great job of looking at Haidt's book from the EA perspective. I think trying to do the opposite is also interesting: how can EA be understood in light of moral foundations theory?

I'm in the introduction to EA program, and yesterday we were talking about triage as part of reviewing the readings. Even though we bought MacAskill's argument that triage is the best thing to do and something we must not look away from, it felt conflicting. In Haidt's framework that's easily explained as the sanctity/degradation foundation kicking in and making it very difficult to compare the value of different lives.

Haidt says that each culture and group uses all the foundations, but with different weights. I would bet that the foundations test, performed in this community, would show above-average weights for some foundations (care/harm) and below-average weights for others (sanctity/degradation, authority/subversion).

This is an excellent review, thanks! I really like (a) the visual metaphors in the bullet points and (b) the critical commentary with updates on replication. As someone who's only skimmed The Righteous Mind (it didn't feel worth reading after Moral Tribes), I got a really good picture of the core concepts from this review (which I typically don't - it's really hard to compress a whole book efficiently). Thanks a lot :)