
Update, March 2021: I wrote this post ten years ago. Since then, we've learned more about the replication crisis in many fields, including psychology. Some of the specific studies I've mentioned here may be tainted by poor methodology or other problems. However, I still believe that the main ideas of the post (that human reasoning is flawed, and that there are ways we can improve it) hold true.


The last 40 years of cognitive science have taught us a great deal about how our brains produce errors in thinking and decision making, and about how we can overcome those errors. These methods can help us form more accurate beliefs and make better decisions.

Long before the first Concorde supersonic jet was completed, the British and French governments developing it realized it would lose money. But they continued to develop the jet when they should have cut their losses, because they felt they had "invested too much to quit"[1] (sunk cost fallacy[2]).

John tested positive for an extremely rare but fatal disease, using a test that is accurate 80% of the time. John didn't have health insurance, and the only available treatment — which his doctor recommended — was very expensive. John agreed to the treatment, his retirement fund was drained to nothing, and during the treatment it was discovered that John did not have the rare disease after all. Later, a statistician explained to John that because the disease is so rare, the chance that he had had the disease even given the positive test was less than one in a million. But neither John's brain nor his doctor's brain had computed this correctly (base rate neglect).
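For readers who want to see the arithmetic, here is a minimal Bayes' theorem sketch in Python. The base rate and the 80% sensitivity and specificity are illustrative assumptions chosen to match the flavor of the story, not figures from any actual case.

```python
# Bayes' theorem for P(disease | positive test).
# All numbers below are illustrative assumptions.

def posterior_given_positive(base_rate, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    true_positive = sensitivity * base_rate              # P(positive & disease)
    false_positive = (1 - specificity) * (1 - base_rate)  # P(positive & no disease)
    return true_positive / (true_positive + false_positive)

p = posterior_given_positive(base_rate=1e-7, sensitivity=0.8, specificity=0.8)
print(f"P(disease | positive test) = {p:.2e}")  # ~4.00e-07, well under one in a million
```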

Mary gave money to a charity to save lives in the developing world. Unfortunately, she gave to a charity that saves lives at a cost of $100,000 per life instead of one that saves lives at 1/10th that cost, because the less efficient charity used a vivid picture of a starving child in its advertising, and our brains respond more to single, identifiable victims than to large numbers of victims (identifiability effect[3] and scope insensitivity[4]).

During the last four decades, cognitive scientists have discovered a long list of common thinking errors like these. These errors lead us to false beliefs and poor decisions.

How are these errors produced, and how can we overcome them? Vague advice like "be skeptical" and "think critically" may not help much. Luckily, cognitive scientists know a great deal about the mathematics of correct thinking, how thinking errors are produced, and how we can overcome these errors in order to live more fulfilling lives.

Rationality

First, what is rationality? It is not the same thing as intelligence, because even those with high intelligence fall prey to some thinking errors as often as everyone else.[5] But then, what is rationality?

Cognitive scientists recognize two kinds of rationality:

  • Epistemic rationality is about forming true beliefs, about getting the map in your head to accurately reflect the territory of the world. We can measure epistemic rationality by comparing the rules of logic and probability theory to the way that a person actually updates their beliefs.
  • Instrumental rationality is about making decisions that are well-aimed at bringing about what you want. Due to habit and bias, many of our decisions don't actually align with our goals. We can measure instrumental rationality with a variety of techniques developed in economics, for example by testing whether a person obeys the 'axioms of choice'[6] (a toy illustration of such a test appears below).

In short, rationality improves our choices concerning what to believe and what to do.
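To make the 'axioms of choice' idea concrete, here is a toy sketch in Python. The item names and pairwise choices are invented for illustration; real measurements use carefully designed choice experiments, but the logic of checking an axiom such as transitivity is the same.

```python
from itertools import permutations

# '(a, b)' means the person chose a over b when offered that pair (invented data).
observed_choices = {("apple", "banana"), ("banana", "cherry"), ("cherry", "apple")}

def transitivity_violations(choices):
    """Return (x, y, z) triples where x beat y and y beat z, yet z beat x."""
    items = {item for pair in choices for item in pair}
    return [
        (x, y, z)
        for x, y, z in permutations(items, 3)
        if (x, y) in choices and (y, z) in choices and (z, x) in choices
    ]

violations = transitivity_violations(observed_choices)
print(violations)  # non-empty: these choices form a cycle and violate transitivity
```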

Unfortunately, human irrationality is quite common, as shown in popular books like Predictably Irrational: The Hidden Forces that Shape Our Decisions and Kluge: The Haphazard Evolution of the Human Mind.

Ever since Aristotle spoke of humans as the "rational animal," we've had a picture of ourselves as rational beings that are hampered by shortcomings like anger, fear, and confirmation bias.

Cognitive science says just the opposite. Cognitive science shows us that humans just are a collection of messy little modules like anger and fear and the modules that produce confirmation bias. We have a few modules for processing logic and probability and rational goal-pursuit, but they are slow, energy-expensive, and rarely used.

As we'll see, our brains avoid using these expensive modules whenever possible. Robert Boyd and Pete Richerson explain:

...all animals are under stringent selection pressure to be as stupid as they can get away with.[7]

Or, as philosopher David Hull put it:

The rule that human beings seem to follow is to engage [rational thought] only when all else fails — and usually not even then.[8]

Human reasoning

So how does human reasoning work, and why does it so often produce mistaken judgments and decisions?

Today, cognitive scientists talk about two kinds of processes, what Daniel Kahneman (2011) calls "fast and slow" processes:

  • Type 1 processes are fast, do not require conscious attention, do not need input from conscious processes, and can operate in parallel.
  • Type 2 processes are slow, require conscious effort, and generally only work one at a time.

Type 1 processes provide judgments quickly, but these judgments are often wrong, and can be overridden by corrective Type 2 processes.

Type 2 processes are computationally expensive, and thus humans are 'cognitive misers'. This means that we (1) default to Type 1 processes whenever possible, and (2) when we must use Type 2 processes, we use the least expensive kinds of Type 2 processes, those with a 'focal bias' — a disposition to reason from the simplest model available instead of considering all the relevant factors. Hence, we are subject to confirmation bias (our cognition is focused on what we already believe) and other biases.

So, cognitive miserliness can cause three types of thinking errors:

  1. We default to Type 1 processes when Type 2 processes are needed.
  2. We fail to override Type 1 processes with Type 2 processes.
  3. Even when we override with Type 2 processes, we use Type 2 processes with focal bias.

But the problem gets worse. If someone is going to override Type 1 processes with Type 2 processes, then she also needs the right content available with which to do the overriding. For example, she may need to override a biased intuitive judgment with a correct application of probability theory, or a correct application of deductive logic. Such tools are called 'mindware'.[9]

Thus, thinking can also go wrong if there is a 'mindware gap' — that is, if an agent lacks crucial mindware like probability theory.

Finally, thinking can go wrong due to 'contaminated mindware' — mindware that exists but is wrong. For example, an agent may hold the naive belief that they know their own mind quite well, which is false. Such mistaken mindware can lead to mistaken judgments.

Types of errors

Given this understanding, a taxonomy of thinking errors could begin like this:[10]

The circles on the left capture the three normal sources of thinking errors. The three rectangles to the right of 'Cognitive Miserliness' capture the three categories of error that can be caused by cognitive miserliness. The rounded rectangles to the right of 'Mindware Gap' and 'Contaminated Mindware' propose some examples of (1) mindware that, if missing, can cause a mindware gap, and (2) common contaminated mindware.

The process for solving a reasoning task, then, may look something like this:[11]

First, do I have mindware available to solve the reasoning problem before me with slow, deliberate, Type 2 processes? If not, my brain must use fast but inaccurate Type 1 processes to solve the problem. If I do have mindware available to solve this problem, do I notice the need to engage it? If not, my brain defaults to the cheaper Type 1 processes. If I do notice the need to engage Type 2 processes and have the necessary mindware, is sustained (as opposed to momentary) 'Type 2 override' required to solve the problem? If not, then I use that mindware to solve the problem. If sustained override is required to solve the reasoning problem and I don't have the cognitive capacity (e.g. working memory) needed to complete the override, then my brain will default back to Type 1 processes. Otherwise, I'll use my cognitive capacities to sustain Type 2 override well enough to complete the reasoning task with my Type 2 processes (mindware).

That may be something like how our brains determine how to solve a reasoning task.
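As a rough sketch, the routing described above can be written as a small decision function. The function and argument names below are my own labels, and the booleans stand in for the questions in the preceding paragraph; this illustrates the flow, it is not Stanovich's formal model.

```python
def solve_reasoning_task(has_mindware, notices_need_for_override,
                         needs_sustained_override, has_capacity_to_sustain):
    """Route a reasoning task between Type 1 and Type 2 processing."""
    if not has_mindware:
        return "Type 1 (no relevant mindware available)"
    if not notices_need_for_override:
        return "Type 1 (need for override never noticed)"
    if needs_sustained_override and not has_capacity_to_sustain:
        return "Type 1 (override attempted but not sustained)"
    return "Type 2 (mindware applied successfully)"

# Example: the agent has the mindware and notices the need for it,
# but lacks the working memory to sustain the override.
print(solve_reasoning_task(True, True, True, False))
```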

It's this model that Stanovich and colleagues (2010) use to explain why, among other things, IQ is correlated with performance on some reasoning tasks but not others. For example, IQ correlates with performance on tests of outcome bias and hindsight bias, but not with performance on tests of anchoring effects and omission bias. To overcome these latter biases, subjects seem to need not just high cognitive capacity (fluid intelligence, working memory, etc.), but also specific rationality training.

If this is right, then we may talk of three different 'minds' at work in solving reasoning problems:

  • The autonomous mind, made of unconscious Type 1 processes. There are few individual differences in its operation.

  • The algorithmic mind, made of conscious Type 2 processes. There are significant individual differences in fluid intelligence in particular and cognitive capacity in general — that is, differences in perceptual speed, discrimination accuracy, working memory capacity, and the efficiency of the retrieval of information stored in long-term memory.[12]

  • The reflective mind, which shows individual differences in the disposition to use rationality mindware — the disposition to generate alternative hypotheses, to use fully disjunctive reasoning, to engage in actively open-minded thinking, etc.[13]

Rationality Skills

But it is not enough to understand how the human brain produces thinking errors. We must also find ways to ameliorate the problem if we want to have more accurate beliefs and achieve our goals more efficiently. As Milkman et al. (2010) say:

...the time has come to move the study of biases in judgment and decision making beyond description and toward the development of improvement strategies.

Stanovich (2009) sums up our project:

To jointly achieve epistemic and instrumental rationality, a person must display judicious decision making, adequate behavioral regulation, wise goal prioritization, sufficient thoughtfulness, and proper evidence calibration. For example, epistemic rationality — beliefs that are properly matched to the world — requires probabilistic reasoning and the ability to calibrate theories to evidence. Instrumental rationality — maximizing goal fulfillment — requires adherence to all of the axioms of rational choice. People fail to fulfill the many different strictures of rational thought because they are cognitive misers, because they lack critical mindware, and because they have acquired contaminated mindware. These errors can be prevented by acquiring the mindware of rational thought and the thinking dispositions that prevent the overuse of the strategies of the cognitive miser.

This is the project of 'debiasing' ourselves[14] with 'ameliorative psychology'.[15]

What we want is a Rationality Toolkit: a set of skills and techniques that can be used to overcome and correct the errors of our primate brains so we can form more accurate beliefs and make better decisions.

Our goal is not unlike Carl Sagan's 'Baloney Detection Kit', but the tools in our Rationality Toolkit will be more specific and better grounded in the cognitive science of rationality.

I mentioned some examples of debiasing interventions that have been tested by experimental psychologists in my post Is Rationality Teachable? I'll start with those, then add a few techniques for ameliorating the planning fallacy, and we've got the beginnings of our Rationality Toolkit:

  1. A simple instruction to "think about alternatives" can promote resistance to overconfidence and confirmation bias. In one study, subjects asked to generate their own hypotheses were more responsive to their accuracy than subjects asked to choose from among pre-picked hypotheses.[16] Another study required subjects to list reasons for and against each of the possible answers to each question on a quiz prior to choosing an answer and assessing the probability of its being correct. This process resulted in more accurate confidence judgments relative to a control group.[17]
  2. Training in microeconomics can help subjects avoid the sunk cost fallacy.[18]
  3. Because people avoid the base rate fallacy more often when they encounter problems phrased in terms of frequencies instead of probabilities,[19] teaching people to translate probabilistic reasoning tasks into frequency formats improves their performance.[20]
  4. Warning people about biases can decrease their prevalence. So far, this has been demonstrated to work with regard to framing effects,[21] hindsight bias,[22] and the outcome effect,[23] though attempts to mitigate anchoring effects by warning people about them have produced weak results so far.[24]
  5. Research on the planning fallacy suggests that taking an 'outside view' when predicting the time and resources required to complete a task will lead to better predictions. A specific instance of this strategy is 'reference class forecasting',[25] in which planners project time and resource costs for a project by basing their projections on the outcomes of a distribution of comparable projects (a minimal sketch appears after this list).
  6. Unpacking the components involved in a large task or project helps people to see more clearly how much time and how many resources will be required to complete it, thereby partially meliorating the planning fallacy.[26]
  7. One reason we fall prey to the planning fallacy is that we do not remain as focused on the task at hand throughout its execution as when we are planning its execution. The planning fallacy can be partially meliorated, then, not only by improving the planning but by improving the execution. For example, in one study[27] students were taught to imagine themselves performing each of the steps needed to complete a project. Participants rehearsed these simulations each day. 41% of these students completed their tasks on time, compared to 14% in a control group.
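To illustrate reference class forecasting (item 5 above), here is a minimal sketch in Python. The past project durations are invented; a real reference class would come from records of genuinely comparable projects.

```python
import statistics

# Invented durations (in weeks) of comparable past projects.
reference_class_weeks = [6, 8, 9, 11, 12, 14, 15, 18, 21, 30]

inside_view_estimate = 5  # the planner's optimistic gut estimate
outside_view_estimate = statistics.median(reference_class_weeks)

print(f"Inside view:  {inside_view_estimate} weeks")
print(f"Outside view: {outside_view_estimate} weeks (median of the reference class)")
```

Taking the median of the reference class (or a higher percentile, for a more conservative plan) replaces the optimistic inside-view guess with an outside-view estimate anchored to how similar projects actually turned out.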

But this is only the start. We need more rationality skills, and we need step-by-step instructions for how to teach them and how to implement them at the 5-second level.

References

Ackerman, Kyllonen & Richards, eds. (1999). Learning and individual differences: Process, trait, and content determinants. American Psychological Association.

Arkes & Blumer (1985). The psychology of sunk cost. Organizational Behavior and Human Decision Processes, 35: 124-140.

Arkes & Ayton (1999). The sunk cost and Concorde effects: Are humans less rational than lower animals? Psychological Bulletin, 125: 591-600.

Arkes & Hutzel (2000). The role of probability of success estimates in the sunk cost effect. Journal of Behavioral Decision Making, 13: 295-306.

Baron (2007). Thinking and Deciding, 4th edition. Cambridge University Press.

Bishop & Trout (2004). Epistemology and the Psychology of Human Judgment. Oxford University Press.

Block & Harper (1991). Overconfidence in estimation: testing the anchoring-and-adjustment hypothesis. Organizational Behavior and Human Decision Processes, 49: 188–207.

Buehler, Griffin, & Ross (1994). Exploring the 'planning fallacy': Why people underestimate their task completion times. Journal of Personality and Social Psychology, 67: 366-381.

Buehler, Griffin, & Ross (1995). It's about time: Optimistic predictions in work and love. European Review of Social Psychology, 6: 1-32.

Buehler, Griffin, & Ross (2002). Inside the planning fallacy: The causes and consequences of optimistic time predictions. In Gilovich, Griffin, & Kahneman (eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 250-270). Cambridge University Press.

Buehler, Griffin, & Peetz (2010). The planning fallacy: cognitive, motivational, and social origins. Advances in Experimental Social Psychology, 43: 1-62.

Carson & Mitchell (1995). Sequencing and Nesting in Contingent Valuation Surveys. Journal of Environmental Economics and Management, 28: 155-73.

Cheng & Wu (2010). Debiasing the framing effect: The effect of warning and involvement. Decision Support Systems, 49: 328-334.

Clarkson, Emby, & Watt (2002). Debiasing the effect of outcome knowledge: the role of instructions in an audit litigation setting. Auditing: A Journal of Practice and Theory, 21: 1–14.

Connolly & Dean (1997). Decomposed versus holistic estimates of effort required for software writing tasks. Management Science, 43: 1029–1045.

Dawes (1998). Behavioral decision making and judgment. In Gilbert, Fiske, & Lindzey (eds.), The handbook of social psychology (Vol. 1, pp. 497–548). McGraw-Hill.

Deary (2000). Looking down on human intelligence: From psychometrics to the brain. Oxford University Press.

Deary (2001). Intelligence: A very short introduction. Oxford University Press.

Desvousges, Johnson, Dunford, Boyle, Hudson, & Wilson (1992). Measuring non-use damages using contingent valuation: experimental evaluation accuracy. Research Triangle Institute Monograph 92-1.

Elqayam & Evans (2011). Subtracting 'ought' from 'is': Descriptivism versus normativism in the study of human thinking. Behavioral and Brain Sciences.

Evans (1989). Bias in Human Reasoning: Causes and Consequences. Lawrence Erlbaum Associates.

Evans (2007). Hypothetical Thinking: Dual Processes in Reasoning and Judgment. Psychology Press.

Fetherstonhaugh, Slovic, Johnson, & Friedrich (1997). Insensitivity to the value of human life: A study of psychophysical numbing. Journal of Risk and Uncertainty, 14: 238-300.

Flyvbjerg (2008). Curbing optimism bias and strategic misrepresentation in planning: Reference class forecasting in practice. European Planning Studies, 16: 3–21.

Flyvbjerg, Garbuio, & Lovallo (2009). Delusion and deception in large infrastructure projects: Two models for explaining and preventing executive disaster. California Management Review, 51: 170–193.

Foley (1987). The Theory of Epistemic Rationality. Harvard University Press.

Forsyth & Burt (2008). Allocating time to future tasks: The effect of task segmentation on planning fallacy bias. Memory and Cognition, 36: 791–798.

George, Duffy, & Ahuja (2000). Countering the anchoring and adjustment bias with decision support systems. Decision Support Systems, 29: 195–206.

Gigerenzer & Hoffrage (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102: 684–704.

Gilovich, Griffin, & Kahneman (eds.) (2002). Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press.

Harman (1995). Rationality. In Smith & Osherson (eds.), Thinking (Vol. 3, pp. 175–211). MIT Press.

Hasher, Attig, & Alba (1981). I knew it all along: or did I? Journal of Verbal Learning and Verbal Behavior, 20: 86-96.

Hastie & Dawes (2009). Rational Choice in an Uncertain World, 2nd edition. Sage.

Hull (2000). Science and selection: Essays on biological evolution and the philosophy of science. Cambridge University Press.

Hunt (1987). The next word on verbal ability. In Vernon (ed.), Speed of information-processing and intelligence (pp. 347–392). Ablex.

Hunt (1999). Intelligence and human resources: Past, present, and future. In Ackerman & Kyllonen (eds.), The future of learning and individual differences research: Processes, traits, and content (pp. 3-30). American Psychological Association.

Jenni & Loewenstein (1997). Explaining the 'identifiable victim effect.' Journal of Risk and Uncertainty, 14: 235–257.

Kahneman (1986). Comments on the contingent valuation method. In Cummings, Brookshie, & Schulze (eds.), Valuing environmental goods: a state of the arts assessment of the contingent valuation method. Roweman and Allanheld.

Kahneman & Tversky (2000). Choices, Values, and Frames. Cambridge University Press.

Kahneman (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.

Kane & Engle (2002). The role of prefrontal cortex working-memory capacity, executive attention, and general fluid intelligence: An individual differences perspective. Psychonomic Bulletin and Review, 9: 637–671.

Knox & Inkster (1968). Postdecision dissonance at post time. Journal of Personality and Social Psychology, 8: 319-323.

Koehler (1994). Hypothesis generation and confidence in judgment. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20: 461-469.

Kogut & Ritov (2005a). The 'identified victim effect': An identified group, or just a single individual? Journal of Behavioral Decision Making, 18: 157–167.

Kogut & Ritov (2005b). The singularity effect of identified victims in separate and joint evaluations. Organizational Behavior and Human Decision Processes, 97: 106–116.

Kogut & Ritov (2010). The identifiable victim effect: Causes and boundary conditions. In Oppenheimer & Olivola (eds.), The Science of Giving: Experimental Approaches to the Study of Charity (pp. 133-146). Psychology Press.

Koriat, Lichtenstein, & Fischhoff (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6: 107-118.

Koole & Vant Spijker (2000). Overcoming the planning fallacy through willpower: Effects of implementation intentions on actual and predicted task-completion times. European Journal of Social Psychology, 30: 873–888.

Krueger (2000). Individual differences and Pearson's r: Rationality revealed? Behavioral and Brain Sciences, 23: 684–685.

Kruger & Evans (2004). If you don’t want to be late, enumerate: Unpacking reduces the planning fallacy. Journal of Experimental Social Psychology, 40: 586–598.

Larrick (2004). Debiasing. In Koehler & Harvey (eds.), Blackwell Handbook of Judgment and Decision Making (pp. 316-337). Wiley-Blackwell.

Larrick, Morgan, & Nisbett (1990). Teaching the use of cost-benefit reasoning in everyday life. Psychological Science, 1: 362-370.

Lohman (2000). Complex information processing and intelligence. In Sternberg (ed.), Handbook of intelligence (pp. 285–340). Cambridge University Press.

Lovallo & Kahneman (2003). Delusions of success: How optimism undermines executives' decisions. Harvard Business Review, July 2003: 56-63.

Manktelow (2004). Reasoning and rationality: The pure and the practical. In Manktelow & Chung (eds.), Psychology of reasoning: Theoretical and historical perspectives (pp. 157–177). Psychology Press.

McFadden & Leonard (1995). Issues in the contingent valuation of environmental goods: methodologies for data collection and analysis. In Hausman (ed.), Contingent valuation: a critical assessment. North Holland.

Milkman, Chugh, & Bazerman (2010). How can decision making be improved? Perspectives on Psychological Science 4: 379-383.

Mussweiler, Strack, & Pfeiffer (2000). Overcoming the inevitable anchoring effect: Considering the opposite compensates for selective accessibility. Personality and Social Psychology Bulletin, 26: 1142–50.

Oreg & Bayazit. Prone to Bias: Development of a Bias Taxonomy From an Individual Differences Perspective. Review of General Psychology, 3: 175-193.

Over (2004). Rationality and the normative/descriptive distinction. In Koehler & Harvey (eds.), Blackwell handbook of judgment and decision making (pp. 3–18). Blackwell Publishing.

Peetz, Buehler & Wilson (2010). Planning for the near and distant future: How does temporal distance affect task completion predictions? Journal of Experimental Social Psychology, 46: 709-720.

Perkins (1995). Outsmarting IQ: The emerging science of learnable intelligence. Free Press.

Pezzo, Litman, & Pezzo (2006). On the distinction between yuppies and hippies: Individual differences in prediction biases for planning future tasks. Personality and Individual Differences, 41: 1359-1371.

Reimers & Butler (1992). The effect of outcome knowledge on auditor's judgmental evaluations. Accounting, Organizations and Society, 17: 185–194.

Richerson & Boyd (2005). Not By Genes Alone: How Culture Transformed Human Evolution. University of Chicago Press.

Ross, Greene, & House (1977). The false consensus phenomenon: An attributional bias in self-perception and social perception processes. Journal of Experimental Social Psychology, 13: 279–301.

Roy, Christenfeld, & McKenzie (2005). Underestimating the duration of future events: Memory incorrectly used or memory bias? Psychological Bulletin, 131: 738-756.

Sedlmeier (1999). Improving Statistical Reasoning: Theoretical Models and Practical Implications. Erlbaum.

Shafir & LeBoeuf (2002). Rationality. Annual Review of Psychology, 53: 491–517.

Slovic (2007). If I look at the mass I will never act: Psychic numbing and genocide. Judgment and Decision Making, 2: 1–17.

Slovic, Zionts, Woods, Goodman, & Jinks (2011). Psychic numbing and mass atrocity. In E. Shafir (ed.), The behavioral foundations of policy. Sage and Princeton University Press.

Small & Loewenstein (2003). Helping a victim or helping the victim: Altruism and identifiability. Journal of Risk and Uncertainty, 26: 5–16.

Small, Loewenstein, & Slovic (2007). Sympathy and callousness: The impact of deliberative thought on donations to identifiable and statistical victims. Organizational Behavior and Human Decision Processes, 102: 143–153.

Soll & Klayman (2004). Overconfidence in interval estimates. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30: 299–314.

Stanovich (1999). Who is rational? Studies of individual differences in reasoning. Erlbaum.

Stanovich (2009). What Intelligence Tests Miss: The Psychology of Rational Thought. Yale University Press.

Stanovich & West (2008). On the failure of cognitive ability to predict myside bias and one-sided thinking biases. Thinking and Reasoning, 14: 129–167.

Stanovich, Toplak, & West (2008). The development of rational thought: A taxonomy of heuristics and biases. Advances in Child Development and Behavior, 36: 251-285.

Stanovich, West, & Toplak (2010). Individual differences as essential components of heuristics and biases research. In Manktelow, Over, & Elqayam (eds.), The Science of Reason: A Festschrift for Jonathan St B.T. Evans (pp. 355-396). Psychology Press.

Stanovich, West, & Toplak (2011). Intelligence and rationality. In Sternberg & Kaufman (eds.), Cambridge Handbook of Intelligence, 3rd edition (pp. 784-826). Cambridge University Press.

Staw (1976). Knee-deep in the big muddy: a study of escalating commitment to a chosen course of action. Organizational Behavior and Human Performance, 16: 27-44.

Sternberg (1985). Beyond IQ: A triarchic theory of human intelligence. Cambridge University Press.

Sternberg (1997). Thinking Styles. Cambridge University Press.

Sternberg (2003). Wisdom, intelligence, and creativity synthesized. Cambridge University Press.

Taylor, Pham, Rivkin & Armor (1998). Harnessing the imagination: Mental simulation, self-regulation, and coping. American Psychologist, 53: 429–439.

Teger (1980). Too Much Invested to Quit. Pergamon Press.

Tversky & Kahneman (1979). Intuitive prediction: Biases and corrective procedures. TIMS Studies in Management Science, 12: 313-327.

Tversky & Kahneman (1981). The framing of decisions and the psychology of choice. Science, 211: 453–458.

Unsworth & Engle (2005). Working memory capacity and fluid abilities: Examining the correlation between Operation Span and Raven. Intelligence, 33: 67–81.

Whyte (1986). Escalating Commitment to a Course of Action: A Reinterpretation. The Academy of Management Review, 11: 311-321.

Wu, Zhang, & Gonzalez (2004). Decision under risk. In Koehler & Harvey (eds.), Blackwell handbook of judgment and decision making (pp. 399–423). Blackwell Publishing.


  1. Teger (1980). ↩︎

  2. A sunk cost is a cost from the past that cannot be recovered. Because decision makers should consider only the future costs and benefits of the choices before them, sunk costs should be irrelevant to human decisions. Alas, sunk costs regularly do affect human decisions: Knox & Inkster (1968); Arkes & Blumer (1985); Arkes & Ayton (1999); Arkes & Hutzel (2000); Staw (1976); Whyte (1986). ↩︎

  3. People are more generous (say, in giving charity) toward a single identifiable victim than toward unidentifiable or statistical victims (Kogut & Ritov 2005a, 2010; Jenni & Loewenstein 1997; Small & Loewenstein 2003; Small et al. 2007; Slovic 2007), even though they say they prefer to give to a group of people (Kogut & Ritov 2005b). ↩︎

  4. Yudkowsky summarizes scope insensitivity:

    Once upon a time, three groups of subjects were asked how much they would pay to save 2000 / 20000 / 200000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88 [Desvousges et al. 1992]. This is scope insensitivity or scope neglect: the number of birds saved — the scope of the altruistic action — had little effect on willingness to pay.

    See also: Kahneman (1986); McFadden & Leonard (1995); Carson & Mitchell (1995); Fetherstonhaugh et al. (1997); Slovic et al. (2011). ↩︎

  5. Stanovich & West (2008); Ross et al. (1977); Krueger (2000). ↩︎

  6. Stanovich et al. (2008) writes:

    Cognitive scientists recognize two types of rationality: instrumental and epistemic... [We] could characterize instrumental rationality as the optimization of the individual’s goal fulfillment. Economists and cognitive scientists have refined the notion of optimization of goal fulfillment into the technical notion of expected utility. The model of rational judgment used by decision scientists is one in which a person chooses options based on which option has the largest expected utility... The other aspect of rationality studied by cognitive scientists is termed epistemic rationality. This aspect of rationality concerns how well beliefs map onto the actual structure of the world. Instrumental and epistemic rationality are related. The aspect of beliefs that enter into instrumental calculations (i.e., tacit calculations) are the probabilities of states of affairs in the world.

    Also see the discussion in Stanovich et al. (2011). On instrumental rationality as the maximization of expected utility, see Dawes (1998); Hastie & Dawes (2009); Wu et al. (2004). On epistemic rationality, see Foley (1987); Harman (1995); Manktelow (2004); Over (2004). How can we measure an individual's divergence from expected utility maximization if we can't yet measure utility directly? One of the triumphs of decision science is the demonstration that agents whose behavior respects the so-called 'axioms of choice' will behave as if they are maximizing expected utility. It can be difficult to measure utility, but it is easier to measure whether one of the axioms of choice is being violated, and thus whether an agent is behaving instrumentally irrationally. Violations of both instrumental and epistemic rationality have been catalogued at length by cognitive psychologists in the 'heuristics and biases' literature: Baron (2007); Evans (1989, 2007); Gilovich et al. (2002); Kahneman & Tversky (2000); Shafir & LeBoeuf (2002); Stanovich (1999). For the argument against comparing human reasoning practice with normative reasoning models, see Elqayam & Evans (2011).

    ↩︎
  7. Richerson & Boyd (2005), p. 135. ↩︎

  8. Hull (2000), p. 37. ↩︎

  9. Perkins (1995). ↩︎

  10. Adapted from Stanovich et al. (2008). ↩︎

  11. Adapted from Stanovich et al. (2010). ↩︎

  12. Ackerman et al. (1999); Deary (2000, 2001); Hunt (1987, 1999); Kane & Engle (2002); Lohman (2000); Sternberg (1985, 1997, 2003); Unsworth & Engle (2005). ↩︎

  13. See table 17.1 in Stanovich et al. (2010). The image is from Stanovich (2010). ↩︎

  14. Larrick (2004). ↩︎

  15. Bishop & Trout (2004). ↩︎

  16. Koehler (1994). ↩︎

  17. Koriat et al. (1980). Also see Soll & Klayman (2004); Mussweiler et al. (2000). ↩︎

  18. Larrick et al. (1990). ↩︎

  19. Gigerenzer & Hoffrage (1995). ↩︎

  20. Sedlmeier (1999). ↩︎

  21. Cheng & Wu (2010). ↩︎

  22. Hasher et al. (1981); Reimers & Butler (1992). ↩︎

  23. Clarkson et al. (2002). ↩︎

  24. Block & Harper (1991); George et al. (2000). ↩︎

  25. Lovallo & Kahneman (2003); Buehler et al. (2010); Flyvbjerg (2008); Flyvbjerg et al. (2009). ↩︎

  26. Connolly & Dean (1997); Forsyth & Burt (2008); Kruger & Evans (2004). ↩︎

  27. Taylor et al. (1998). See also Koole & Vant Spijker (2000). ↩︎

Comments

In my current research with John Vervaeke and Johannes Jaeger, I'm continuing the work on the cognitive science of rationality under uncertainty, bringing together the axiomatic approach (on which Stanovich et al. build) and the ecological approach.

Here I talk about Rationality and Cognitive Science on the ClearerThinking Podcast. Here is a YouTube conversation between me and John, explaining our work and "The paradigm shift in rationality". Here is the preprint of the same argument, "Rationality and Relevance Realization". John also mentions our research multiple times on the Jim Rutt Show.

I've always admired your writings on the topic and you were one of the voices that led me to my current path.

Thanks, Anna!

If this is right, then we may talk of three different 'minds' at work in solving reasoning problems:

  • The autonomous mind, made of unconscious Type 1 processes. There are few individual differences in its operation.

(emphasis mine)

This feels wildly counterintuitive to me, unless "few differences" is much weaker than I'm expecting or "autonomous mind" is a way narrower concept than it looks.  On LW the author gives further elaboration in the comments, which I understand as "some autonomous processes like face recognition seem to be mostly the same between people".

Maybe it's true that most people have nearly-identical performance in those domains. But to me it looks like almost all of the differences between people lie in the autonomous mind. The vast majority of actions I take throughout the day are autonomous. When I observe skill differences between myself and someone else, most of the variance seems to come from differences in our intuitions and pattern-matching, rather than our mindware or algorithmic thinking.

I can't even imagine a worldview that says otherwise, so I'd be curious to hear from anyone who legitimately agrees with the "few individual differences in autonomous reasoning" model. If this turned out to be correct then I would restructure a lot of how I'm trying to become more generally competent.