Heuristics for clueless agents: how to get away with ignoring what matters most in ordinary decision-making

by Global Priorities Institute · 4 min read · 31st May 2020


Cluelessness, Global Priorities Institute, Decision theory, Longtermism, Long-term future, Rationality

Abstract

Even our most mundane decisions have the potential to significantly impact the long-term future, but we are often clueless about what this impact may be. In this paper, we aim to characterize and solve two problems raised by recent discussions of cluelessness, which we term the Problem of Decision Paralysis and the Problem of Decision-Making Demandingness. After reviewing and rejecting existing solutions to both problems, we argue that the way forward is to be found in the distinction between procedural and substantive rationality. Clueless agents have access to a variety of heuristic decision-making procedures which are often rational responses to the decision problems that they face. By simplifying or even ignoring information about potential long-term impacts, heuristics produce effective decisions without demanding too much of ordinary decision-makers. We outline two classes of problem features bearing on the rationality of decision-making procedures for clueless agents, and show how these features can be used to shed light on our motivating problems.

Introduction

Recent finds in the Jebel Irhoud cave in Morocco indicate that Homo sapiens has been on Earth for at least 300,000 years (Hublin et al. 2017). If we play our cards right, we could be around for many more. This planet will continue to be hospitable to complex life for around another billion years, at which point the brightening Sun will drive a catastrophic runaway greenhouse effect. If we are lucky, humanity will survive throughout this time and spread to other worlds, giving us 100 trillion years before the last stars burn out (Adams 2008). Countless lives could be lived, filled with flourishing, suffering, freedom, and oppression on a scale unparalleled in human history.

Suppose there were something you could do to significantly impact humanity's long-term future. Perhaps you could lower the probability of existential catastrophe by working against risks such as nuclear proliferation that threaten to bring our future to an early close. There are so many people yet to live that anything which improves their chances of leading flourishing lives appears to have tremendous moral significance. The expected value associated with actions of this kind seems to dwarf the expected value of just about anything else you could do (Beckstead 2013; Bostrom 2003, 2013; Greaves and MacAskill ms). Assuming a total utilitarian axiology, Bostrom (2013: 18-19) argues that, on a conservative projection of the total future population, the expected moral value of reducing extinction risk by one millionth of one percentage point is at least the value of a hundred million human lives. Giving a mere one percent credence to less conservative estimates that take into account the potential for (post-) humanity to spread to the stars and for future minds to be implemented in computational hardware, Bostrom calculates the expected value of reducing the risk of extinction by as little as one billionth of one billionth of one percentage point to be one hundred billion times the value of a billion human lives.
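The arithmetic behind the conservative estimate can be made explicit. The expected value of a small reduction $\Delta p$ in extinction probability is simply $\Delta p$ times the value at stake if extinction is averted. Reading the figures back from the quoted claim (so the population estimate of $10^{16}$ future lives is inferred, not stated in this passage, though it matches Bostrom's conservative projection):

$$
\underbrace{10^{16}}_{\text{future lives}} \times \underbrace{10^{-8}}_{\substack{\text{one millionth of a}\\ \text{percentage point}}} = 10^{8} \text{ lives},
$$

i.e., at least a hundred million human lives, as stated above. The less conservative estimates work the same way, only with a vastly larger population figure discounted by the one percent credence.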

Notice, however, that most of your actions have some probability of impacting the long-term future. Whether you sleep in today or get up early determines what you will eat, who you will interact with, and what you will accomplish today, all of which have myriad effects on others, carrying far into the future. If you get up early, you might be more productive. You might get in more reading and more writing. There is some very slim probability that this boost to your productivity will result in a work of philosophy that will be studied by future readers for as long as Plato’s Republic has been studied today. If nothing else, it might influence the thinking of students who will one day be in positions of political power and whose decisions will impact generations to come.

Recent theorists have taken this to suggest that the expected values of most options available to us are dominated by their possible long-term impacts (Beckstead 2013; Bostrom 2003, 2013; Greaves and MacAskill ms):

Ex Ante Axiological Longtermism (EAAL): In most cases, the vast majority of an option’s ex ante (expected) value is determined by its effects on the long-term future.

While EAAL has plausible consequences for decision-making in what we intuitively take to be high-stakes contexts, it raises a pair of puzzles for decision-making in more mundane contexts. We are often clueless about the long-term effects of our actions. We do not know whether we will change the future for the better by getting up early instead of sleeping in. Decision paralysis threatens. It is unclear whether and how rational agents can ever be justified in acting. Worse still, longtermist decision-making can be highly demanding. To correctly evaluate the long-term effects of our actions we must consider a huge number of future contingencies. Perhaps by getting up early and going to work we will speed up the rate of technological progress, accelerating the interstellar expansion of humanity in the 24th century. But in most decision-making contexts we cannot spare enough time and cognitive resources to consider even a handful of the relevant future contingencies. Does this mean that we are often doomed to choose irrationally?

In this paper, we aim to sharpen and solve these challenges. In Section 2, we give precise statements of each challenge. In Sections 3 and 4, we review apparent solutions that ultimately do not work. In Section 5, we introduce a number of constraints on a successful solution. In Section 6, we suggest that the problems can be solved by turning from substantive to procedural rationality. In Sections 7 and 8, we develop a procedural solution to both problems.

Read the rest of the paper
