This is a link post with a summary for the paper “The Harsanyi-Rawls debate: political philosophy as decision theory under uncertainty.” https://doi.org/10.1590/0100-6045.2021.V44N2.RP
Why this might interest EAs:
- Decision theory: there’s a discussion on decision theory under ignorance and Knightian uncertainty. Personally, I’ve read better things on this subject, but I like the way the paper connects it to social decision-making and political philosophy.
- Moral philosophy: the best part of the paper is the discussion of Harsanyi’s average utilitarianism and Rawls’s liberalism. Not the way it links each philosopher to a different criterion for decision theory under ignorance (there’s plenty of material on that), but the way it argues that these criteria are appealing because of the specific contractualist counterfactual scenarios (the Impartial Observer and the Original Position) in which they are chosen[1] - i.e., the scenarios provide different information sets.
- Shared intuitions are Schelling points: the conjecture that some of our intuitions (such as the appeal of the difference principle in the original position, or the practice of being highly risk-averse when deciding for the sake of others) derive from something like salient Schelling points we can converge on in shaping social practices. This might explain the general appeal of egalitarian principles in some scenarios; but it also implies that these principles are not justifiably applicable in contexts very distinct from those we used to justify them. The paper is a reminder to be very careful with philosophical intuitions.
Conflict of interest: I am the author – thus probably not the best person to talk about it.
Abstract
Social decisions are often made under great uncertainty - in situations where political principles, and even standard subjective expected utility, do not apply smoothly. In the first section, we argue that the core of this problem lies in decision theory itself - it concerns how to act when we do not have an adequate representation of the context of the action and of its possible consequences. We then distinguish two criteria to complement decision theory under ignorance - Laplace’s principle of insufficient reason and Wald’s maximin criterion. After that, we apply this analysis to political philosophy by contrasting Harsanyi’s and Rawls’s theories of justice, respectively based on Laplace’s principle of insufficient reason and Wald’s maximin rule - and we end up highlighting the virtues of Rawls’s principle on practical grounds (it is intuitively attractive because of its computational simplicity, thus providing a salient point for convergence), and we connect this argument to our moral intuitions and to social norms requiring prudence in the case of decisions made for the sake of others.
Introduction
How should we act in social contexts of great uncertainty - when we find it hard to apply our standard political principles and face some sort of decision paralysis? Since an action aims at an end, the decision to act is irrational if we cannot justifiably believe that we can achieve that end - or it is self-defeating, if acting in accordance with the decision prevents us from reaching it.
[…]
The subfield of decision theory that deals with scenarios where there is no probability distribution over possible outcomes is called decision under ignorance; there are four different criteria to complement decision theory under ignorance: Laplace’s principle of insufficient reason (a.k.a. the “principle of indifference”), Wald’s maximin criterion, Savage’s minimax regret, and Hurwicz’s criterion. In the first half of this paper, we focus on decision theory: a) we provide an introduction to a canonical model of decision theory, subjective expected utility theory, and to the common obstacles this model faces concerning Knightian uncertainty; b) though we do not equate decision under ignorance with Knightian uncertainty (actually, our intent is to highlight their differences), we explain how Laplace’s principle and Wald’s maximin aim to overcome these obstacles; c) we argue that detaching the notion of risk from subjective probabilities is not a solution to the problem posed by uncertainty - we criticize Pritchard (2015) as an example of this failed proposal. In the second half of the text, we extrapolate this discussion to political philosophy: first, we aim to make clear that this problem is not restricted to consequentialist theories; second, we present and contrast two competing conceptions of theories of justice with contractualist grounds - i.e., Harsanyi’s utilitarianism and Rawls’s difference principle. We show that the former uses Laplace’s principle of indifference to cope with the uncertainty of the contractualist thought experiment, while the latter uses a version of Wald’s maximin rule - leading to the much-debated difference principle. We highlight how framing the original position as a social contract favors the difference principle on the grounds that it better incentivizes ex post stable cooperation, and that, thanks to its simplicity, it works as a salient point; this is consistent with common social practices regarding decisions made for the sake of others, such as norms requiring decision-makers to be prudent, which display high uncertainty-aversion.
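To make the four criteria concrete, here is a minimal Python sketch (mine, not from the paper; the payoff matrix is invented for illustration) applying each rule to the same decision problem under ignorance:

```python
import numpy as np

# Hypothetical payoff matrix (invented for illustration): rows are actions,
# columns are states of the world. Under ignorance, no probability
# distribution over the columns is available.
payoffs = np.array([
    [10.0, 4.0, 0.0],  # action A: great in state 1, terrible in state 3
    [ 5.0, 4.0, 3.0],  # action B: moderate everywhere
    [ 8.0, 1.0, 2.0],  # action C
])
actions = ["A", "B", "C"]

# Laplace's principle of insufficient reason: treat all states as
# equiprobable and maximize the (uniform) expected payoff.
laplace = payoffs.mean(axis=1).argmax()

# Wald's maximin: rank each action by its worst-case payoff.
maximin = payoffs.min(axis=1).argmax()

# Savage's minimax regret: regret is the shortfall from the best payoff
# attainable in each state; minimize the maximum regret.
regret = payoffs.max(axis=0) - payoffs
minimax_regret = regret.max(axis=1).argmin()

# Hurwicz's criterion: weight best and worst cases by an optimism
# coefficient alpha in [0, 1] (alpha = 1 is maximax, alpha = 0 is maximin).
alpha = 0.5
hurwicz = (alpha * payoffs.max(axis=1) + (1 - alpha) * payoffs.min(axis=1)).argmax()

for name, idx in [("Laplace", laplace), ("Maximin", maximin),
                  ("Minimax regret", minimax_regret), ("Hurwicz", hurwicz)]:
    print(f"{name}: choose action {actions[idx]}")
```

With these made-up numbers, maximin is the only rule that picks the cautious action B; lowering Hurwicz’s alpha toward 0 makes it agree with maximin, which is the kind of divergence the paper trades on.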
[…]
Conclusion
First, we have seen that the problem of uncertainty is pervasive: we cannot escape ignorance of the consequences of a decision - and a theory that presupposes otherwise might be inapplicable. We argued that, in situations of social risk, the adoption of the difference principle, according to the maximin criterion, justifies the action to all parties involved; moreover, as noted by Harsanyi himself, this principle is easier to apply than the utilitarian principle because it has lower informational requirements - it is epistemically simpler to identify and avoid worst outcomes. We showed how this reasoning may explain our moral intuitions and how it is consistent with social norms concerning risk allocation.
We must remark, though, on the limitations of our argument: it does not mean that maximin is a good criterion for decision theory in general; it does not extrapolate to individual decision-making, nor even to cases where the boundaries of a decision problem can be well defined. We only argued that, in uncertain social contexts, it provides a more acceptable justification for policies than utilitarianism. It is a decision rule for coping with uncertainty, not a judgement procedure for reducing it; i.e., it is a policy to select actions in the face of uncertainty, not a procedure to precisify our credences when we lack information - so it does not solve the problem of assigning probabilities to different possible states. Finally, we highlight that we ignored population ethics and intergenerational conflict - i.e., our argument explicitly appeals to the need for stable cooperation among present agents, not future ones.
Rawlsians may dislike this conventional, even naturalistic, account of a theory of justice; it seems to lack the normative ‘flavor’ we usually expect from arguments of principle. However, instead of thinking of this as a reduction of a normative theory of justice to a non-normative theory of conventions, we suggest one should see it as an argument over the conditions under which principles of justice can be applied: even in the absence of a common agreement on what precise norms should be chosen and followed, or on what is the best conception of the good, boundedly rational agents can converge at a meta-level, particularly if they know they need to cooperate with each other. Indeed, we dare to conclude by suggesting that this might be the main function of a normative theory - a theory about how agents should proceed: to provide some guidance for the cooperation of boundedly rational agents under uncertainty. If we could determine a cardinal utility function for each agent, and a corresponding precise probability distribution over outcomes, we would have no need for a normative theory of any kind; game theory would be enough to provide us with an answer about what decisions would be observed.
[1] I.e., maybe too much ink has been spilled on theoretical arguments over these scenarios: Harsanyi’s principle of utility is a good way of thinking about which society you would like to live in, absent any information except the distribution of utilities (answer: the one with the highest ex ante general utility); the difference principle is what you would choose to regulate the distribution of resources in a society where you would be cooperating with others, given the need to justify this distribution every now and then (answer: ensure no one can complain that their bundle is too small - and that others have too much).
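For what it’s worth, here is a toy numeric version of this contrast (my sketch; the utility numbers are invented, not taken from the paper):

```python
# Two hypothetical societies, described by the utilities of their positions.
society_A = [1, 9, 11]  # higher average utility, worse worst-off position
society_B = [5, 6, 7]   # lower average utility, better worst-off position

def average(utilities):
    return sum(utilities) / len(utilities)

# Harsanyi's Impartial Observer (Laplace over positions): pick the society
# with the highest average utility.
print(max([society_A, society_B], key=average))  # -> [1, 9, 11] (avg 7 vs 6)

# Rawls's Original Position with maximin: pick the society whose worst-off
# position is best.
print(max([society_A, society_B], key=min))      # -> [5, 6, 7] (min 5 vs 1)
```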
I find this a rather challenging post, even if I like the high-level topic a lot! I didn't read the entire linked paper, but I'd be keen to understand whether you think you can make a concise, simple argument as to why my following view may be missing something crucial that immediately follows from the Harsanyi vs. Rawls debate (if you think it does; feel free to ignore):
The Harsanyi 1975 paper, which your linked post also cites (and which I recommend to any EA), is a great and rather complete rebuttal of Rawls's core maximin claim. The maximin principle, if taken seriously, can trivially be seen to lead to all sorts of preposterous choices that are quite miraculously improved by adding a smaller or larger portion of utilitarianism (one by no means needs to be a full utilitarian to agree with this), end of story.
Thank you so much for your comment. Yeah, I think Harsanyi's review is awesome, too (particularly his criticism of Rawls's position on future generations); I don't know why it seems to be so often neglected by philosophers.
What you might be missing: notice everyone agrees that maximin sort of sucks as a decision principle - Rawls never endorsed it this way. However, I'd add:
a) It's only discussed as a decision criterion in cases where you don't have probabilities (it's Wald's criterion). Now, the standard response to that would be something like "but you can build credences by applying Laplace's principle," and I tend to agree. But I'm not sure this is always the best thing to do; not even Savage thought so. Rawls thinks that, particularly in the original position, this would not be done... and really, it's not clear why.
b) Notice that people often display higher risk-aversion when making decisions for the sake of others - they're supposed to be "prudent." Of course, I don't think this should be represented as following some sort of maximin principle - but maybe as some kind of "pessimistic" Hurwicz criterion; yet, and I can't stress this enough, my point is that people are not actually implying that negative outcomes are more likely.
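To illustrate point b) (a sketch with made-up payoffs, not a model of how people actually deliberate): under Hurwicz's criterion, lowering the optimism coefficient captures prudence without assigning any probability to the bad outcome.

```python
# Hurwicz score: alpha = 1 recovers maximax, alpha = 0 recovers Wald's maximin.
# Note that no probabilities appear anywhere - alpha weights the best and
# worst cases; it does not say how likely they are.
def hurwicz(outcomes, alpha):
    return alpha * max(outcomes) + (1 - alpha) * min(outcomes)

risky = [100, -50]  # hypothetical gamble
safe = [20, 10]     # hypothetical safe option

for alpha in (0.7, 0.3):  # deciding for oneself vs. "prudently" for another
    choice = "risky" if hurwicz(risky, alpha) > hurwicz(safe, alpha) else "safe"
    print(f"alpha = {alpha}: choose {choice}")
```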
c) Actually, Harsanyi himself remarks (in the addendum to the review) that the maximin (or the difference principle) is a useful proxy to use, e.g., in the theory of optimal taxation - even though it's not a "fundamental principle of morality." I think this point is more relevant than it might seem at first sight; actually, my interpretation / defense of the difference principle basically depends on it.
I think (almost) all I'd have to say on this matter is in Section 3 of the paper (especially the first part). But TL;DR: the difference principle is not a basic moral principle (and maximin is not an alternative to expected utility theory). And the problem of the original position should be seen as a complex bargaining problem - no originality here, as K. Binmore and D. Gauthier are quite explicit about it (but they're not so popular). I wouldn't say this is the best solution to "the problem of justice" (I should deeply study the Kalai-Smorodinsky solution to bargaining games first), but I do think it's a salient one - and this would explain its appeal.
Thank you! I was actually always surprised by H's mention of the taxation case as an example where maximin would be (readily) applicable.
IMHO, exactly what he explains in the rest of the article can also be used to see why optimal taxation/public finance should use a maximin principle as the proxy rule for a good redistributive process only in exceptional cases.
On the other hand, if you asked me whether I'd be happy if our actual, very flawed tax/redistribution systems were reformed so as to conform to the maximin - yes, I'd possibly very happily agree, simply as the lesser of two evils. And maybe that's part of the point; in this case, fair enough!