
This is a new paper in the Global Priorities Institute working paper series by Jacob Barrett (Global Priorities Institute, University of Oxford) and Andreas T Schmidt (University of Groningen).

Abstract

 Moral uncertainty and disagreement pervade our lives. Yet we still need to make decisions and act, both in individual and political contexts. So, what should we do? The moral uncertainty approach provides a theory of what individuals morally ought to do when they are uncertain about morality. Public reason liberals, in contrast, provide a theory of how societies should deal with reasonable disagreements about morality. They defend the public justification principle: state action is permissible only if it can be justified to all reasonable people. In this article, we bring these two approaches together. Specifically, we investigate whether the moral uncertainty approach supports public reason liberalism: given our own moral uncertainty, should we favor public justification? We argue that while the moral uncertainty approach cannot vindicate an exceptionless public justification principle, it gives us reason to adopt public justification as a pro tanto institutional commitment. Furthermore, it provides new answers to some intramural debates among public reason liberals and new responses to some common objections.

Introduction

Moral disagreement pervades our lives. We disagree about the rightness or wrongness of actions, the goodness or badness of outcomes, and the justice or injustice of institutions. These disagreements often seem quite reasonable – and equally intractable. Moral reasoning is hard, requiring us to navigate complex concepts and their intricate and often surprising implications. We come to this task with different life experiences, educations, and social networks, and so with different biases, priors, and evidence bases. And even when we agree about the considerations at issue in some case, we often disagree about their weights. Moral thinking, in other words, is subject to the “burdens of judgment” ((Rawls 2005, 55–57); compare (MacAskill, Bykvist, and Ord 2020, 11–14)). And it is a predictable consequence of these burdens that intelligent people reasoning in good faith will come to different conclusions about morality. 

Given the many plausible moral views available to us, and their many capable and eager champions, it is difficult to know how to proceed. We must reckon both with the fact of our own uncertainty about morality, and with the fact that others will inevitably come to different conclusions than we do. These two facts, though closely related, have spawned two very different research programs in contemporary analytic philosophy: public reason liberalism in political philosophy and the moral uncertainty approach in ethics. Public reason liberals ask what laws are justified among individuals who reasonably disagree about morality. They argue that we should take all reasonable positions into account – holding that a law is justified not when the moral view we find most plausible says it is, but when the law can be justified to all reasonable people. Moral uncertainty theorists are concerned with what we morally ought to do in the face of our own uncertainty about morality. They argue that we should take all plausible moral positions into account – holding that what we morally ought to do depends not only on the moral theory we find most plausible, but also on the verdicts of all other moral theories in which we place some positive credence. 
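(For readers who want the formal gloss: the standard development of the moral uncertainty approach, e.g. in MacAskill, Bykvist, and Ord 2020, is often put in terms of maximizing expected choiceworthiness. As a rough sketch – and not necessarily the exact decision rule the paper adopts – an agent with credences $c(T_i)$ in moral theories $T_1, \ldots, T_n$ evaluates an option $A$ by

$$EC(A) = \sum_{i=1}^{n} c(T_i)\, CW_{T_i}(A),$$

where $CW_{T_i}(A)$ is how choiceworthy theory $T_i$ deems $A$, and then chooses an option with maximal expected choiceworthiness.)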

Our goal in this article is to bring these research programs into contact. To frame our discussion, we investigate the hypothesis that the moral uncertainty approach lends support to public reason liberalism. Our tentative conclusion is that while the moral uncertainty approach cannot vindicate the stringent principle that all laws be publicly justified, it nevertheless provides several reasons to take public justification seriously. Specifically, from the perspective of moral uncertainty, a good case can be made for treating public justification as a weighty pro tanto institutional commitment – albeit one that can be overridden when the moral stakes are high.

Along the way, we also highlight some attractive features of our novel defense of public justification. For example, critics often argue that existing defenses of the public justification principle fail to cohere with the principle itself, because they assume controversial first-order views about morality or justice, either explicitly or in the way they narrowly delineate the class of “reasonable” people. The moral uncertainty approach sidesteps this issue, because it permits uncertainty about morality and justice all the way down and relies on a thin and independently motivated notion of reasonableness. Moreover, the moral uncertainty approach offers a fresh perspective from which to resolve some contested intramural debates among public reason liberals, not only about when to count someone as reasonable, but also about what it takes to justify a law to a reasonable person, and about the role of “shared reasons” in public justification. 

We proceed as follows. In section 2, we outline public reason liberalism and the moral uncertainty approach and introduce our hypothesis. In sections 3, 4 and 5, we discuss arguments in support of this hypothesis. We comment on intramural debates on public justification in section 6 and conclude in section 7.

Read the rest of the paper


Comments (3)



Would you be able to provide a plainer-language summary of the paper's conclusions or arguments? I think I'm interested in the topics discussed in the paper, but it's unclear to me what the arguments actually are, so I'm inclined to disengage.

Take this sentence, which seems important: 

“We argue that while the moral uncertainty approach cannot vindicate an exceptionless public justification principle, it gives us reason to adopt public justification as a pro tanto institutional commitment.”

I do not understand this and so I do not see how this is a valuable addition to the critical topic of moral uncertainty.

I know this doesn't solve the actual problem you're getting at, but here's a translation of that sentence from philosophese to English. "Pro tanto" essentially means "all else equal": a "pro tanto" consideration is a consideration, but not necessarily an overriding one. "Public justification" just means justifying policy choices with reasons that would/could be persuasive to the public/to the people they will affect. So the sentence as a whole means something like "While moral uncertainty doesn't mean that governments (and other institutions) should always justify their decisions to the people, it does mean they should do so when they can."

Oops, one correction: "public justification" doesn't mean "justification to the people a policy will affect", it means "justification to all reasonable people"; "reasonable people" is roughly everyone except Nazis and others with similarly extreme views.
