We read the book “Moral Uncertainty” in an EA reading group. We think the concept of moral uncertainty is very important and deserves more attention. However, we had problems with parts of the book that we want to highlight below. Especially since we found other books by the authors, such as “The Precipice” or “Doing Good Better”, excellent, we feel this book hasn’t reached its full potential yet.

We write this criticism in case the authors plan to write a second edition of the book, and because we are interested in whether other people share our criticism. If we misunderstood something or you find our assessment unjustified, we are happy to engage in a discussion.

We reached out to the authors who encouraged us to write a forum post.

High-level feedback

Firstly, we found the book slightly too vague about its goals and limitations. At the beginning, it is framed around simple everyday decisions and thus evokes the expectation of practical guidelines for decision-making under moral uncertainty. Later on, it reads more like an explanation of a theoretical framework, without providing a guide to improved decision-making.

If, for example, an eager reader wanted to apply the theoretical insights of moral uncertainty to their everyday life, they would struggle in multiple ways. First of all, they would have a hard time determining their credences in different moral theories, e.g., to what extent they are utilitarian rather than committed to other moral frameworks. While this is not an easy question, we felt the book should either discuss heuristics for arriving at one’s credences or reference other work that explores this question in more detail. But even if we assumed that people knew their credences, we think they are left with an unsatisfying conclusion.

Chapter 8 first tells the reader what a naive application of moral uncertainty could look like, only to then explain that the real world is more complicated and that the naive version can lead to wrong conclusions. The reader is then left with neither a solution to the problem nor an acknowledgment of its complexity. We obviously don’t expect the book to solve all problems of moral uncertainty in one go, but we think it should either be much clearer about its limitations or provide partial answers where they exist.

Chapter-by-chapter feedback

Introduction: 

We have two pieces of criticism for the introduction. The first is about Table 0.1. This is arguably one of the most important pieces of content in the book, as it sets the scene and introduces many important concepts. However, it leaves multiple questions open. Why, for example, is the bottom-left corner of the table not applicable? We don’t think this is obvious enough to go unmentioned. Why did the authors choose not to consider all entries marked with an X? Are they not relevant, too complicated, out of this project’s scope, or something else? A justification would have helped us.
Furthermore, while the rest of the book has many clear and good examples, the text surrounding Table 0.1 has nearly none. We think it would have been helpful to give examples of the different categories of moral theories they investigate, for example, “Can have a pre-order, e.g. <insert moral theory>”.

The second piece of feedback concerns the distinction between first- and second-order uncertainty. Moral uncertainty is second-order uncertainty, i.e. uncertainty between different moral frameworks. First-order uncertainty lies within a single framework, such as utilitarianism. The leading example of the book, i.e. whether Alice should donate her 20€ or buy an expensive dinner, involves both first- and second-order uncertainty. Even if you were convinced that utilitarianism is 100% true, you could still argue for buying the expensive dinner because, e.g., it makes you work harder on the world’s most pressing problems and is thus net positive. We would have appreciated a discussion of how the two orders of uncertainty interact or can be disentangled.
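
To make this concrete, here is a toy sketch (our own invented numbers and names, not from the book) of how the two orders of uncertainty get entangled when computing expected choiceworthiness: the empirical credence about the dinner’s effect sits inside the utilitarian evaluation, while the credences in the theories sit outside it.

```python
# Toy sketch of first-order (empirical) and second-order (moral)
# uncertainty in Alice's decision. All numbers are invented and assume
# full intertheoretic comparability.

# Credences in moral theories (second-order uncertainty)
theories = {"utilitarianism": 0.6, "common-sense morality": 0.4}

# Within utilitarianism, Alice is empirically uncertain whether the
# dinner actually boosts her future impact (first-order uncertainty).
p_dinner_boosts_work = 0.3

def choiceworthiness(option, theory):
    """Choiceworthiness of an option under one theory (made-up units)."""
    if theory == "utilitarianism":
        if option == "donate":
            return 10
        # dinner: expectation over the empirical uncertainty
        return p_dinner_boosts_work * 15 + (1 - p_dinner_boosts_work) * 1
    else:  # common-sense morality
        return 5 if option == "donate" else 4

def expected_choiceworthiness(option):
    return sum(c * choiceworthiness(option, t) for t, c in theories.items())

for option in ("donate", "dinner"):
    print(option, expected_choiceworthiness(option))
# donate 8.0, dinner ~4.72: changing p_dinner_boosts_work alone can flip
# the verdict even with fixed credences in the theories, which is why
# the two orders of uncertainty are hard to disentangle.
```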

Fanaticism (Chapter 5):

When someone assigns an infinitely large value to one outcome, any kind of expected value calculation -- and thus maximizing expected choiceworthiness (MEC) under moral uncertainty -- breaks. The authors address this issue of ‘fanaticism’ with two arguments: a) a person could also assign an infinitely large value to the opposite outcome, thus breaking their own model, and b) this issue arises under empirical uncertainty as well, and thus people should be unable to make claims with infinitely large values anyway.
While we agree that these arguments are probably sound, we feel they only make sense from a point of view that is already sympathetic to utilitarianism and moral uncertainty. If someone truly believed that infinite values should exist in moral theories, the authors’ answer would probably not convince them. It essentially says: “you can have moral uncertainty in your moral theory, but first you need to change some fundamental things about it”.
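
To see why any nonzero credence suffices to break the calculation, here is a minimal sketch; the theory and option names and all numbers are ours, invented purely for illustration.

```python
import math

# A 0.1% credence in a theory that assigns infinite choiceworthiness
# to one option. All numbers are invented.
credences = {"modest theory": 0.999, "fanatical theory": 0.001}
choiceworthiness = {
    "comply": {"modest theory": -1.0, "fanatical theory": math.inf},
    "refuse": {"modest theory": 1.0, "fanatical theory": 0.0},
}

def expected_choiceworthiness(option):
    return sum(credences[t] * choiceworthiness[option][t] for t in credences)

print(expected_choiceworthiness("comply"))  # inf
print(expected_choiceworthiness("refuse"))  # 0.999
# The infinite entry decides the comparison no matter how small the
# credence attached to it, so MEC stops weighing any finite
# considerations at all; astronomically large finite values behave
# almost identically.
```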

Non-fully-comparable moral theories (Chapters 3, 4, 5, and 6):

This criticism extends the previous one. It is a bit vague, and we don’t have a good solution either. It feels like many of the ways to introduce moral uncertainty to moral theories that are not fully comparable, or that allow for infinite values, amount to transforming and tweaking the theory until it supports some concept of moral uncertainty. Variance voting, for example, is an interesting mathematical way to weigh different kinds of theories and makes sense from a practitioner’s perspective, but we could very well imagine that genuine supporters of non-utilitarian moral frameworks would not agree that these mathematical operations should be allowed.
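
For readers who haven’t encountered variance voting, here is a minimal sketch of the kind of rescaling involved, as we understand it from Chapter 4: each theory’s choiceworthiness scores are normalized to zero mean and unit variance across the option set before taking the credence-weighted sum. All names and numbers are invented.

```python
import statistics

options = ["A", "B", "C"]
credences = {"T1": 0.7, "T2": 0.3}
scores = {
    "T1": {"A": 0.0, "B": 1.0, "C": 2.0},       # modest stakes
    "T2": {"A": 0.0, "B": 100.0, "C": -100.0},  # huge stakes
}

def normalize(theory_scores):
    """Rescale one theory's scores to zero mean and unit variance."""
    vals = list(theory_scores.values())
    mean = statistics.mean(vals)
    sd = statistics.pstdev(vals)
    return {o: (v - mean) / sd for o, v in theory_scores.items()}

normalized = {t: normalize(s) for t, s in scores.items()}

def expected_choiceworthiness(option):
    return sum(credences[t] * normalized[t][option] for t in credences)

for o in options:
    print(o, round(expected_choiceworthiness(o), 3))
# After normalization, T2's huge raw numbers no longer dominate by
# default. This is exactly the rescaling step that a committed
# supporter of T2 might reject as an illegitimate change to the theory.
```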

From this criticism follow two possible conclusions:

a) Moral uncertainty can only cleanly be applied to fully comparable moral theories and its application to non-fully-comparable theories will always feel a bit ‘hacky’. 

b) The attempt to apply moral uncertainty to non-fully-comparable moral theories just unveils deeper flaws and inconsistencies in these theories, e.g. infinite values, that should be addressed but are independent of moral uncertainty.

Is Chapter 7 necessary? (Chapter 7): 

Some of us were not sure whether this chapter adds much value. In our opinion, it can be summarized as “non-cognitivist theories have a hard time including a notion of moral uncertainty”. While this thesis is elaborated in great detail, we are not sure how it relates to the rest of the book. Does it mean non-cognitivist theories are worse because they don’t have an account of moral uncertainty? Is it a limitation of moral uncertainty that it can’t be integrated into non-cognitivist theories? Is any conclusion of the chapter relevant to the goals of the book, i.e. laying out the theory and some practical applications of moral uncertainty? We were left a bit confused.

Does moral uncertainty matter in practice? (Chapter 8):

On a high level, the authors argue that moral uncertainty should influence moral debates, e.g. on veganism or abortion, and we agree. They then lay out a naive interpretation of moral uncertainty for multiple examples, only to conclude that these are too simplistic because of interaction effects and intertheoretic comparisons. So the conclusion of the chapter doesn’t support what they set out to show. If someone is skeptical that moral uncertainty should matter for practical ethics, they won’t be convinced after this chapter. If somebody was already convinced before the chapter, they haven’t learned how to apply it to their life. We are not sure how to resolve this, but we found it unsatisfying.

Unit-comparability and fanaticism (Chapters 0, 4, and 8):

Chapter 0 (page 8) says: “Because we don’t discuss conditions of level-comparability in this book when we refer to intertheoretic comparability we are referring in every instance to unit-comparability”. 

In chapter 8 (page 182) it says: “In particular, we will assume that all theories in which the decision-maker has credence are complete, interval-scale measurable and intertheoretically comparable and that the decision-maker doesn’t have credences that are sufficiently small in theories that are sufficiently high stakes that ‘fanaticism’ becomes an issue”.

Our question is whether the intertheoretic comparability of Chapter 8 still refers to the unit-comparability of the introduction. If so, how can we assume that ‘fanaticism’ doesn’t become an issue? Even with variance voting (Chapter 4), as long as we assume unit-comparability, the ratio between moral theories doesn’t change, and fanaticism remains a problem, right?
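
To spell out the worry with a toy example (all numbers invented by us): if the unit ratios between theories are fixed by assumption, a small credence in a high-stakes theory still dominates the expected choiceworthiness calculation, and rescaling it away, e.g. via variance voting, would mean abandoning the assumed unit ratios.

```python
# Fixed unit-comparability: choiceworthiness is already expressed on a
# shared unit scale, so no renormalization is allowed. Numbers invented.
credences = {"T_low_stakes": 0.99, "T_high_stakes": 0.01}
scores = {
    "act":     {"T_low_stakes": -1.0, "T_high_stakes": 1_000_000.0},
    "abstain": {"T_low_stakes":  1.0, "T_high_stakes": 0.0},
}

def expected_choiceworthiness(option):
    return sum(credences[t] * scores[option][t] for t in credences)

print(expected_choiceworthiness("act"))      # 9999.01
print(expected_choiceworthiness("abstain"))  # 0.99
# The 1% theory settles the decision. Rescaling the high-stakes theory
# (as variance voting would) changes this, but then the fixed unit
# ratios of unit-comparability are no longer being respected.
```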

We hope that our feedback is helpful, or that possible misunderstandings on our side can be clarified.

If there are resources on how to arrive at one’s credences in moral theories, we would appreciate suggestions.

Looking forward to a constructive discussion.

Comments

Thanks for these notes! I found the chapter on Fanaticism notable as well. The authors write:

A better response is simply to note that this problem arises under empirical uncertainty as well as under moral uncertainty. One should not give 0 credence to the idea that an infinitely good heaven exists, which one can enter only if one goes to church; or that it will be possible in the future through science to produce infinitely or astronomically good outcomes. This is a tricky issue within decision theory and, in our view, no wholly satisfactory solution has been provided. But it is not a problem that is unique to moral uncertainty. And we believe whatever is the best solution to the fanaticism problem under empirical uncertainty is likely to be the best solution to the fanaticism problem under moral uncertainty. This means that this issue is not a distinctive problem for moral uncertainty.

I agree with their meta-argument, but it is still a bit worrying. Even if you reduce the unsolvable problems of your field to unsolvable problems in another field, I'm still left feeling concerned that we're missing something important.

In the conclusion, the authors call for more work on really fundamental questions, noting:

But it’s plausible that the most important problem really lies on the meta-level: that the greatest priority for humanity, now, is to work out what matters most, in order to be able to truly know what are the most important problems we face.

Moral atrocities such as slavery, the subjection of women, the persecution of non-heterosexuals, and the Holocaust were, of course, driven in part by the self-interest of those who were in power. But they were also enabled and strengthened by the common-sense moral views of society at the time about what groups were worthy of moral concern.

Given the importance of figuring out what morality requires of us, the amount of investment by society into this question is astonishingly small. The world currently has an annual purchasing-power-adjusted gross product of about $127 trillion. Of that amount, a vanishingly small fraction—probably less than 0.05%—goes to directly addressing the question: What ought we to do?

I do wonder, given the historical examples they cite, whether purely philosophical progress was the limiting factor. Mary Wollstonecraft and Jeremy Bentham made compelling arguments for women's rights in the 1700s, but it took another couple hundred years for progress to occur in the legal and socioeconomic spheres.

Maybe it's a long march, and progress simply takes hundreds of years. The more pessimistic argument is that moral progress arises as a function of economic and technological progress, and can't occur in isolation. We didn't give up slaves until it was economically convenient to do so, and likely won't give up meat until we have cost- and flavor-competitive alternatives.

It's tempting to wash away our past atrocities under the guise of ignorance, but I'm worried humanity just knowingly does the wrong thing.

The more pessimistic argument is that moral progress arises as a function of economic and technological progress, and can't occur in isolation. We didn't give up slaves until it was economically convenient to do so, and likely won't give up meat until we have cost- and flavor-competitive alternatives.

FWIW this assessment seems true to me, at least for eating non-human animals; I don't know enough about the economic drivers behind slavery. (If one is interested, there's a report by the Sentience Institute on the topic, titled "Social Movement Lessons From the British Antislavery Movement: Focused on Applications to the Movement Against Animal Farming".)

It's tempting to wash away our past atrocities under the guise of ignorance, but I'm worried humanity just knowingly does the wrong thing.

I would put it something like "as a rule, we do what is most convenient to us".

And I would also like to add that even if one causes terrible suffering "knowingly", there's still the irreducible ignorance of being disconnected from the first-hand experience of that suffering. I.e., yes, we can say that one "knows" that one is causing extreme suffering, yet if one knew what this suffering is really like (i.e. if one experienced it oneself), one wouldn't do it. (Come to think of it, this would also reduce one's moral uncertainty, by the way.)

That conclusion doesn't necessarily have to be as pessimistic as you seem to imply ("we do what is most convenient to us"). An alternative hypothesis is that people to some extent do want to do the right thing, and are willing to make sacrifices for it - but not large sacrifices. So when the bar is lowered, we tend to act more on those altruistic preferences. Cf. this recent paper:

[Subjective well-being] mediates the relationship between two objective measures of well-being (wealth and health) and altruism...results indicate that altruism increases when resources and cultural values provide objective and subjective means for pursuing personally meaningful goals.
