Practical ethics aims to offer advice to decision-makers embedded in the real world. In order to make that advice practical, it typically takes empirical uncertainty into account. For example, we don’t currently know exactly how much the earth’s temperature will rise if we continue to emit CO2 at the rate we have been emitting it. The temperature rise might be small, in which case the consequences would not be dire. Or the temperature rise might be very great, in which case the consequences could be catastrophic. How far we ought to mitigate our CO2 emissions depends crucially on this factual question. But it’s of course not true that we are unable to offer any practical advice in the absence of an answer to it. It’s just that our advice will concern what one ought to do in light of uncertainty about the facts.

But if practical ethics should take empirical uncertainty into account, surely it should take moral uncertainty into account as well. In many situations, we don’t know all the moral facts. I think it is fair to say, for example, that we don’t currently know exactly how to weigh the interests of future generations against the interests of current generations. But this issue is just as relevant to the question of how one ought to act in response to climate change as is the issue of expected temperature rise. If the ethics of climate change offers advice about how best to act given empirical uncertainty concerning global temperature rise, it should also offer advice about how best to act given uncertainty concerning the value of future generations. And cases like this aren’t rare. Given the widespread disagreement within ethics, and given the difficulty of the subject matter, we would be overconfident to claim 100% certainty in our favoured moral view, especially on the difficult issues that ethicists often discuss.

So we need an account of how one ought to act under moral uncertainty. The standard account of decision-making under uncertainty is that you ought to maximise expected value: consider every hypothesis in which you have some degree of belief, work out how likely each hypothesis is, work out how much value would be at stake if it were true, and then trade off the probability that a hypothesis is true against how much would be at stake if it were. One implication of maximising expected value is that sometimes one should refrain from a course of action not because it will probably be a bad thing to do, but because there is a reasonable chance that it will be a bad thing to do, and, if it is a bad thing to do, then it’s really bad. So, for example, you ought not to speed round blind corners: the reason isn’t that you will probably run someone over if you do so. Rather, the reason is that there’s some chance that you will, and it would be seriously bad if you did.
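To see the structure of this reasoning explicitly, here is a minimal sketch in Python of an expected-value calculation applied to the blind-corner example. All of the probabilities and values are invented purely for illustration; nothing hinges on these particular numbers.

```python
# A minimal sketch of expected-value reasoning (all numbers invented for illustration).
# Each option is a list of (probability, value-if-that-hypothesis-is-true) pairs.

def expected_value(hypotheses):
    """Probability-weighted sum of value across the hypotheses."""
    return sum(p * v for p, v in hypotheses)

# Speeding round a blind corner: probably harmless, but with a small chance
# of a catastrophic outcome.
speed_round_corner = [(0.99, 1.0),      # 99%: you save a little time
                      (0.01, -1000.0)]  # 1%: you hit someone
drive_slowly = [(1.0, 0.0)]             # forgo the small time saving

print(expected_value(speed_round_corner))  # roughly -9.0
print(expected_value(drive_slowly))        # 0.0

# Speeding is probably fine, but its expected value is much worse than driving
# slowly, which is why expected-value reasoning says not to speed round blind corners.
```

The same calculation applies under moral uncertainty, except that the hypotheses are moral views rather than empirical possibilities.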

With this on board, let’s think about the practical implications of maximising expected value under moral uncertainty. It seems that the implications are pretty clear in a number of cases. Here are a few.

1. One might think it more likely than not that it’s not wrong to kill animals for food. But one shouldn’t be certain that it’s not wrong. And, if it is wrong, then it’s seriously wrong – in the same ballpark as murder. So, in killing an animal, one risks performing a major moral wrong, without any correspondingly great potential moral upside. This would be morally reckless. So one ought not to kill animals for food. (A toy numerical sketch of this reasoning appears after the list below.)

2. One might think it more likely than not that it’s not wrong to have an abortion, for reasons of convenience. But one shouldn’t be certain that it’s not wrong. And, if it is wrong, then it’s seriously wrong – in the same ballpark as murder. So, in having an abortion for convenience, one risks performing a major moral wrong, without any correspondingly great potential moral upside. This would be morally reckless. So one ought not to have an abortion for reasons of convenience.

3. One might think it more likely than not that it’s not wrong to spend money on luxuries, rather than giving it to fight extreme poverty. But one shouldn’t be certain that it’s not wrong. And, if it is wrong, then it’s seriously wrong – in the same ballpark as walking past a child drowning in a shallow pond. So, in spending money on luxuries, one risks performing a major moral wrong, without any correspondingly great potential moral upside. This would be morally reckless. So one ought not to spend money on luxuries rather than giving that money to fight poverty.
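To make the structure of case 1 concrete (as flagged in the list above), here is a toy numerical version. The 70% credence and the stipulated values are invented purely for illustration; the argument only requires that the potential moral downside be vastly larger than the potential upside.

```python
# A toy numerical version of case 1 (all numbers are stipulated for illustration).
p_not_wrong = 0.7    # credence that killing an animal for food is permissible
p_wrong = 0.3        # credence that it is seriously wrong

value_if_not_wrong = 1.0     # the modest benefit of the meal you prefer
value_if_wrong = -100.0      # a grave wrong, in the same ballpark as murder

ev_kill = p_not_wrong * value_if_not_wrong + p_wrong * value_if_wrong
ev_abstain = 0.0             # eating something else is acceptable on either view

print(ev_kill, ev_abstain)   # roughly -29.3 versus 0.0

# Killing the animal is probably permissible, yet its expected moral value is far
# worse than abstaining. That is the sense in which the choice is morally reckless.
```

Cases 2 and 3 have exactly the same structure; only the hypotheses and the stipulated stakes change.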

Comments

A number of links in this article are broken – would it be possible to fix them? Specifically, the links from 1 and 2, "in the same ballpark as murder".

The second is here (paywalled), and I am not sure what the first was. If you find it, I can use moderator privileges to edit the post and fix the links.

For the link in 1, "the same ballpark as murder", the Internet Archive has it saved here.
The link in 3, "in the same ballpark as walking past a child drowning in a shallow pond", is also dead, but it is in the Internet Archive here.

Edit: the link in 2 is also archived here

Thanks! After so long I finally understood moral uncertainty :P
