This is a linkpost for https://philpapers.org/rec/PODNUA

Podgorski, Abelard (2020). Normative Uncertainty and the Dependence Problem. Mind 129 (513):43-70.

Abstract:

In this paper, I enter the debate between those who hold that our normative uncertainty matters for what we ought to do, and those who hold that only our descriptive uncertainty matters. I argue that existing views in both camps have unacceptable implications in cases where our descriptive beliefs depend on our normative beliefs. I go on to propose a fix which is available only to those who hold that normative uncertainty matters, ultimately leaving the challenge as a threat to recent skepticism about such views.

I was unaware there was skepticism about taking normative uncertainty into account, specifically the view that only descriptive uncertainty, not normative uncertainty, matters for what it is best to do.

A key quote from the introduction that motivates the paper and explains this issue more:

A number of theorists (Lockhart 2000, Ross 2006, Sepielli 2009, MacAskill and Ord 2018, Tarsney 2018) have tried to provide accounts of decision-making under normative uncertainty. But there has been a backlash against this entire project from a contingent of philosophers who argue that only your descriptive uncertainty matters for what you ought to do (Weatherson 2014, Harman 2015, Hedden 2016).
In this paper, I draw attention to a puzzling class of cases which the controversy over normative uncertainty has neglected – cases in which our credences about descriptive facts depend on our credences about normative facts. I will show that most existing views about decision making, both those that take normative uncertainty into account and those that do not, give unacceptable recommendations in these cases. But the problem, I’ll go on to argue, is much worse for views that don’t take normative uncertainty into account at all. To avoid the counterexamples, views must be sensitive not only to both kinds of uncertainty, but also to the relationship between them. As I’ll show, there is a relatively painless strategy for incorporating this kind of sensitivity into existing views that respect normative uncertainty, but no similarly easy solution in reach for those who claim that only descriptive uncertainty matters. Ultimately, then, the problem is ammunition against the recent scepticism of theorizing about normative uncertainty. Indeed, it raises doubts that there are any interesting norms that are sensitive to merely descriptive uncertainty.

From there things get fairly technical, and I don't think I can offer a good summary, but the paper concludes as follows:

But I have tried to give reason to hold out hope that a defence is possible. Decision-making under normative uncertainty is a research program that is still in its infancy, and it would be a mistake to give up on it too soon, if there seem to be powerful reasons in its favour. At the same time, the argument suggests a note of caution to those developing an account of normative uncertainty: many principles attractive at first blush will fall apart if we do not attend to the ways our normative and descriptive beliefs are integrated.

This gives the impression that ideas about normative uncertainty are starting to mature and gain attention among a wider philosophical audience.


Comments

Hm. Do you think it would be useful for me to write a short summary of the arguments against taking normative uncertainty into account and post it to the EA forum? (Wrote a term paper last semester arguing against Weatherson, which of course involved reading a chunk of that literature.)

I'd be very excited about this! I really appreciate it when people take research effort they've already put in and make it accessible to others (a la Effective Thesis or my own thesis).

Yeah, sounds interesting!

Thanks for summarizing this! I really like seeing people write up literature from EA-adjacent academic work that didn't happen within the community; it's cool to see how EA ideas get picked up and interpreted/criticized by others (see here for more examples).