(Subtitle: “And ethics, and epistemology, and…”. Cross-posted from my Substack.)
We want to make decisions for good reasons. But I worry some common approaches to decision theory stray from this purpose. They start with a bottom-line verdict, “I should choose this action”, then use this verdict to justify claims about how to make decisions. While I feel sympathetic to this move at times, I ultimately think it’s backwards. And embedding such a move within a more nuanced methodology, like reflective equilibrium, doesn’t make it less backwards.
To see what I mean, take this exaggerated example.
Imagine you ask your friend what they did over the weekend, and they say they went to a casino to play a dice game. You raise an eyebrow. Aren’t they really smart? You ask why they went to a casino.
“It just seemed super intuitive that I should play this game,” they say.
“Oh, huh.” Again, your friend is really smart, so you charitably ask, “So you’ve played it a lot before and know it’s profitable?”
“Nah, it was a new game.”
You blink. “Um. I guess this is your trollish way of saying: It seemed super intuitive that you should put equal probability on each possible outcome? So, somehow, the game was positive EV?”
Now they blink. “What? No, the rules were too complicated for me to figure out the odds. It just seemed super intuitive that I should play the game. And that’s why it makes sense to put higher probability on the outcomes where I win, and maximize EV given those probabilities.”
This is, as I say, an exaggeration. Decision theory is supposed to help us with problems much more confusing than games of chance. Nonetheless, in those confusing problems, we might have bottom-line intuitions as powerful as “Obviously I shouldn’t pay in Pascal’s mugging”, or “Obviously I should cooperate in a Prisoner’s Dilemma with my copy”. These intuitions can lead us to make the same kind of mistake as the friend above.
In brief, I’ll argue:[1]
- A bottom-line verdict about what to choose, whether in a particular case or in general, should be justified by reasons: considerations about the problem that make the chosen option worth choosing. For example, the friend’s choice whether to play the game should be justified by its expected value.
- A brute intuition in favor of such a verdict, a “verdict-level intuition”, isn’t a reason. Roughly, this is because a verdict that I should choose A is a claim that I have reasons for A — so those reasons themselves provide the justification, if any.
- Instead of taking our verdict-level intuitions as a direct source of justification, we should do decision theory by:
- using verdict-level intuitions only to help us discover possible reasons, and
- reflecting on how intuitively compelling these reasons are on their own merits.
- The main counterargument: Our verdict-level intuitions might give us evidence of good reasons that we can’t explicitly articulate. But, in contexts with poor feedback on how well these intuitions track good reasons, this evidence seems weak. And these are the contexts where verdict-level intuitions would do work beyond my methodology in (3).
These are claims about our fundamental methodology in decision theory, and in other kinds of normativity.[2] So, our views on these claims will shape our answers to many action-relevant questions. Should we take Pascalian bets? Should we act as if we control the decisions of agents in other universes? Should consequentialists only act on the near-term consequences of their actions?
Before we get our hands dirty, a few quick points of background.
Background: Intuitions as predictors vs. intuitions as normative expressions
First, it will be important to disentangle two ways we might appeal to “intuitions” in decision theory (both of which can be valid depending on the intuition’s content):
- Intuitions as predictors: We might appeal to an intuition because we think it’s reliably correlated with something we want to predict. E.g., “My very strong intuition that I should do X is evidence that, after thinking more, I’d find good reasons for X.”
- Intuitions as normative expressions: We might report an intuition simply to express how normatively forceful or plausible we find something. E.g., “It just seems clear that I shouldn’t follow a dominated strategy.” Or, “It seems clear that when I’m aiming for reflective equilibrium, ‘causal decision theory is correct’ should be one of my starting points.”
(If this isn’t clear, see the short post “When do intuitions need to be reliable?”.)
Unless I say otherwise, I’m talking about intuitions as normative expressions. But intuitions as predictors will be important later. To spoil things, I’ll argue that our intuitions about bottom-line verdicts don’t do much work, either as normative expressions or predictors. However, this isn’t an argument against intuitions in general. It’s specific to intuitions that, by their nature, seem to rely on other intuitions for justification. I’ll unpack what that means soon.
Background: “Winning isn’t enough”
Second, in this post I’ll assume the following framing of the purpose of decision theory (summarizing “Winning isn’t enough”, by Jesse Clifton and myself):
You can’t evaluate which beliefs and decision theory to endorse just by asking “which ones perform the best?” Because the whole question is what it means to systematically perform better, under uncertainty.
Now, you could agree with this, yet still think that “what it means to systematically perform better” includes taking pre-theoretically intuitive actions. For instance, SMK endorses this framing,[3] but takes it as an axiom that a good decision theory should recommend cooperation in the Twin Prisoner’s Dilemma. What’s wrong with that?
Against verdict-level curve-fitting (and reflective equilibrium)
Let’s say I reflect deeply about a bunch of decision problems, trying my hardest to correct for biases as I do so. Afterwards, for each problem or set of problems, I have a strong intuition favoring some bottom-line verdict about what to choose — so strong, perhaps, that denying it feels downright crazy. Call this my verdict-level intuition about the problem(s).
When I do decision theory, is my goal to find the best model of my verdict-level intuitions? That is, do I start with these problems and their “solutions”, then look for an algorithm that returns these solutions (and generalizes well to new problems)?[4]
I don’t think so. I don’t take my verdict-level intuitions as normative ground truth to fit a curve to. Not even if I admit these intuitions are fallible, or curve-fit other intuitions along with verdict-level ones.
Instead, I think my goal in decision theory is to weigh up the reasons for the acts I’m choosing between. Reasons are considerations about the decision situation that make an act worth choosing. But a verdict-level intuition isn’t a reason. It doesn’t give direct justification — over and above the justification that other reasons provide — for choosing the act it favors. Why? Because a verdict that I should choose A is, by its nature, nothing more than a claim that I have decisive reasons for A. So if the verdict-level intuition can justify anything, that justification has to come from those reasons. But then it’s not direct justification.
By contrast, nothing inherent in the concept of a “reason” implies that deeper reasons must exist. So reasons can provide direct justification, if they’re intrinsically compelling. For example, take the reason “If A would certainly cause better consequences than B, you should prefer A”. Seems really intuitive! And its intuitiveness doesn’t seem to clearly depend on any further intuitions.
Under my preferred methodology, then, verdict-level intuitions are still practically important. Yet their role is only to help me discover candidate reasons. For example, reflecting on Pascal’s mugging points me to the possibility that I shouldn’t maximize the expectation of an unbounded utility function. From there, I should ask: Are there independently intuitive reasons not to endorse “I should maximize unbounded expected utility”? More generally, I figure out which reasons are independently intuitive by reflecting on questions like (more in “So what should we do?”):
- What do I believe about the possible consequences of this act, or the duties it might violate, or the kind of character it implies?
- Which ways of making tradeoffs between uncertain consequences seem reasonable in their own right?
- What notion of “expected consequences” has the most reasonable conceptual arguments behind it?
A couple key clarifications:
- I’m deliberately not calling reasons “principles”, for a couple (ahem) reasons. See footnote for details.[5] Briefly, what matters isn’t how formal or general a reason is, but that a reason identifies why some action is worth choosing.
- You might think, “Of course verdict-level intuitions in themselves aren’t reasons. That’s not the point. They’re still useful as reliable predictors of independently compelling reasons we haven’t discovered yet.” I’ll address that view later. For now, I’m only critiquing verdict-level intuitions as normative expressions.
My concerns apply just as well to the widely accepted methodology of reflective equilibrium. I see two interpretations of this methodology. On one view (the dominant one, as I understand it[6]), both verdict-level intuitions and independently plausible principles defeasibly and directly justify each other. This approach at least doesn’t treat verdict-level intuitions as bedrock, or as infallible. But this doesn’t avoid the core problem. We’re still saying these intuitions provide direct justification, so my critique above applies, even on a coherentist account of justification.
On another view of reflective equilibrium, we only go back and forth between verdict-level intuitions and principles as a practical heuristic, to discover/predict reasons. That is, the verdict-level intuitions aren’t taken as direct justification, but as a source of evidence. I have no problem with this approach as far as it goes (which, I’ll argue, isn’t very far). I just think it would be clearer not to use the term “reflective equilibrium” for this idea, since it can be conflated with the first interpretation.
In the rest of the post, I’ll give examples of verdict-level curve-fitting, and of the alternative approach, then respond to a few objections. The examples are mostly independent, so feel free to skip any that don’t seem relevant.
Example: Pascal’s mugging
Consider this version of Pascal’s mugging, designed to leave out various messy confounding factors like “The mugger is just pulling a probability out of their ass”. Someone on the street approaches you, gives you incontrovertible proof that everything they say is accurate, and says:
I can see you’re a bit of a do-gooder. I know that if you walk away, you’ll make a $5000 donation to the Against Malaria Foundation. (No need to have an existential crisis about free will or anything, since I’ll wipe your memory after this.) If you give me that $5000 instead, I’ll roll a trillion-sided die. And if I roll a seven, I’ll turn the whole accessible universe into a utopia! Deal?
I’ll bet your verdict-level intuition is to not pay. Verdict-level curve-fitting says, “Based on this intuition, it seems like you just know you shouldn’t pay. This suggests you shouldn’t make decisions in a way that implies paying, e.g., maximizing expected utility with an unbounded utility function.”
Let’s take the most charitable interpretation of this view. When I imagine paying the mugger, and feeling like this is a mistake, I’m not merely imagining feeling silly. Rather, it feels like I have a strong reason not to pay. From here, I have a couple options.
On one hand, I could say, “Here’s a way to explain my strong intuition against paying: I have a bounded utility function! No need to arbitrarily round tiny probabilities down to zero, give up expected utility maximization, or baldly insist on not paying.” But this doesn’t seem to be enough. I haven’t looked at the bounded utility function itself, to check if it seems like a strong reason. Instead, I’ve said that my bottom-line verdict about Pascal’s mugging justifies bounding my utility function. This seems backwards. Isn’t my utility function supposed to represent how much I value different outcomes? And aren’t these values supposed to be my reasons for my verdict?
On the other hand, I could examine the possible reason that a bounded utility function is meant to formalize, namely “Turning the universe into a utopia isn’t a trillion times better than saving a life”. And I could consider what makes utopias, or human lives, more or less valuable.[7] That is, I could assess this possible reason not to pay on its own merits.
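To make the contrast vivid, here is a minimal numerical sketch of how the two utility functions drive the two verdicts. Everything quantitative below — the utility assigned to the utopia, the bound, the normalization — is an invented placeholder, not anyone's considered estimate:

```python
# A minimal numerical sketch, not anyone's official model: the utility
# numbers below are made up purely to show how the two verdicts arise.

P_WIN = 1e-12                  # chance the trillion-sided die shows seven

def ev_pay(u_utopia: float) -> float:
    """EV of handing over the $5000: a tiny chance of the utopia."""
    return P_WIN * u_utopia

def ev_walk(u_donation: float) -> float:
    """EV of walking away: the donation happens for sure."""
    return u_donation

U_DONATION = 1.0               # normalize the donation's value to 1 unit

# Unbounded utility function: nothing stops the utopia from being worth
# arbitrarily many units, so paying eventually wins.
print(ev_pay(1e20) > ev_walk(U_DONATION))                # True -> pay

# Bounded utility function: cap value at (say) 1e6 units. Then no prize
# is good enough for a 1e-12 chance of it to beat a sure 1 unit.
U_BOUND = 1e6
print(ev_pay(min(1e20, U_BOUND)) > ev_walk(U_DONATION))  # False -> walk
```

The sketch shows why the bound does all the work: the question is whether the bound itself reflects how I actually value outcomes, which is exactly what examining the reason on its own merits amounts to.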
Perhaps, in the end, I don’t find this way of valuing outcomes independently compelling, nor any other candidate reasons I’ve thought of. And suppose I also don’t expect my verdict-level intuition to reliably predict good reasons I’m missing. If so, all I’m left with is the brute sense that I shouldn’t pay. It might be a very strong sense. But by hypothesis, I’ve already ruled out the other roles this intuition might play. After that, I struggle to see how this intuition could have any force as a normative expression.[8]
Example: Smoking Lesion and Twin Prisoner’s Dilemma
(This example assumes background knowledge of the CDT vs. EDT vs. FDT (etc.) debate.)
Like most people, I have the strong verdict-level intuition that you should smoke in Smoking Lesion. What does this tell me?
Well, the reason underlying the intuition seems to be “Decision-making isn’t about ‘managing the news’, i.e., choosing acts in order to gain evidence of good outcomes”. This is a reason because it tells me what’s wrong with not-smoking: not-smoking doesn’t seem to lead to good outcomes per se, but (at best) merely to give evidence of them.
However, I also have a strong verdict-level intuition that I should cooperate in the Twin Prisoner’s Dilemma. The reason behind that intuition seems to be “It’s logically guaranteed that the other guy cooperates if and only if I do, even if I can’t physically cause him to cooperate”. But that also sounds like managing the news. So then I ask myself, can I make sense of how this “logical guarantee” can justify my choice, without appealing to news management? If I can’t do that, and the independent motivations for EDT are strong enough, perhaps I end up embracing news management after all.
Enter FDT, which says, “As a decision-maker, ‘you’ are all copies of your decision algorithm. So you actually can cause your copy to cooperate.” Okay … am I an algorithm? It seems like I’m just, well, a human. Maybe there are good independent arguments for the algorithmic ontology. But if I retrofit my ontology of myself as a decision-maker to my intuition that I should cooperate, as many FDT proponents seem to do,[9] I think that gets things backwards. My conception of what a “decision-maker” is should inform my verdicts about what to choose, not vice versa. Thus, I’m wary of endorsing FDT based on my verdict-level intuition about Twin PD.
Rather, I should look at the candidate reasons my verdict-level intuition is pointing at: a) “The relevant expected values are evidential rather than causal”, or b) “‘I’ as a decision-maker can in some sense cause outcomes without physically causing them”. I can then ask myself how much I endorse (a) and (b) in themselves. The same goes for Smoking Lesion and “Decision-making isn’t about managing the news”.
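As a concrete illustration of what (a) amounts to, here's a toy calculation with made-up Prisoner's Dilemma payoffs and an assumed perfect twin correlation (none of these numbers come from the sources above):

```python
# A toy spelling-out of candidate reason (a), with made-up payoffs:
# 3 = mutual cooperation, 1 = mutual defection, 5 = temptation, 0 = sucker.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def edt_ev(my_act: str, correlation: float = 1.0) -> float:
    """Evidential EV: my act is evidence about my twin's. For a logical
    twin, P(twin cooperates | I cooperate) is ~1."""
    p_twin_c = correlation if my_act == "C" else 1.0 - correlation
    return p_twin_c * PAYOFF[(my_act, "C")] + (1 - p_twin_c) * PAYOFF[(my_act, "D")]

def cdt_ev(my_act: str, p_twin_c: float) -> float:
    """Causal EV: the twin's act is causally independent of mine, so hold
    its probability fixed regardless of what I do."""
    return p_twin_c * PAYOFF[(my_act, "C")] + (1 - p_twin_c) * PAYOFF[(my_act, "D")]

print(edt_ev("C"), edt_ev("D"))   # 3.0 vs 1.0 -> EDT cooperates
for p in (0.0, 0.5, 1.0):         # CDT: defection dominates at every p
    assert cdt_ev("D", p) > cdt_ev("C", p)
```

The disagreement lives entirely in which expectation gets computed, not in the payoffs — which is why (a) and (b) are the things to evaluate, not the verdict.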
To its credit, my verdict-level intuition was quite useful for identifying these candidate reasons! Beyond that, though, what work is there left for this intuition to do?
Example: Cluelessness
If this acausal decision theory stuff is too speculative for you, let’s come back to Earth. Imagine you walk away from the mugger to go make your donation. Well, Mogensen (2021) argues that we’re clueless about whether donating to the Against Malaria Foundation (AMF) is better than donating to the Make-a-Wish Foundation (MAWF). The details of that argument are out of scope. Roughly, though, the idea is that the long-term consequences of these donations can’t be weighed up without making arbitrary calls.
Despite his argument, Mogensen writes:
I’m most inclined to think this is one of those cases where we’ve got a philosophical argument we don’t immediately know how to refute for a conclusion that we should nonetheless reject, and so we ought to infer that one of the premises must be false.
That is, he puts a lot of weight on his verdict-level intuition that donating to AMF is better. I share this intuition, even when “better” is meant in an impartial consequentialist sense! But what might be the reason behind it? As far as I can tell, the candidates are:
1. “The expected total consequences of AMF, across the whole cosmos, are better than those of MAWF.” (Perhaps supported by an appeal to heuristics.)
2. “The expected total of some subset of the consequences of AMF is better than those of MAWF. And, even though I’m clueless about the other consequences, this shouldn’t override my non-cluelessness about the subset. (As per ‘bracketing’.)”
Mogensen himself shows why (1) is implausible (see also this series of posts). I have some sympathy for (2). Yet my sense is that many impartial consequentialists want to say things like “AMF has higher expected value than MAWF”, as in (1), and they don’t follow a particularly coherent theory of (2).[10] At the very least, I think if we carefully examine why we think AMF is better, we’ll find option (2) more defensible than (1). So we’ll end up more skeptical of longtermist interventions.
In any case, if all that remains is my bare intuition in favor of AMF, that intuition tells me very little about what is “better” by impartial lights. If I endorse the pro-AMF verdict above, it should be because I endorse (2) (perhaps in some vague form) as a way of weighing up my reasons for decisions. Failing that, I should admit to some non-impartial or non-consequentialist standard that gives me reason to prefer AMF. In which case, I had better actually endorse that standard in its own right!
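To lay bare the structure of (1) versus (2), here's a toy sketch — emphatically not Kollin et al.'s actual theory of bracketing, just one way to make the “don't let cluelessness override the subset” idea concrete, with invented numbers:

```python
# A toy formalization of option (2), *not* Kollin et al.'s actual theory
# of bracketing. All numbers are invented; "value units" are arbitrary.

from typing import NamedTuple

class Charity(NamedTuple):
    name: str
    near_ev: float                    # EV of the non-clueless consequences
    far_bounds: tuple[float, float]   # bounds on the clueless long-run EV

amf  = Charity("AMF",  near_ev=10.0, far_bounds=(-1000.0, 1000.0))
mawf = Charity("MAWF", near_ev=1.0,  far_bounds=(-1000.0, 1000.0))

def total_ev_verdict(a: Charity, b: Charity) -> str:
    """Option (1): compare total EVs. With interval-valued long-run terms,
    the totals overlap, so no verdict comes out."""
    a_lo, a_hi = a.near_ev + a.far_bounds[0], a.near_ev + a.far_bounds[1]
    b_lo, b_hi = b.near_ev + b.far_bounds[0], b.near_ev + b.far_bounds[1]
    if a_lo > b_hi: return a.name
    if b_lo > a_hi: return b.name
    return "no verdict (intervals overlap)"

def bracketed_verdict(a: Charity, b: Charity) -> str:
    """Option (2): when the clueless components are symmetric between the
    options, set them aside and compare the non-clueless parts."""
    if a.far_bounds == b.far_bounds:
        return a.name if a.near_ev > b.near_ev else b.name
    return "bracketing is silent here"

print(total_ev_verdict(amf, mawf))   # no verdict (intervals overlap)
print(bracketed_verdict(amf, mawf))  # AMF
```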
So what should we do?
Now, we’ve got our pile of candidate reasons, which we’ve discovered and disentangled by thinking through various cases. How do we arrive at views on decision theory without assuming any of our choices are justified?[11] Simple. We reflect hard on these reasons themselves, asking whether they’re well-motivated independently of verdict-level intuitions, and consistent with other well-motivated reasons.
We’ve already seen some direct analysis of candidate reasons in the examples:
- Pascal’s mugging: “... I could examine the possible reason that a bounded utility function is meant to formalize, namely ‘Turning the universe into a utopia isn’t a trillion times better than saving a life’. And I could consider what makes utopias, or human lives, more or less valuable.”
- Smoking Lesion and Twin PD: “So then I ask myself, can I make sense of how this ‘logical guarantee’ can justify my choice, without appealing to news management?” And “Okay … am I an algorithm? It seems like I’m just, well, a human.”
- Cluelessness: “Roughly, though, the idea is that the long-term consequences of these donations can’t be weighed up without making arbitrary calls.”
Here are more examples of this kind of methodology, i.e., decision-theoretic arguments that don’t depend on any appeals to verdict-level intuitions (though I don’t necessarily endorse them in full):
- In “Can you control the past?” (Sec. III), Carlsmith argues that EDT’s “managing the news” seems less objectionable if we reject libertarian free will. Yes, as the CDTist insists, decision theory should be about making a difference in some sense. But if the past and future are both fully determined by physics, why is it any stranger to regard yourself as making a difference to the past than to the future? (See Clifton for an alternative to EDT that’s also grounded in this kind of reasoning.)
- Spohn (2012) argues for one-boxing in Newcomb’s problem, not because it’s highly intuitive upon reflection, but because: When you face the boxes and deliberate about what to do, instead of “deciding” at that time, you’re learning what you had decided all along — and so even CDT recommends one-boxing. And he tries to show that this interpretation of the decision situation is plausible.[12]
- Cluelessness entails incomplete preferences. Tarsney et al. (2025) argue against incompleteness by giving independent motivations for two principles that are incompatible with incompleteness. As I discuss in the conclusion, though, I think this kind of argument still goes backwards in a different way.
- To argue for bounded utility functions, Russell and Isaacs (2021) don’t merely appeal to the intuition against paying in Pascal’s mugging. Instead, they argue that applying a risk-neutral decision rule to an unbounded utility function violates a form of the sure-thing principle.
Objections and responses
Don’t our “reasons” also require further justification, by your standard? And then those reasons, and so on?
Response: Some reasons do indeed seem to need justification. For example, if an impartial consequentialist’s reason to donate to AMF is that this donation has positive EV, we should ask why they have credences under which the EV is positive.
That doesn’t mean my view leads to an infinite regress. While I can’t do justice to the whole foundationalism vs. coherentism debate that this touches on (see Huemer (2022, ch. 4-5) or this summary), two key points will suffice.
First, suppose you think all reasons need justification, contra foundationalism. This is consistent with my core claim, because you could still think that reasons justify each other via their mutual coherence. It’s just that there can’t be mutual justification between reasons and verdict-level intuitions, for the reasons I’ve already argued.
Second, if instead we accept foundationalism, it seems very intuitive that some reasons don’t need further justification. (Again, my previous arguments show why this isn’t true of verdict-level intuitions. See footnote[13] for more on how my view relates to foundationalism generally.) Imagine that the friend from the intro story had said:
The game is simple. You pay $1, and the house pays you $3 if a six-sided die comes up even. I have plenty of money, and it just seems morally obvious that I should be scope-sensitively altruistic. So I should value additional money linearly if I’m only playing a few games. By the principle of indifference, it seems arbitrary to put anything other than 1/6 probability on each possible side of the die. It also seems arbitrary for my decision rule not to give equal weight to equally likely possible outcomes. So I should play the game because its EV is positive.
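Spelling out the friend's last step:

$$\Pr(\text{even}) = \tfrac{3}{6} = \tfrac{1}{2}, \qquad \mathrm{EV} = \tfrac{1}{2} \cdot \$3 - \$1 = +\$0.50 > 0.$$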
This casino would go bankrupt in a heartbeat, but set that aside. The friend seems to have given a sufficient justification for their choice, more or less. It’s not an infallible justification, e.g., maybe infinite ethics weirdness calls that last step about the decision rule into question. But foundational justifications can be fallible (again, see Huemer (2022, ch. 4)).
What if our verdict-level intuitions are tracking good reasons that we can’t articulate?
For example, in the Twin PD case, perhaps we’re missing other reasons to cooperate, besides “The relevant expected values are evidential” and “I can ‘logically cause’ certain outcomes”. That seems pretty likely, given how subtle decision theory is. I might, then, appeal to my verdict-level intuition as a predictor: Perhaps this very strong intuition is evidence that, if I had a lot more time to think, the reasons I can’t yet articulate would on balance favor this verdict.
Response: To summarize, the problem is that I have little reason to think my verdict-level intuitions track good reasons on net — more so than bad reasons, or various spurious things orthogonal to the good reasons. At least, this seems true when we’re aiming to predict reasons that we’re missing after having already thought a lot. So if someone claims that **their verdict-level intuitions are strong evidence about the balance of reasons**, I think they have the burden of proof.
Crucially, our intuitions about reasons themselves don’t face this critique, because those are intuitions as normative expressions, not as predictors. It’s a category error to ask about the “reliability” of intuitions that aren’t meant to make predictions (see this post for more). Take the principle of indifference example from the previous section. When I say, “It seems arbitrary to put unequal probabilities on symmetric outcomes”, I’m not predicting anything. I’m saying that such arbitrariness is unacceptable on its face.
Let’s unpack this. I see two general ways I might argue that my intuition-as-predictor is strong evidence of something, as in the bolded claim above. And neither seems to work in this case.
First, I could say, “I’ve gotten rich feedback about the answers to many similar questions. So probably my intuition about this question is reliable (even if I haven’t explicitly verified its reliability).” Here, the “similar questions” are past decision problems. And the “answers” are the reasons I’ve discovered, underlying my verdicts in those problems.
But whether we have indeed gotten rich feedback depends on the kinds of reasons we’re trying to predict:
- We might have a lot of experience discovering the easy-to-articulate reasons behind our intuitions — the low-hanging fruit, like “Decision-making isn’t about managing the news”. For easy-to-articulate reasons, we have tight feedback loops between first encountering the decision problem, noticing our verdict-level intuition, and discovering the reasons. This is true even if the reasons take much longer to formalize.
- By contrast, we have much less experience discovering the hard-to-articulate good reasons. These are the reasons we only discover, even in vague form, after years of being in the limbo of “This feels right, but I can’t put my finger on why”. So the feedback loops are much weaker. And we’re in that limbo right now, when we appeal to our verdict-level intuitions as predictors. It’s telling that I can’t think of many examples of hard-to-articulate reasons! I’m curious if the reader can.
Second, I might say, “I’ve verified that my intuitive guesses of answers to many similar questions were usually correct (perhaps after some ‘calibration training’). So probably my intuitive guess about this question is correct.”
I would update on well-designed empirical studies of this. Paying attention to the track record of one’s intuitions about missing reasons, especially the hard-to-articulate ones, seems like a promising way to become a “philosophy superforecaster”. But I haven’t seen anyone point to such a track record to justify their verdicts.
Overall, once I’ve probed my verdict-level intuition for the more obvious candidate reasons, I don’t think I should expect this intuition to contain further dormant wisdom.
Concluding thoughts on further implications
I’ve argued that our bottom-line verdicts need to be justified by deeper reasons. We’ve seen that this matters in cases like Pascal’s mugging, acausal decision theory puzzles, and cluelessness. But I’ve defended a more general methodology here: “Look at how the structure of justification depends on the content of what we’re justifying.” So I expect this point to matter to a wide range of questions in decision theory and other fields of philosophy. Here’s a quick preview of its potential implications.
First, just as there’s a direction of justification from reasons to verdicts, there’s also a direction from different kinds of reasons to others. The reasons for our choices are our preferences under uncertainty (among other things). In turn, the reasons for our preferences include our values and beliefs.
Take the example of Tarsney et al. (2025) from “So what should we do?”. Their argument comes down to: “Incomplete preferences violate a very intuitive principle about how to make choices across multiple (hypothetical) situations. So we should reject these preferences.” I definitely feel the force of the intuitions they’re appealing to. But this argument doesn’t seem to tell us what’s wrong with the preferences themselves, or the values and beliefs that motivate them, in a given choice situation. It doesn’t tell us what about option A makes it preferable to option B. It just tells us that if we have incomplete preferences, the pattern of choices we’ll make across multiple situations is supposedly problematic. I don’t think we should retrofit our preferences to intuitively “principled” patterns of choices like this. (Wouldn’t my reasoning here undermine most money pump and Dutch book arguments? There’s a lot more to say on this, but arguably, it would!) See this appendix for more.
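For readers who haven't seen the genre, here's the shape of the classic “single-souring” money pump against incompleteness — my own stylized reconstruction of the general style of choice-pattern argument, not necessarily Tarsney et al.'s exact principle, with invented labels:

```python
# The classic "single-souring" money pump against incomplete preferences,
# sketched with made-up labels (A_minus is A with a $1 fee). This is the
# general shape of choice-pattern arguments, not Tarsney et al.'s own.

STRICTLY_WORSE = {("A_minus", "A")}   # A_minus is strictly worse than A;
                                      # B is incomparable with both.

def trade_permissible(have: str, get: str) -> bool:
    """With incomplete preferences, a trade is permissible whenever the
    thing received is not strictly worse than the thing given up."""
    return (get, have) not in STRICTLY_WORSE

holdings = "A"
for offer in ["B", "A_minus"]:        # each trade looks fine on its own...
    assert trade_permissible(holdings, offer)
    holdings = offer

print(holdings)  # "A_minus": strictly worse than the "A" we started with.
# The argument faults the *pattern* of choices. The complaint above: it
# never says what makes any single option worth choosing or not.
```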
Second, just like in decision theory, arguably we need deeper reasons for our verdicts in ethics, epistemology, and more. Why should we count as a datum the brute intuition that, say, the repugnant conclusion is unacceptable? Or that we know we’re not Boltzmann brains? I’m not concluding that we should reject these intuitive verdicts, here. All I ask is that we go beyond “one person’s modus ponens is another’s modus tollens”, or “Moorean shift”. Instead, let’s look much more closely at the structure of our intuitions, and what that says about the justificatory work different intuitions can do.
Acknowledgments
Many thanks to Jesse Clifton for various past discussions that have helped me clarify my views presented here (this doesn’t imply his endorsement of all my claims). Thanks also to Lukas Finnveden, Michael St. Jules, Sylvester Kollin, Niels Warncke, Joseph Ancion, Brandon Sayler, Francis Rhys Ward, Eric Olav Chen, Clare Harris, and Claude for helpful comments.
- ^
Compare these claims to the views expressed in McMahan (2013, p. 114), Nye (2015, pp. 628-630), and Singer (1974). However, these authors can be read as arguing that general principles take priority over intuitions about particular cases. As I note in “Against verdict-level curve-fitting (and reflective equilibrium)”, my argument is not about “particular vs. general” but about “intuitions about the bottom line vs. reasons for that bottom line”. Though, the authors do gesture at the latter to some extent.
- ^
I’ve written about some such questions in previous work, most notably here, here, and here.
I could just as well have called this post “How to not do normativity backwards”, encompassing things like ethics and epistemology as well. I decided to focus on decision theory here both to keep the scope manageable, and because it sounds less highfalutin than “normativity”. But see the conclusion for brief thoughts on other domains.
- ^
Quote: “As a final note, I think it is important to not delude ourselves with terms like ‘success’, ‘failure’, ‘wins’ and ‘preferable’ (which I have used in this post) in relation to decision theories; UEDT and UCDT are both the ‘correct’ decision theory by their own lights (just as all decision theories): the former maximises expected utility with conditionals from the earlier logical vantage point, and the latter does the same but with logical counterfactuals, and that is that—there is no objective performance metric. See The lack of performance metrics for CDT versus EDT, etc. by Caspar Oesterheld for more on this.”
- ^
I’ve encountered this view in a few personal communications. Some public examples, besides those in footnote 9 (emphasis mine):
- Briggs (2010): “Unlike EDT, CDT yields the right answer in The Smoking Lesion. … But several authors, Eells (1985), Bostrom (2001), Egan (2007), have formulated examples where EDT gets things right, while CDT gets things wrong.”
- Demski: “I give a variant of the smoking lesion problem which overcomes an objection to the classic smoking lesion, and which is solved correctly by CDT, but which is not solved by updateless EDT. … If CDT fails Newcomb's problem and EDT doesn't, it seems CDT is at best a hack which repairs some cases fitting the above description at the expense of damaging performance in other cases.”
- Treutlein: “The Coin Flip Creation problem is intended as an example of a problem in which EDT would give the right answer, but all causal and logical decision theories would fail.”
- ^
First, “principles” connotes something formal or precise. Yet a reason can be as vague as “Rational decision-making is about causing good outcomes”, or “It seems irrational to be dynamically inconsistent”. Conversely, FDT’s recommendations match many of my verdict-level intuitions (let’s say), and it’s a formal decision theory. But I don’t think this makes FDT a good decision theory. Also, see footnote 11.
Second, “principles” connotes generality or simplicity. Yet we can have reasons with respect to particular cases, and these reasons might be complex. Indeed, compared to many rationalist curve-fitters, I’m much more comfortable with certain supposedly “complex” moves. E.g., to avoid money pumps or Dutch books, I can augment my beliefs, values, and decision theory with certain commitments, instead of endorsing some new beliefs, values, or decision theory. And this isn’t ad hoc, because such commitments are desirable according to my current beliefs, values, and decision theory! (See here and here.)
- ^
Examples of reflective equilibrium-sympathetic writings that support this interpretation, i.e., “verdict-level intuitions are treated as giving direct justification”:
- From the SEP article (emphasis mine): “The method of reflective equilibrium has been advocated as a coherence account of justification […]. For example, a moral principle or moral judgment about a particular case […] would be justified if it cohered with the rest of our beliefs about right action (or correct inferences) on due reflection and after appropriate revisions throughout our system of beliefs.”
- In Rechnitzer’s (2022) reflective equilibrium analysis of the trolley problem(s), she considers several bottom-line verdicts (“input commitments”), e.g., “IC 3 In Case 3, the bystander at the switch may divert the trolley so that one workman dies instead of five.” She then says (emphasis mine): “The second strategy—reversing the commitment, accepting that the bystander must not turn the trolley, and instantly establishing consistency—looks like the least laborious solution. However, it cannot be vindicated by the RE criteria at this stage. The problem is that this conflicts with the criterion to respect input commitments. For Thomson, the bystander commitment, IC 3, has a high independent credibility: she has a strong intuition that the bystander may divert the trolley …”
- ^
Under orthodox expected utility theory, this isn’t how we construct utility functions. The orthodox approach takes our preferences under uncertainty as given, and derives the utility function from those preferences via a representation theorem. So much the worse for the orthodox approach, then! See Peterson (2008), Meacham and Weisberg (2011), and Easwaran (2014) for more thorough arguments.
- ^
I can hear the distant keyboards clacking as readers type, “Okay, here’s my bank account, $5000 please.” This is a separate conversation, one I have complicated thoughts on. Basically, though, I think in more realistic cases than this one, there plausibly are good reasons not to pay: a combination of my response here, the idea of “bracketing”, and accounting for perverse incentives set by paying actual muggers. If I didn’t buy those reasons, perhaps I still wouldn’t pay, but if I were honest I’d admit this was simply irrational.
- ^
Examples:
- Levinstein and Soares (2017), who say, “In brief, FDT suggests that you should think of yourself as instantiating some decision algorithm”, and then primarily argue for FDT by showing that it gets various cases “right”;
- comment by Baker;
- the last paragraph of the section “Objection: comparing ontologies by comparing decision theories” in SMK’s “FDT is not directly comparable to CDT and EDT”. To be clear, SMK themselves reject this approach: “[I]t is arguably reasonable to think of ontology as something that is not fundamentally tied to the decision theory in and of itself. Instead, we might want to think of decision theories as objects in a given ontology, meaning we should decide on that ontology independently—or so I will argue.”
- ^
See Lewis for an example of the “AMF has higher expected value than MAWF” view.
As far as I’m aware, Kollin et al.’s (2025) “bracketing” is the only theory that attempts to make (2) precise. Unfortunately, bracketing has a pretty significant open problem. Also, in unpublished work, I show that bracketing violates the sure-thing principle. Though I’m undecided as to whether this is a huge problem, compared to the independent motivations for bracketing.
- ^
Apparently, in the context of epistemology, this question is a classic challenge to “methodism” (as opposed to “particularism”): “How do we arrive at reasonable criteria for knowledge without assuming we know any particular facts?”. My perhaps-naïve feeling is that this isn’t so challenging at all, since my response in the next sentence seems kind of obvious to me. So I worry I could be missing something.
(I’m not sure whether my view in this post is inconsistent with particularism, though. Reasons can be particular in a sense, e.g., “It’s logically guaranteed that the other guy cooperates if and only if I do”. But the paradigm examples of “particulars” I usually see in introductions to particularism vs. methodism seem to be verdict-level intuitions.)
- ^
From p. 103 (emphasis mine):
Perhaps, you start deliberating on the matter only when standing before the boxes, because you are informed about the plot only then. This does not necessarily mean, though, that you are deciding only then. Perhaps—indeed, this is what I am suggesting—you were committed to one-boxing all along, and by deliberating you discover to be so committed all along. In any case, this is the only way how [the model of Newcomb’s problem in which your decision is causally upstream of the prediction] makes sense: You are decided early enough to one-box, simply by being rational, and this influences the predictor’s prediction, presumably simply by his observation of your consistent and continuous rationality. [...] Being committed or decided all along without ever having reflected on the matter? This sounds strange, and this may be the weak part of my account of NP, but, as I would insist, the only weak part [...] On the other hand, it is not so strange perhaps. You will grant that you have many beliefs without ever having reflected on them, for instance, about the absence of ice bears in Africa. You have never posed the question to yourself; but for a long time your mind is fixed how to respond. Similarly, I trust, your introspection will reveal that often your reflection does not issue in a decision, but rather finds that you were already decided or committed.
- ^
Some philosophers evidently do take certain verdict-level intuitions as foundational. E.g., Huemer seems to endorse putting a lot of weight on verdict-level intuitions-as-normative-expressions, based on (e.g.) his objections to utilitarianism.
One reason we might be tempted to consider some verdict-level intuitions foundational is by analogy to sense perception. Arguably, if you see a table in front of you, your visual perception is foundational justification for believing there’s actually a table, absent defeaters. (So says phenomenal conservatism.) I’m happy to grant that here. But the disanalogy is that, as I’ve said, a verdict just is a claim that you have reason to choose some act. This isn’t true of a perception of a table. In particular, my arguments in “Against verdict-level curve-fitting (and reflective equilibrium)” are defeaters of the verdict-level intuition.
