Last updated: 21/1/2022

This is the fourth post in my sequence on moral anti-realism; it works well as a standalone piece.

Outline

In my previous post, I argued that irreducible normativity probably isn't a meaningful concept. That said, some people have the intuition that, if our actions don’t matter in the sense implied by irreducible normativity, they don’t matter at all. This post addresses the argument that as long as we believe there is a slight chance that irreducible normativity is true, we should act as though it’s true.

This wager for non-naturalist moral realism has the same structure as a related argument described by Michael Huemer (2013). I will discuss Huemer’s argument and show how we can expand it into a wager for non-naturalist moral realism. Then, I’ll explain why I consider the resulting wager unconvincing.

(Note that I'm only discussing moral realism based on irreducible normativity. In this sequence's last post, I'll discuss a different wager for naturalist moral realism.)

Huemer’s “proof of moral realism”

In the paper “An Ontological Proof of Moral Realism,” Michael Huemer presents the following argument in support of moral realism:

Given that moral realism might be true, and given that we know some of the things we ought to do if it is true, we have a reason to do those things. Furthermore, this reason is itself an objective moral reason. Thus, we have at least one objective moral reason.

The conclusion in the first sentence (“we have a reason to do those things”) derives from what Huemer calls the Probabilistic Reasons Principle:

The rough idea is that if some fact would (if you knew it) provide a reason for you to behave in a certain way, then your having some reason to believe that fact obtains also provides you with a reason to behave in the same way. Even a small epistemic probability of the fact’s obtaining provides you with a (perhaps very small) first person reason for action.

So, the argument is that if we start with non-zero credence in the existence of moral reasons, and if we have at least a vague sense about what those reasons would imply, then, via the Probabilistic Reasons Principle, we can conclude that we have one type of irreducible reason—and Huemer argues that this would be a moral reason—with certainty. Namely, we would then, with certainty, have a moral reason (possibly a very weak one) to act as though those other moral reasons apply.

A quick note on terminology

Huemer distinguishes between moral reasons (which, according to his usage, are necessarily other-regarding reasons) and prudential reasons (which are reasons related to one’s self-interest). For this article, I will adopt Huemer’s distinction. That said, I reject prudential reasons and moral reasons based on the same arguments: I cannot make sense of “reasons” (in the irreducibly normative, reasons externalist sense) as a concept.

Transforming Huemer’s argument into the moral realism wager

By framing his argument in the language of objective reasons for action, Huemer already takes some version of irreducible normativity for granted. His claimed contribution is merely about deriving a particular kind of irreducible reason (a moral one) from other irreducible reasons. If our initial credence in the existence of moral reasons was low, the newfound moral reason we obtain through Huemer’s argument would remain comparatively weak.

For example, suppose I believe in the existence of irreducible prudential reasons, and I think that those favor my buying a watch. And suppose I assign a 1% credence to the existence of moral reasons, which I believe would support giving the watch money to charity instead. Huemer’s argument now accomplishes the following:

Via the Probabilistic Reasons Principle, I can conclude that I hold at least one moral reason with certainty. Namely, I have reason to feel at least somewhat (reflected by my initial 1% credence) moved by the moral reason for donating to charity. Correspondingly, I should refine my initial (1%) credence in the existence of any moral reasons to the following, more nuanced set of credences:

  1. With certainty, I have at least one moral (meta-)reason: I should feel at least somewhat moved by my best guess about the content of the other, object-level moral reasons.
  2. I still maintain my original 1% credence in those other, object-level moral reasons (e.g., reasons in favor of making a charitable donation).

Note that I wrote “moral (meta-)reason” to highlight that we are dealing with an unusual type of moral reason. Huemer argues that deriving these “moral (meta-)reasons” from prudential reasons qualifies as proving moral realism. (I’m somewhat skeptical about this claim, but it has no bearing on why I ultimately reject the moral realism wager.)

Secondly, note that in the above example, the newfound moral reason for donating—despite being held with certainty—may not be strong enough to outweigh my prudential reasons for buying the watch. More generally, as long as our credence in object-level moral reasons was initially low, newly derived moral reasons won’t outweigh our strongest prudential reasons. This conclusion may seem underwhelming.
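To make this concrete, here’s a toy calculation (a sketch only: the numerical reason “strengths,” and the assumption that a reason’s force can be multiplied by one’s credence in it, are illustrative choices of mine, not part of Huemer’s argument):

```python
# Toy model of the watch example. All numbers are made up for illustration.

prudential_reason_to_buy_watch = 1.0  # strength of a reason held with certainty
object_level_moral_reason = 5.0       # strength *if* moral reasons exist
credence_in_moral_reasons = 0.01      # my initial 1% credence

# The derived moral (meta-)reason is held with certainty, but its strength
# scales with the low credence in the object-level moral reasons.
derived_meta_reason = credence_in_moral_reasons * object_level_moral_reason

print(derived_meta_reason)                                   # 0.05
print(derived_meta_reason > prudential_reason_to_buy_watch)  # False
```

On these made-up numbers, the certain-but-weak moral (meta-)reason loses to the prudential reason by a factor of twenty.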

Huemer’s argument becomes much more interesting, however, if we start out disbelieving all irreducibly normative reasons, whether prudential or moral. Let’s say we start out 99% convinced that irreducible normativity is altogether nonsensical. And let’s say we have a vague idea of what irreducible normativity would imply for us, should it exist. (This assumption is questionable—I will come back to it in the next section.) Then, again via the Probabilistic Reasons Principle, we can conclude that we have at least one irreducibly normative (meta-)reason: we should act as though object-level irreducibly normative reasons apply.

Compared to the previous application of Huemer’s argument, this application here is importantly different. Since we are contrasting irreducibly normative reasons with reasons anti-realism, we cannot contrast our newfound (meta-)reason with potentially more strongly held reasons of the same, irreducibly normative kind. Instead, we have to compare the newfound (meta-)reason to the view that there are no irreducibly normative reasons at all.

In personal communication, several effective altruists have expressed the intuition that irreducibly normative reasons are always decisive. According to this intuition, what we do matters infinitely less, in some robust and all-things-considered sense, if there are no irreducibly normative reasons. If we accept this picture, then we have to either reject irreducible normativity with complete confidence or act as though it is true, even if our actual credence is low. This is the wager for moral realism based on irreducible normativity.

Side note: different interpretations of irreducible normativity

In my previous post “#3 Against Irreducible Normativity,” I outlined three ways to interpret the notion of irreducible normativity. They correspond to the following section headings in my previous post:

1. “Super-reasons”

2. “Is (our knowledge of) irreducible normativity limited to self-evident principles?”

3. “Is there a speaker-independent normative reality?”

To follow the argument in this post here, there’s no need to read up on the above distinctions. I just want to briefly note that the moral realism wager is inconsistent with interpretations 2.[1] and 3.[2]

The intuition “our actions matter infinitely more if irreducible normativity applies” only attaches to interpretation 1.—the interpretation of irreducible normativity that I find strangest. I couldn’t make sense of this notion in my attempt to explain it. (Of course, my inability to make sense of something doesn’t mean that the probability of it making sense is exactly zero; this is where the moral realism wager could come in.)[3]

In the following section, I will provide counter-arguments against the moral realism wager. I wrote this side note to point out that those counter-arguments are only needed if we subscribe to a particular, already contested interpretation of irreducible normativity.

Counter-arguments

The moral realism wager seems questionable to me in several ways. I will start by describing two objections that I consider forceful but not, in themselves, decisive. Then, in subsection 3, I’ll present what I take to be a decisive counter-argument.

1. Incoherence

To derive practical implications from the moral realism wager, we need an informed guess about the content irreducibly normative reasons would have, if they existed. As Brian Tomasik (2014) explains in this section of his essay “Why the Modesty Argument for Moral Realism Fails,” it’s controversial whether we can obtain this type of probabilistic knowledge. We may think that morality is more likely to be about donating to charity than buying luxury goods. However, if moral anti-realism is correct, moral language cannot successfully refer to irreducibly normative facts. Moral anti-realists typically don’t think that moral realism is coherent but wrong. Instead, they find the concept inherently confused—like a square circle. Therefore, one might argue that all bets about the content of irreducibly normative reasons are off.

That said, as Tomasik also concedes in his essay, proponents of the moral realism wager can now resort to an argument from modesty. Many philosophers consider irreducible normativity to be meaningful. Those philosophers may advance informed guesses about the content of irreducible reasons. For instance, in the PhilPapers survey (Bourget & Chalmers, 2014), 56.4% of surveyed philosophers “accept[ed]” or “lean[ed] toward” moral realism (note, however, that this figure also includes versions of moral realism not based on irreducible normativity). While we may consider irreducible normativity meaningless ourselves, perhaps this reflects a shortcoming on our part.

Via the argument from modesty, it seems plausible that even people who lean heavily toward moral anti-realism should—on peer-disagreement grounds—assign at least some probability to the hypothesis that irreducible normativity is a coherent concept. Still, I think there are a few reasons why it could be defensible to keep this probability very low. While moral realism is a common belief, the wager for it only applies to versions grounded in irreducible normativity. Most people whose reasoning I hold in high regard are skeptical of irreducible normativity. Moreover, my impression is that, among this set, most of those who place significant credence in irreducible normativity do so out of epistemic modesty. To avoid double-counting, we should only update toward other people’s credences if they endorse irreducible normativity for direct reasons, i.e., if they confidently claim to understand it. This seems rare.[4]

2. Infectiousness

The moral realism wager requires us to compare irreducibly normative reasons to the view that there are no objective reasons at all. MacAskill (2013) points out that this is a situation where intertheoretic comparisons are problematic. His argument has two parts. First, he argues that we should generally use an expected value framework for reasoning under intertheoretic uncertainty. Then, he highlights that moral anti-realism neither assigns value zero to all options nor assigns any well-specified but uniform value. Instead, according to moral anti-realism, actions simply don’t have any objective value. We can represent this as “according to anti-realism, the values of all possible actions are undefined.” Consequently, any non-zero credence in anti-realism “infects” the entire expected value framework for deciding under intertheoretic uncertainty, rendering the value of every conceivable action undefined.
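Here’s a minimal sketch of the infection, assuming (my modeling choice, not MacAskill’s notation) that we represent “the value is undefined” as a floating-point NaN:

```python
import math

# Hypothetical credences across metaethical positions.
credences = {"realism": 0.01, "anti-realism": 0.99}

# Value each position assigns to some action. Anti-realism assigns neither
# zero nor any uniform value; it assigns no value at all (modeled as NaN).
values = {"realism": 100.0, "anti-realism": float("nan")}

expected_value = sum(credences[p] * values[p] for p in credences)

print(expected_value)               # nan
print(math.isnan(expected_value))  # True: one NaN term infects the whole sum
```

Note that if anti-realism assigned the value zero instead, the realist term would dominate the calculation and the wager would go through; the infection arises precisely because “no value” is not the same as “zero value.”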

The infectiousness problem illustrates just how incompatible irreducible normativity is with moral anti-realism. In the next section, I will expand on this point and show that there isn’t a way to find a suitable method of comparison.

Another quick note on terminology

MacAskill’s paper is titled “The Infectiousness of Nihilism.” By “nihilism,” he and other philosophers mean moral anti-realism or, more generally, anti-realism about reasons. I don’t like this choice of terminology because it smuggles in question-begging connotations. Those are exactly the connotations I want to argue against in the next section.

3. Begging the question

I believe that the moral realism wager fails because it begs the question. The wager only works if we stipulate that our actions matter infinitely more if irreducible normativity is true. There is no theory-neutral way to compare reasons anti-realism with reasons realism. Moreover, from the perspective of what we care about, I would be surprised if many people confidently endorsed the view “irreducible normativity always dominates.”

The situation is that we are uncertain between two frameworks: realism and anti-realism about reasons. Both of those are frameworks about which goals to pursue. Our situation is vexed because without a theory about which goals to pursue, we obviously cannot figure out which goals to pursue. Once we pick a theory, we are no longer neutral.

Huemer claims that the reasons imported via the Probabilistic Reasons Principle are irreducibly normative reasons. This point seems question-begging,[5] but I am willing to grant it. Even then, it remains question-begging whether the newfound reasons—which would only count weakly compared to more typical irreducibly normative reasons—should matter infinitely more than the merely subjective reasons behind anti-realism.

As I have argued in previous posts (especially in this section), anti-realism in no way means forfeiting all aspirations. On the anti-realist account, failing to act based on what one cares about can be as catastrophic as it gets. Subjective reasons for action aren’t just random whims. They include everything we care about—except, admittedly, for sentiments we can only express with normative terminology.

Even if someone’s life goals are entirely focused on “doing what’s right” or “doing the most good,” this doesn’t mean that they have to buy into the moral realism wager. For the wager to go through,[6] a person needs to explicitly think of these expressions in the irreducibly normative sense and stake all their caring capacity on that particular interpretation.

In practice, most people who want to “do the most good” presumably have in mind specific connotations that they’d consider a necessary part of that phrase’s meaning. If they somehow became convinced that the phrase “do the most good” referred to collecting pebbles, they would no longer identify that way. (Joe Carlsmith discusses similar considerations in his post The despair of normative realism bot. My sense is that his takeaways are the same as mine.)

Metaethical fanaticism: adhering to irreducible normativity as one’s only goal

I have argued that the moral realism wager fails for everyone who isn’t already committed to the view that irreducible normativity trumps everything. I would guess that this captures the vast majority of people.

Admittedly, the intuition that one’s actions are meaningless without irreducible normativity is widespread. Most people may hold this intuition tentatively, allowing for the possibility that it's misguided. However, if someone is deeply attached to this intuition, it could function as a terminal value. On this view, "acting according to irreducible normativity" would constitute a personal life goal, one that would remain in place even from an anti-realist perspective. If this is the case, the moral realism wager goes through.

I have talked to two people in the effective altruism movement who claimed to endorse this position. I’m talking about a type of (stated) endorsement that goes well beyond “this position is worth exploring” or “this position is maybe true.” At least from the way those two people described it to me, they were willing to bet their life’s impact on this position. I will use the term metaethical fanaticism to refer to this stance.

As the label suggests, I’d caution against it. Before staking all of one’s caring capacity on a shaky philosophical assumption that may not even be meaningful, we should think carefully about the potential implications. In my next post, I will illustrate how metaethical fanaticism commits those who endorse it to potentially absurd consequences.

Acknowledgments

I’m grateful for helpful comments by David Althaus, Max Daniel, Sofia Davis-Fogel, Stefan Torges, and Johannes Treutlein.

My work on this post was funded by the Center on Long-Term Risk.

References

Bourget, D. and D. Chalmers. (2014). What do philosophers believe? Philosophical Studies, 170(3):465–500.

Huemer, M. (2013). An Ontological Proof of Moral Realism. Social Philosophy and Policy, 30(1–2):259–79.

MacAskill, W. (2013). The Infectiousness of Nihilism. Ethics, 123(3):508–520.

Tomasik, B. (2014). Why the Modesty Argument for Moral Realism Fails. reducing-suffering.org. <https://reducing-suffering.org/why-the-modesty-argument-for-moral-realism-fails/>.


  1. On the notion where our knowledge of irreducible normativity is forever limited to self-evident principles, the moral realism wager remains irrelevant in practice. By definition, people will recognize self-evident principles, whether they try to act in accordance with irreducible normativity or whether they do what they most feel like doing. ↩︎

  2. The third interpretation of irreducible normativity very much resembles moral naturalism. Therefore, I will discuss it separately and at length in this sequence's final post. ↩︎

  3. That said, others often have similar concerns/complaints. For instance, see Joe Carlsmith's post The despair of normative realism bot. ↩︎

  4. Derek Parfit, whose arguments I have addressed in previous posts, is the main exception that comes to mind. However, my best interpretation of Parfit's view is that his position isn't the typical sort of moral non-naturalism. As such, it becomes subject to a different type of wager, which I'll discuss in this sequence's last post. Also, taking an outside view, we may note that Parfit built his reputation primarily in the fields of normative ethics, rationality, and personal identity. His later writings on metaethics have not had a similarly discipline-defining impact so far. Admittedly, these things may take time. ↩︎

  5. In Huemer’s defense, his point makes sense in a context where we already postulate some externalist reasons. ↩︎

  6. When I speak of “the wager going through,” I mean that even the smallest non-zero probability placed on irreducible normativity would imply that one should discount all the other metaethical possibilities. Of course, one could also make a more gradualist argument based on the view that our actions may matter somewhat more if irreducible normativity applies. I don’t intend to argue against this possibility. (Whether or not this rings true to someone would depend on their degree of attachment to the view that normativity is irreducible.) ↩︎

Comments

I found this post, and the rest of the series thus far, quite interesting. I still feel very confused about this whole topic, but I think that's more to do with the topic and/or my intuitions and lack of background, and less to do with your arguments or writing.

At the moment, it looks like the post has 12 votes but only 8 karma, suggesting there've been some downvotes. But there aren't any comments highlighting key flaws or counterarguments (at least, I don't think MichaelStJules' comments do that). I'd personally be quite interested to hear about whether people are seeing important flaws in the arguments made - as opposed to just disliking discussion of anti-realism or something like that - and, if so, what those flaws might be.

I don't find the arguments fully convincing myself, but I don't know if I can articulate why (or if there's a good reason at all), and I don't know if I put much weight on my failure to feel convinced.

Thanks! Yeah, I'm curious about the same questions regarding the strong downvotes. Since I wrote "it works well as a standalone piece," I guess I couldn't really complain if people felt that the post was unconvincing on its own. I think the point I'm making in the "Begging the question" subsection only works if one doesn't think of anti-realism as nihilism/anything goes. I only argued for that in previous posts.

(If the downvotes were because readers are tired of the topic or thought that the discussion of Huemer's argument was really dry, the good news is that I have only 1 post left for the time being, and it's going to be a dialogue, so perhaps more engaging than this one.)

One response to Infectiousness is that expected value reasoning derives from more fundamental rationality axioms (together with certain non-rational assumptions), and those rationality axioms on their own can still work fine to lead to the wager if used directly (similar to Huemer's argument). From Rejecting Supererogationism by Christian Tarsney:

Strengthened Genuine Dominance over Theories (GDoT*) – If some theories in which you have credence give you subjective reason to choose x over y, and all other theories in which you have credence give you equal* subjective reason to choose x as to choose y, then, rationally, you should choose x over y.

and

Final Dominance over Theories (FDoT) – If (i) every theory in which an agent A has positive credence implies that, conditional on her choosing option O, she has equal* or greater subjective reason to choose O as to choose P, (ii) one or more theories in which she has positive credence imply that, conditional on her choosing O, she has greater subjective reason to choose O than to choose P, and (iii) one or more theories in which she has positive credence imply that, conditional on her choosing P, she has greater subjective reason to choose O than to choose P, then A is rationally prohibited from choosing P.

Here, "equal*" is defined this way:

‘“x is equally as F as y” means that [i] x is not Fer than y, and [ii] y is not Fer than x, and [iii] anything that is Fer than y is also Fer than x, and [iv] y is Fer than anything x is Fer than’ (Broome, 1997, p. 72). If nihilism is true, then all four clauses in Broome's definition are trivially satisfied for any x and y and any evaluative property F (e.g. ‘good,’ ‘right,’ ‘choiceworthy,’ ‘supported by objective/subjective reasons’): if nothing is better than anything else, then x is not better than y, y is not better than x, and since neither x nor y is better than anything, it is vacuously true that for anything either x or y is better than, the other is better as well. Furthermore, by virtue of these last two clauses, Broome's definition distinguishes (as Broome intends it to) between equality and other relations like parity and incomparability in the context of non‐nihilistic theories.

Of course, why should we accept GDoT* or FDoT or any kind of rationality/dominance axioms in the first place?

Furthermore, besides equality*, GDoT*, and FDoT being pretty contrived, the dominance principles discussed in Tarsney's paper are all pretty weak: to imply that we should choose x over y, we must have exactly 0 credence in all theories that imply we should choose y over x. How can we justify assigning exactly 0 credence to any specific moral claim and positive credence to others? If we can't, shouldn't we assign them all positive credence? How do we rule out ethical egoism? How do we rule out the possibility that involuntary suffering is actually good (or a specific theory which says to maximize aggregate involuntary suffering)? If we can't rule out anything, these principles can never actually be applied, and the wager fails. (This sets aside the problem that more than countably many mutually exclusive claims can't all be assigned positive credence, since the sum of credences would then exceed 1.)

We also have reason to believe that a moral parliament approach is wrong, since it ignores the relative strengths of claims across different theories, and as far as I can tell, there's no good way to incorporate the relative strengths of claims between theories, either, so it doesn't seem like there's any good way to deal with this problem. And again, there's no convincing positive reason to choose any such approach at all anyway, rather than reject them all.

Maybe you ought to assign them all positive credence (and push the problem up a level), but this says nothing about how much, or that I shouldn't assign equal or more credence to the "exact opposite" principles, e.g. if I have more credence in x > y than y > x, then I should choose y over x.

Furthermore, Tarsney points out that GDoT* undermines itself for at least one particular form of nihilism in section 2.2.

Thanks for sharing these points. For people interested in this topic, I'd also recommend Tarsney's full thesis, or dipping into relevant chapters of it. (I only read about a quarter of it myself and am not an expert in the area, but it seemed quite interesting and like it was probably making quite substantive contributions.)

Also on infectiousness, I thought I'd note that MacAskill himself provides what he calls "a solution to the infectious incomparability problem". He does this in his thesis, rather than in the paper he published a year earlier, which Lukas referenced in this post. (I can't actually remember the details of this proposed solution, but it was at the end of Chapter 5, for anyone interested.)

Here's a half-formed idea of a somewhat different wager for moral realism, which may not make sense:

I feel a strong intuition similar to the idea that "one’s actions are meaningless without irreducible normativity". I might phrase it as something like "'Reasons' based on moral beliefs that are 'just arbitrary' would matter far less, or perhaps infinitely less, than 'reasons' that are 'objectively' true."

It seems quite plausible to me that this intuition might be quite hard to really "justify", and that the intuition might involve question-begging. (I also think moral anti-realism is far more likely than moral realism.) But I do have this intuition. And given that I have this intuition, if (a certain version of) moral anti-realism is correct, perhaps I should still act as though moral realism is correct anyway, because that's what "feels morally right" to me? And if moral realism is correct, that would also suggest I should act as though moral realism is correct.

So perhaps for some of the people you're responding to, there's a wager that does work to favour acting as though moral realism is correct, simply because their intuitions favour doing that? And perhaps, in that case, there's no real reason why such people should want to listen to your arguments or have their intuitions shifted towards acting as though anti-realism was true, even if they begin getting an inkling that that would be more "rational" in some sense?

Are there reasons why that wager doesn't work (for some people)? I wouldn't be surprised if the answer was a strong "yes", as this is just something I'm spit-balling now, and I'm confused about the whole topic.

One type of counterargument I can imagine is that this wager could be overridden by intuitions/preferences favouring thinking things through fully, avoiding intuitions that look like question-begging, etc. But I think that, for me, those intuitions/preferences might be weaker than my intuitions favouring acting as though moral realism is true (even if I think it probably isn't).

You're describing what I tried to address in my last paragraph, the stance I called "metaethical fanaticism." I think you're right that this type of wager works. Importantly, it depends on having the strongly felt intuition you describe and giving it (near-)total weight in what you care about.

Hmm, I think I'm talking about something different. (Though it may lead to the same problems, or be subject to the same counterarguments, or whatever, and maybe further discussion on this should be saved until after your next post.)

That section sounded like it was talking about people who are already committed to believing "irreducible normativity trumps everything". Personally, I think it's like I don't believe that - I assign it very low credence - but I feel that intuition. So in a sense, I think I want to mostly think and act as though that were true.

So if you say "Hey, that seems to be based on a shaky philosophical assumption which may not even be meaningful", perhaps I could be inclined to say "If that were so, I don't think I'd feel a strong pull towards caring that that were so. So I'm just going to proceed as I am."

Obviously, I'm not yet convinced enough about this wager to actually do that, given that I'm writing these comments. But it feels to me like maybe that's the position I'd end up in, if I became more convinced of your arguments against my current moral realism wager (which is roughly the one you argue against in most of this post).

I meant it the way you describe, but I didn't convey it well. Maybe a good way to explain it is as follows:

My initial objection to the wager is that the anti-realist way of assigning what matters is altogether very different from the realist way, and this makes the moral realism wager question-begging. This is evidenced by issues like "infectiousness." I maybe shouldn't even have called that a counter-argument—I'd just think of it as supporting evidence for the view that the two perspectives are altogether too different for there to be a straightforward wager.

However, one way to still get something that behaves like a wager is if one perspective "voluntarily" favors acting as though the other perspective is true. Anti-realism is about acting on the moral intuitions that most deeply resonate with you. If your caring capacity under anti-realism says "I want to act as though irreducible normativity applies," and the perspective from irreducible normativity says "you ought to act as though irreducible normativity applies," then the wager goes through in practice.

(In my text, I wrote "Admittedly, it seems possible to believe that one’s actions are meaningless without irreducible normativity." This is confusing because it sounds like it's a philosophical belief rather than a statement of value. Edit: I now edited the text to reflect that I was thinking of "believing that one's actions are meaningless without irreducible normativity" as a value statement.)

Ok, that makes sense, then. In that case, I'll continue clinging to my strange wager as I await your next post :)

Do you think it's fair to say that this is somewhat reminiscent of the argument you countered elsewhere in the series, that (belief in) normative anti-realism would be self-defeating? Perhaps there as well, your counterargument was valid in that there's some question-begging going on when comparing between frameworks like that, but anti-realism could still be self-defeating in practice, for people with particular intuitions?

Yes, that's the same intuition. :)

In that case, I'll continue clinging to my strange wager as I await your next post :)

Haha. The intuition probably won't get any weaker, but my next post will spell out the costs of endorsing this intuition as one's value, as opposed to treating it as a misguided intuition. Perhaps by reflecting on the costs and the practical inconveniences of treating this intuition as one's terminal value, we might come to rethink it.

You write:

On the notion where our knowledge of irreducible normativity is forever limited to self-evident principles, the moral realism wager would remain irrelevant in practice. By definition, people will recognize self-evident principles either way, whether they try to act in accordance with irreducible normativity, or whether they just do what they most feel like doing.

But in your prior post, you noted:

Self-evident principles are principles that, by definition, (almost) everyone recognizes. (This may not mean that everyone will be motivated to act on them; for instance, amoral psychopaths may not have any intrinsic motivation to act on self-evident moral principles.)

And more generally, it seems like non-psychopaths often agree about a moral principle, yet don't act based on it, due to habits or lack of willpower or whatever.

So might the moral realism wager matter, even if it doesn't change what moral principles we "believed in", because it would give additional force to the argument that people should actually act on those principles? Maybe someone acting as if moral realism is true and someone who is instead "just do[ing] what they most feel like doing" will endorse the same "moral" principles, but differ in how often they act on them?

(I think this is a minor point, probably depends somewhat on what version of moral anti-realism one endorses, and is already sort-of addressed in the parts of your prior post surrounding the claim "Moral realism or not, our choices remain the same". Maybe it doesn't warrant an answer here.)

I will probably rename this post eventually to "Why the Irreducible Normativity Wager Fails." I now think there are three separate wagers related to moral realism:

  • An infinitely strong wager to act as though Irreducible Normativity applies
  • An infinitely strong wager to act as though normative qualia exist (this can be viewed as a subcategory of the Irreducible Normativity wager) 
  • A conditionally strong wager to expect moral convergence
    • I will argue that this is not per se a wager for "moral realism" but actually equivalent to a wager for valuing moral reflection under anti-realism; the degree to which it applies depends on one's prior intuitions and normative convictions. 

I don't find the first two wagers convincing. The last wager definitely works in my view, but since it's only conditionally strong, it doesn't quite work the way people think it does. I will devote future posts to wagers 2 and 3 in the list above. This post here only covers the first wager.