Lukas_Gloor's Comments

Moral Anti-Realism Sequence #1: What Is Moral Realism?

Thanks!

At the time I wrote this post, the formatting either didn't yet allow hyperlinked endnotes, or (more likely) I didn't know how to do the markdown. I plan to update the endnotes here so they become more easily readable.

Update 7/7/2020: I updated the endnotes.

Moral Anti-Realism Sequence #5: Metaethical Fanaticism (Dialogue)

Yeah, I made the AI really confident for purposes of sharpening the implications of the dialogue. I want to be clear that I don't think the AI's arguments are obviously true.

(Maybe I should flag this more clearly in the dialogue itself, or at least the introduction. But I think this is at least implicitly explained in the current wording.)

Moral Anti-Realism Sequence #5: Metaethical Fanaticism (Dialogue)
I think sometimes my metaethical fanaticism looks like that. And I imagine for some people that's how it typically looks. But I think for me it's more often "wanting to be careful in case moral realism is true", rather than "hoping that moral realism is true". You could even say it's something like "concerned that moral realism might be true".

Interesting! Yeah, that framing also makes sense to me.

Moral Anti-Realism Sequence #5: Metaethical Fanaticism (Dialogue)

Thanks for those thoughts, and for the engagement in general! I just want to flag that I agree that weaker versions of the wager aren't covered with my objections (I also say this in endnote 5 of my previous post). Weaker wagers are also similar to the way valuing reflection works for anti-realists (esp. if they're directed toward naturalist or naturalism-like versions of moral realism).

I think it's important to note that anti-realism is totally compatible with this part you write here:

Humanity should try to "keep our options open" for a while (by avoiding existential risks), while also improving our ability to understand, reflect, etc. so that we get into a better position to work out what options we should take.

I know that you wrote this part because you'd primarily want to use the moral reflection to figure out if realism is true or not. But even if one were confident that moral realism is false, there remain some strong arguments to favor reflection. (It's just that those arguments feel like less of a forced move, and there are interesting counter-considerations to also think about.)

(Also, whether one is a moral realist or not, it's important to note that working toward a position of option value for philosophical reflection isn't the only important thing to do according to all potentially plausible moral views. For some moral views, the most important time to create value arguably happens before long reflection.)

Moral Anti-Realism Sequence #3: Against Irreducible Normativity

It seems odd to me to suggest we have any examples of maximally nuanced and versatile reasoners. It seems like all humans are quite flawed thinkers.

Sorry, bad phrasing on my part! I didn't mean to suggest that there are perfect human reasoners. :)

The context of my remark was this argument by Richard Yetter-Chappell. He thinks that as humans, we can use our inside view to disqualify hypothetical reasoners who don't even change their minds in the light of new evidence, or don't use induction. We can disqualify them from the class of agents who might be correctly predisposed to apprehend normative truths. We can do this because compared to those crappy alien ways of reasoning, ours feels undoubtedly "more nuanced and versatile."

And so I'm replying to Yetter-Chappell that as far as inside-view criteria for disqualifying people from the class of promising candidates for the correct psychology go, we probably can't find differences among humans that would rule out everyone except a select few reasoners who will all agree on the right morality. Insofar as we try to construct a non-gerrymandered reference class of "humans who reason in really great ways," that reference class will still contain unbridgeable disagreement.

One example of why: I don't think we yet have a compelling demonstration that, given something like coherent extrapolated volition, humans wouldn't converge on the same set of values. So I think we need to rely on arguments, speculations, etc. for matters like that, rather than the answer already being very clear.

I haven't yet made any arguments about this (because this is the topic of future posts in the sequence), but my argument will be that we don't necessarily need a compelling demonstration, because we know enough about why people disagree to tell that they aren't always answering the same question and/or paying attention to the same evaluation criteria.

Moral Anti-Realism Sequence #4: Why the Moral Realism Wager Fails

Yes, that's the same intuition. :)

In that case, I'll continue clinging to my strange wager as I await your next post :)

Haha. The intuition probably won't get any weaker, but my next post will spell out the costs of endorsing this intuition as your value, as opposed to treating it as a misguiding intuition. Perhaps by reflecting on the costs and practical inconveniences of treating this intuition as one's terminal value, we might come to rethink it.

Moral Anti-Realism Sequence #3: Against Irreducible Normativity

Good question!

By "open-ended moral uncertainty" I mean being uncertain about one's values without having in mind well-defined criteria (either implicit or explicit) for what constitutes a correct solution.

Footnote 26 leaves me with the impression that perhaps you mean something like "uncertainty about what our fundamental goals should be, rather than uncertainty that's just about what should follow from our fundamental goals". But I'm not sure I'd call the latter type of uncertainty normative/moral uncertainty at all - it seems more like logical or empirical uncertainty.

Yes, this captures it well. I'd say most of the usage of "moral uncertainty" in EA circles is at least in part open-ended, so this is in agreement with your intuition that maybe what I'm describing isn't "normative uncertainty" at all. I think many effective altruists use "moral uncertainty" in a way that either fails to refer to anything meaningful, or implies under-determined moral values. (I think this can often be okay. Our views on lots of things are under-determined and there isn't necessarily anything wrong with that. But sometimes it can be bad to think that something is well-determined when it's not.)

Now, I didn't necessarily mean to suggest that the only defensible way to think that morality has enough "structure" to deserve the label "moral realism" is to advance an object-level normative theory that specifies every single possible detail. If someone subscribes to hedonistic total utilitarianism but leaves it under-defined to what degree bees can feel pleasure, maybe that still qualifies as moral realism. But if someone is so morally uncertain that they don't know whether they favor preference utilitarianism or hedonistic utilitarianism, or whether they might favor some kind of prioritarianism after all, or even something entirely different such as Kantianism, moral particularism, etc., then I would ask them: "Why do you think the question you're asking yourself is well-defined? What are you uncertain about? Why do you expect there to be a speaker-independent solution to this question?"

To be clear, I'm not making an argument that one cannot be in a state of uncertainty between, for instance, preference utilitarianism versus hedonistic utilitarianism. I'm just saying that, as far as I can tell, the way to make this work satisfactorily would be based on anti-realist assumptions. The question we're asking, in this case, isn't "What's the true moral theory?" but "Which moral theory would I come to endorse if I thought about this question more?"

Timeline of the wild-animal suffering movement

Dawkins wrote about it and said "it must be so." Maybe the timeline is about people who explicitly challenged that perception.

Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree

Does (2) sound like a roughly accurate depiction of your views?

Yes, but with an important caveat. The way you described the three views doesn't make it clear that 2. and 3. have the same practical implications as 1., whereas I intended to describe them in a way that leaves no possible doubt about that.

Here's how I would change your descriptions to make them compatible with my views:

  • A position in which there may not even be a single correct moral theory ((no change))

  • A position in which no strong claims can ever be made about what the single correct moral theory would be.

  • A position in which the only moral questions that have a correct (and/or knowable) answer are questions on which virtually everyone already agrees.

As you can see, my 2. and 3. are quite different from what you wrote.

Moral Anti-Realism Sequence #4: Why the Moral Realism Wager Fails

I meant it the way you describe, but I didn't convey it well. Maybe a good way to explain it is as follows:

My initial objection to the wager is that the anti-realist way of assigning what matters is altogether very different from the realist way, and this makes the moral realism wager question-begging. This is evidenced by issues like "infectiousness." I maybe shouldn't even have called that a counter-argument—I'd just think of it as supporting evidence for the view that the two perspectives are altogether too different for there to be a straightforward wager.

However, one way to still get something that behaves like a wager is if one perspective "voluntarily" favors acting as though the other perspective is true. Anti-realism is about acting on the moral intuitions that most deeply resonate with you. If your caring capacity under anti-realism says "I want to act as though irreducible normativity applies," and the perspective from irreducible normativity says "you ought to act as though irreducible normativity applies," then the wager goes through in practice.

(In my text, I wrote "Admittedly, it seems possible to believe that one’s actions are meaningless without irreducible normativity." This is confusing because it sounds like a philosophical belief rather than a statement of value. Edit: I have now edited the text to reflect that I was thinking of "believing that one's actions are meaningless without irreducible normativity" as a value statement.)
