A notion popular among some people interested in effective altruism is moral uncertainty[1].
There is much disagreement among intelligent people about what is right to do, and while we each have our own beliefs about the matter, it would be foolish to be 100% certain that we are correct while all these others are not. Until further evidence is in, or until much wider consensus is reached, the only rational approach is to spread your degrees of belief across different ethical theories.
I find this notion unconvincing because I view morality as subjective. For example, if I believe bribery is wrong and someone else believes it's morally acceptable, I think neither of us is (in)correct. Instead, we simply differ in our moral preferences, similarly to how I might think apples are tasty while someone else thinks they're gross. This view is sufficient to reject the notion of moral uncertainty described above, but there are at least two ways we can plausibly attempt to preserve the conclusion that we should accommodate alternative moral frameworks in our decision-making.
First, although it seems to me very unlikely, and even incomprehensible, that there is a correct[2] morality, one that is objectively true independent of whose perspective we use, the existence of a correct morality (a belief often called moral realism) is an empirical claim, so I do accept a modesty argument[1] for its possibility.
Start with the premise of
Nonrealist Nihilism: If moral realism is false, then nothing matters.
Now, suppose you think the probability of moral realism is P. Then when you consider taking some action, the expected value of the action is
P * (value if moral realism is true) + (1-P) * (value if moral realism is false)
= P * (value if moral realism is true) + (1-P) * 0
where the substitution of 0 follows from the Nonrealist Nihilism premise. Since the second term is zero regardless of what we do, only the value under moral realism can make a difference to our decisions. Therefore, we can prudentially assume that moral realism is true.
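To make the arithmetic concrete, here is a minimal sketch in Python (the numbers are made up purely for illustration):

```python
# Illustrative sketch of the prudential argument above (made-up numbers).
# Under Nonrealist Nihilism, the value of any action is 0 if realism is false,
# so only the realism branch can ever distinguish between actions.

def expected_value(p_realism, value_if_realism_true):
    value_if_realism_false = 0  # the Nonrealist Nihilism premise
    return (p_realism * value_if_realism_true
            + (1 - p_realism) * value_if_realism_false)

# Even with a tiny credence in moral realism, the ranking of actions is
# decided entirely by their value under realism.
print(expected_value(0.01, 100))  # 1.0
print(expected_value(0.01, -50))  # -0.5
```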
In the link above, Brian Tomasik points out some good reasons for rejecting this argument. But even if the argument holds, moral realism seems so weird and incomprehensible to me that I see no reason to prefer any possible moral realism (e.g. utilitarianism, libertarianism) over any other. Each possible moral realism is therefore canceled out by an equally likely opposite realism (e.g. utilitarianism says happiness is good, but anti-utilitarianism says happiness is bad, so the two cancel out), and the argument fails.
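To illustrate the cancellation I have in mind, here is a toy sketch with made-up, deliberately symmetric numbers:

```python
# Toy illustration of the cancellation claim (made-up, symmetric numbers).
# If utilitarianism and "anti-utilitarianism" are treated as equally likely and
# assign opposite values to the same outcome, their contributions sum to zero.

p = 0.5                       # indifference between the two candidate realisms
value_utilitarian = 10        # e.g. an action that produces happiness
value_anti_utilitarian = -10  # the mirrored theory scores the same action oppositely

net = p * value_utilitarian + p * value_anti_utilitarian
print(net)  # 0.0 -- the two possible realisms cancel out
```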
The second rescue of moral uncertainty is more tenable and, as far as I know, was first formalized by Ruairí Donnelly. We can call it anti-realist moral uncertainty. This is how I'd describe this idea:
I accept some uncertainty in what my future self will morally value. If I am a utilitarian today, I accept I might not be a utilitarian tomorrow. Therefore, I choose to give some weight to these future values.
I, personally, choose not to account for the values of my future self in my moral decisions. If I did, I might expect my future values to be more similar to the average human morality. So it seems to be simply a matter of personal preference. I think this reframing of moral uncertainty makes sense, even if I don't see a reason why I should adopt it.
Lastly, it is important to note that even without accounting for moral uncertainty, there are still good reasons to cooperate with people whose value systems differ from our own.
[1] I am not sure who first came up with these two terms (moral uncertainty and the modesty argument), so I just used the write-ups I see cited most often for each term.
[2] By correct, I mean correct in the way it is correct that the sky is blue.
Even if being a subjectivist means you don't need to account for uncertainty as to which normative view is correct, shouldn't you still account for meta-ethical uncertainty i.e. that you could be wrong about subjectivism? Which would then suggest you should in turn account for moral uncertainty over normative views.
I think you're kind of trying to address this in what you wrote about moral realism, but it doesn't seem clear or convincing to me. There are a lot of premises here (there's no reason to prefer one moral realism over another, we can just cancel each possible moral realism out by an equally likely opposite realism) that seem far from obvious to me and that you don't give any justification for.
In general, it seems overconfident to me to write off moral uncertainty in a few relatively short paragraphs, given how much time others have spent thinking about this in a lot of depth. Will wrote his entire thesis on this, and there are also whole books in moral philosophy on the topic. Maybe you're just trying to give a brief explanation of your view and not go into a tonne of depth here, though, which is obviously reasonable. But I think it's worth you saying more about how your view fits with and responds to these conflicting views, because otherwise it sounds a bit like you are dismissing them quite offhand.
Ah, I definitely could have gone into more detail. This was just meant to prompt discussion on an important topic.
I'll avoid posting (things like this) in the future. I'm sorry :(
I'm sorry you had a bad experience with this post.
We definitely want to make sure everyone feels comfortable and welcome contributing.
So for reference (and for anyone reading) if you notice a down vote and realise you could've justified your post better, or framed it more sensitively, you can easily put it into your drafts to work it over and submit it (or a similar new post) again later.
You shouldn't feel sorry about this. Why did you delete your account?? There is absolutely no reason to feel bad.
As a guy who has written a lot of stuff people hated in his life, I sympathise!
But I don't think this should discourage you from continuing to post. I disagreed with this post, but the only way to be right every time is to say nothing. And as people said it's an important and difficult topic to take on.
If you found the counterarguments convincing, just say so and adjust your views. People admire that kind of thing here. If you didn't, let us know why! :)
Yeah, I think it was a really good thing to prompt discussion of, the post just could have been framed a little better to make it clear you just wanted to prompt discussion. Please don't take this as a reason to stop posting though! I'd just take it as a reason to think a little more about your tone and whether it might appear overconfident, and try and hedge or explain your claims a bit more. It's a difficult thing to get exactly right though and I think something all of us can work on.
Meta: it may be surprising this post received so many downvotes. It is making a contribution to an important topic. I'm not sure how useful the contribution is (other comments raise several issues), but we usually don't want to put people off offering ideas that may have flaws.
I guess that what led to the downvotes is tone: there seems to be a high level of confidence in the idea, which is not adequately justified while also running contrary to default opinion.
Good point to raise Owen! I strongly agree that we don't want to put people off contributing ideas that might run against default opinion or have flaws - these kinds of ideas are definitely really useful. And I think there were points in this post that did contribute something useful - I hadn't thought before about whether a subjectivist should take into account moral uncertainty, and that strikes me as an interesting question. I didn't downvote the post for this reason - it's certainly relevant and it prompted me to think about some useful things - although I was initially very tempted to, because it did strike me as unreasonably overconfident.
I didn't mean to appear overconfident. I just meant to state my own views on the topic.
I'll avoid posting (things like this) in the future. I'm sorry :(
This kind of thing is hard. I wholly approve of you stating your own views, and wouldn't want to discourage posting things like this.
I'd guess that just changing the framing slightly (e.g. saying "These are my current thoughts:" at the start and "What do you think?" at the end) or adding in a couple more caveats would have been enough to avoid the negative reaction.
I hope you end up taking this response as useful feedback, and not a negative experience!
You don't account for the values of your future self, but do you account for the values of a version of yourself that is idealized in some appropriate way? E.g. more rational, having thought about morality for longer, smarter, etc. Whether this would have a significant impact on your values is an open question, which also depends on how you'd 'idealize' yourself. I'd be very interested in thoughts on how much we should expect our moral views to change upon further deliberation, by the way.
On moral realism, I assume you mean that we have absolutely no evidence about the truth of either utilitarianism or anti-utilitarianism, so we should apply a principle of indifference as to which one is more likely? I think I agree with that idea, but there still remains a slightly higher chance that utilitarianism is true, simply because more people think it is, even if we find their evidence for that questionable. Then of course there's still the question of why one should care about such an objective morality anyway. My approach would be to evaluate whether I'm an agent whose goal it is to do what's objectively moral, or whose goal it is to do some other thing that I find moral.
This post raises a bunch of questions for me:
If you were in a simulation or a dream, would you hold uncertainty about its behaviour, within a framework of subjectivity?
Do you believe in changing the rules that you use to make moral decisions as you learn things?
Do you think that these probabilities are nonzero and that they cancel each other out?
How do you respond to Will's thesis on this topic?:
Many moral questions are empirical questions in disguise. For example, you might value reducing suffering in conscious beings. You might believe animals are conscious, so you focus on reducing their suffering, since there seems to be the most low-hanging fruit there. However, it's wrong to have 100% certainty on empirical questions. Some people believe that animals aren't conscious (e.g. they believe that language is required for consciousness). If you focused on animal suffering and it turned out animals weren't conscious, you'd be wasting resources that could have been used to reduce human suffering.
I think that's approximately true, but I also think it goes the other way around as well. In fact, just a few hours before reading your comment, I made a post using basically the same example, but in reverse (well, in both directions):
One idea informing why I put it that way around as well is that "consciousness" (like almost all terms) is not a fundamental element of nature, with clear and unambiguous borders. Instead, humanity has come up with the term, and can (to some extent) decide what it means. And I think one of the "criteria" a lot of people want that term to meet is "moral significance".
(From memory, and in my opinion, this sequence did a good job discussing how to think about words/concepts, their fuzzy borders, and the extent to which we are vs aren't free to use them however we want.)
(Also, I know some theories would propose consciousness is fundamental, but I don't fully understand them and believe they're not very mainstream, so I set them aside for now.)
This page is also relevant, e.g.:
I too reject moral realism!
It occurs to me that this has big consequences. For example, some guys talk about being obligated under utilitarianism to give away almost all their income, or to devote themselves to far future folks who don't yet exist. Maybe the only barrier deflecting this crushing weight, they say, is that if you push yourself too hard then you might burn out. This never seemed satisfactory to me. But if morality is in our minds, then these obligations don't exist. There is no need to push ourselves even just shy of burning out. I am free.
One aspect of my moral uncertainty has to do with my impact on other people.
If other people have different moral systems/priorities, then isn't 'helping' them a projection of your own moral preferences?
On the one hand, I'm pretty sure nobody wants malaria - so it seems simple to label malaria prevention as a good thing. On the other hand, the people you are helping probably have very different moral tastes, which means they could think that your altruism is useless or even negative. Does that matter?
I think this is a pretty noob-level question, so maybe you can point me to where I can read more about this.