TLDR: I think morality is subjective. My ideal society would maximize total happiness while minimizing happiness inequality for as many beings as possible. My morals could change, and I don’t always do what I feel is moral.

I don’t think there is an objective morality. 

I can’t prove that slavery is wrong. I can’t prove child porn is wrong. I can’t prove anything is morally right or wrong. 

I’m not 100% certain what the correct morality for me is either. At times, I struggle to determine what I believe. 

But, overall, I’ve formed many opinions. Some are more strongly held than others.

And I encourage others to agree with my beliefs. Generally, the more values people share with me, the more inclined we’ll be to work together. We can help each other make the world better to us.

If morality is subjective, why do I form moral opinions and try to act on them? I think I do that for the same reason I think I do anything else. To be happy.

My Moral Code

I think everyone matters equally. As much as I love myself, I can’t bring myself to believe I deserve more happiness than others.

I didn’t control my genes. I could’ve had a mental or physical disability. I could’ve inherited genes that made me more likely to have the “dark triad” traits of narcissism, Machiavellianism, and psychopathy. There may be genes that lead to pedophilia too.

I didn’t control the environment I was born into. I could’ve been born into slavery. I could’ve been born as an animal on a factory farm. I could’ve been born into a dystopian future. 

I could’ve been anyone. I’m fortunate.

To me, the ideal society would maximize total happiness while minimizing happiness inequality for as many beings as possible.

Morality Isn’t That Simple For Me

While everyone matters equally to me, some people make more of an impact than others. Imagine a hypothetical scenario: you have to go back in time to 1920 and kill either Mahatma Gandhi or five random people. My gut instinct is to save Gandhi. I’d bet he’s done more to maximize and equalize happiness than the average five people.[1]

I’d feel more confident about my answer if I could know what would’ve happened if Gandhi died in 1920. If other leaders would’ve stepped up to make the same impact as Gandhi, I’d probably choose to kill Gandhi.[2]

Equality

Equality (of happiness) matters to me. I’m not sure how much. I couldn’t tell you if I’d prefer all beings to have 1% more total happiness if that increased happiness inequality by 5%.

Outside of mulling over philosophical hypotheticals, I can’t recall a time when my uncertainty about how much to value equality has made a decision difficult. So I’m not planning to make a concerted effort to determine how much equality matters to me.[3]

Uncertainty

My values have changed many times in the past. They’ll probably change again.[4] If I had been born as a Christian white man in 1600s Europe, I probably would’ve been racist, sexist, and intolerant of other religions.[5] I opposed gay marriage until 2004.

If I could live a few hundred more years, I’d bet my beliefs change significantly. So I won’t advocate for anything that leads to significant value lock-in.

I don’t think that future people’s morals are necessarily better. As I said, I don’t think morality is objective. My point is that I’ve been happy with how humanity’s moral views have evolved so far. I’m cautiously optimistic that won’t change.[6]

Why I Don’t Always Follow My Morality

I can’t scientifically explain my behavior.[7] I often feel like there are different parts of me fighting each other.[8] Sometimes I feel like a “moral part” of me loses control to another part of me. For example, a fearful part of me could push me to try to please someone. Other times, I look back and feel like one part of me has deluded my “moral part.” That’s how I’ve convinced myself it was productive to play One Night Ultimate Werewolf to help me develop my idea for a reality show.[9] I don’t think that’ll help anymore.[10]

Even when I feel I’m trying to be moral, making decisions is complicated. I don’t see how I can know I’m working optimally to achieve my moral goals. And I occasionally question what’s moral to me in the first place.

I suspect the “part of me” that always wins out is the one that brings me the most immediate happiness.

The strongest part of me right now is writing this post. I don’t know if that’s a moral part of me, a part of me that wants to fulfill my potential as a writer, or a part of me that wants people to like me. It’s probably some combination of all of them and more.

But the strongest moral part of me right now reminds me that I didn’t have to be me. I could’ve been anyone. It hopes I remember that more.

(cross-posted from my blog: https://utilitymonster.substack.com/p/my-morality)
 

  1. ^

    I used Gandhi in this example because I thought he represented an uncontroversial “good” figure. Since publishing this post, I’ve learned that he isn’t as well regarded as I’d thought.

  2. ^

I’d decide based on the amount of happiness Gandhi and the average person in 1920 had.

  3. ^

    To determine how much equality matters to me, I’d pretend I could quantify happiness. Next, I’d ask myself hypotheticals such as "Would I rather give 1 happiness point to person A who has -1 million happiness points or give 1 million happiness points to person B, who has 1 million happiness points?" I’d use these answers to help me determine a mathematical formula that expresses the tradeoffs I’d make in any situation.
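As a sketch, the kind of formula I might end up with could score a society as total happiness minus a weighted penalty for how spread out happiness is. The penalty form (population standard deviation) and the weight used below are illustrative placeholders, not values I’ve settled on:

```python
# A sketch of one possible tradeoff formula. The penalty form
# (population standard deviation) and the weight 2.5 are illustrative
# placeholders, not values I've settled on.
from statistics import pstdev

def welfare(happiness, inequality_weight):
    """Score a society: total happiness minus a penalty for spread.

    happiness: per-being happiness points (can be negative).
    inequality_weight: how many points of total happiness one unit of
        spread (population standard deviation) is worth giving up.
    """
    return sum(happiness) - inequality_weight * pstdev(happiness)

# The hypothetical above: give 1 point to person A (at -1,000,000
# points) or 1,000,000 points to person B (at +1,000,000 points)?
a, b = -1_000_000, 1_000_000
help_a = [a + 1, b]
help_b = [a, b + 1_000_000]

# A weight of 0 ignores equality entirely and prefers helping B;
# a weight of 2.5 is enough to prefer helping A.
print(welfare(help_a, 2.5) > welfare(help_b, 2.5))  # True
print(welfare(help_a, 0) > welfare(help_b, 0))      # False
```

With the weight at 0, the formula reduces to plain total-happiness maximization; past a weight of roughly 2 in this example, helping the worse-off person wins. Answering enough hypotheticals like this would pin down the weight (or suggest a different penalty form entirely).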

  4. ^

    My morals have already changed since I published this post. Originally, I’d said I wanted to maximize total utility while minimizing utility inequality for as many beings as possible. I’ve now replaced the term utility (i.e., what anyone fundamentally wants) with happiness (i.e., positive emotional states, good feelings). At the time, I said I used utility instead of happiness because people have told me their desires don’t reduce to happiness. And if anyone ultimately wanted other things or feelings besides happiness, I wanted them to have that.

    I no longer feel that way. If someone fundamentally wants freedom, justice, dignity, or whatever they claim to value, and none of those things make them happy, I don’t care if they get them.

  5. ^

    This article’s similar claim inspired this thought. It seems reasonable. But I can’t find any surveys on racism, sexism, or religious intolerance from 1600s Europe.

  6. ^

    Over the long term. At some point, I’d bet I’ll think something like, “My values align more with people in 2022 than with people in 2023.”

  7. ^

    I think there’s ultimately a scientific way to explain my behavior. But I don’t know enough science to do that. So instead I use mumbo jumbo.

  8. ^

    The yearning octopus from this article describes these feelings well.

  9. ^

    If someone who shares enough of my values wants to produce a reality show, I’d be excited to explain my idea to them. I think it has some promise, but it’s complicated and unpolished.

  10. ^

    If I use the original cards.

Comments

I appreciate your philosophy being written in a manner that does not require decoding.

"I don’t think there is an objective morality. "

- If a person, such as myself, believes that the value we give to the pursuit of happiness and the avoidance of pain is arbitrary (in the sense that we appear to be programmed, for evolutionary survival purposes, to give worth to these emotionally attractive ideas), then a foundation for objective morality is lost, and any selfish or selfless behaviour is ultimately performed to indulge our comfortable delusions.

"I can’t scientifically explain my behavior.[7] I often feel like there are different parts of me fighting each other.[8] Sometimes I feel like a “moral part” of me loses control to another part of me. For example, a fearful part of me could push me to try to please someone."

- I think we're ultimately controlled by our emotions. While beliefs do alter emotions, other factors may overpower them. For this reason, I suppose our behaviour can only, at best, roughly approximate our belief about what our behaviour ought to be (utilitarian or otherwise).