Hey everybody,
Recently, I've found it hard to retain my belief in altruism. I'm really hoping that one of you has something to say that might turn me back, because I don't want to lose my belief in this wonderful thing.
Ever since I was very young, I've found a utilitarian style of thinking very natural. I've always wanted to maximize happiness. I also decided that other people's happiness ought to matter, and because my thinking is very logical, I tried to roughly estimate how much more valuable my own happiness is than a stranger's. This was the question I came up with:
If you had to die to save x random Americans (or whatever your nationality happens to be) your age, what would the minimum number be?
I've asked this question of maybe 200 people over the years. It doesn't come up all the time, but every now and then I find it an interesting topic. About 80-90% of people will say a number between 1 and 10, and the rest will say some very high number such as 10,000,000. My number was somewhere between 3 and 15 (I know, a big range, but the question is hard to pin down anyway).
This definitely made me more passionate about altruism. If you really believe that your life is worth about a dozen lives, then you should dedicate your life to helping others, because there are so many ways to save or help far, far more people than that and still live a good life.
I'm only 20 right now, and although this was one of my core beliefs, it was certainly one of those 'easier said than done' beliefs, and I knew that. I was always very worried that I would reject this belief later in life in favor of selfishness. That seems to be what's happening now.
Recently, I have really put this belief to the test. In short, I never went to college and am a self-taught programmer, a pretty successful one. After reading 80,000 Hours' career guide, I realized that working in the field of artificial intelligence would be much, much more beneficial than working as a web developer and donating 20-30k a year. So, I started studying for the SATs and applying to colleges.
This went on for about two months. At the time, I was on an around-the-world backpacking trip, which I paused to do this work. I was still staying in a hostel, though, and many people would come up and ask me why I was studying. I used this as an opportunity to have a discussion about effective altruism with them.
While having these discussions, I would ask the question mentioned above about dying to save x number of people. I noticed, though, that the answers I was now getting were much higher than before. This probably had to do with the fact that I was prefacing the question with a discussion of effective altruism instead of discussing it afterwards. Something as small as that was radically changing the answers to a question that had been a core foundation of my belief in altruism.
So, I dug deeper. This is where I had a truly depressing realization. Upon talking with people, it now seems to me that people don't intrinsically value the happiness of a stranger. That is, they'll do something because they follow their heart (as do I), but ultimately they're doing it to not feel bad, to feel good, or to help a loved one. Even though they very often answered something between 1 and 10 before, the question is very flawed because it's too arbitrary. I think people were being more optimistic than realistic with their answers.
Because of the nature of belief, we find it very easy to believe what everybody else thinks and very hard to believe something that practically no one else thinks. The beliefs of others support ours, and when that support is gone, it's easy to find our own belief crumbling. After realizing that other people didn't intrinsically value the happiness of someone they didn't know, I questioned my own passion for helping strangers. Now, I have a hard time thinking of why I should intrinsically value the happiness of strangers, and so my logical belief in altruism has mostly gone away.
In my heart, I still care about helping others. I've always looked at effective altruism as being reached from two different paths. One is the logic that the approximate ratio of how important your life is versus others' is not extremely high, and that through effective altruism you can help a number of people far greater than whatever your ratio is (this is what I was talking about above). Two is that your heart wants to help people and that effective altruism is a great way to do that. I view the first as a much stronger belief. Following your heart more often leads to selfishness than it does to selflessness toward a stranger. This is why so many people are not donating effectively and why so many people choose careers that make them feel good about helping the world but don't actually help very much: these people aren't being altruistic out of logic, but out of emotion. Because the nature of emotion is selfish, they don't have much of a desire to figure out how to maximize their help for people; caring in itself is enough to make them feel good, even if it helps 1% as much as they otherwise could. The heart will help strangers, but it will rarely sacrifice a lot for strangers unless the decision is impulsive.
Even though my logical belief in altruism (stemming from no longer intrinsically valuing the happiness of a stranger) is gone, my heart will always want to help those who really need help through effective altruism. I don't think that's good enough, though, and I really hope somebody can reconvince me to believe in altruism logically instead of just emotionally. If this doesn't happen, I'll still donate 10-20% of my income to charity, but I won't want to make the big ten-year sacrifice of going to college and studying for a PhD in machine learning in order to finally work in artificial intelligence. I would actually enjoy working in artificial intelligence, but I would hate the ten years of studying involved. I think this is really bad, because I could be helping far, far more people down this path, even though it would make me less happy. When I logically believed in altruism I was willing to do this, but now I just don't care enough.


Hopefully you were able to follow that. I'm sorry if the reasoning is a bit messy; it was hard to explain in writing!

Comments

It sounds to me like you're learning about the world, and it's causing you to question your core beliefs. I have two pieces of (arguably) good news for you:

  1. Questioning your entire belief system while traveling in your early twenties is VERY NORMAL. Almost every teenager with strong convictions goes through this on their path to adulthood.

  2. Questioning your beliefs makes them better. Engaging with different people and different belief systems makes your beliefs more nuanced and robust. Other people may have seen something you've missed! Alternatively, you may develop a new appreciation for the importance of your work. Either way, whether you decide to altruistically study machine learning or not, your thinking on the topic of moral philosophy shouldn't stop here. It's important to remain open to new ideas in the long term; all the most interesting people are.

I hope that's helpful, and I hope you can find interesting and compassionate people to discuss this with in real life. Good luck!

You seem to be working under the assumption that we have either emotional or logical motivations for doing something. I think that this is mistaken: logic is a tool for achieving our motivations, and all of our motivations ultimately ground in emotional reasons. In fact, it has been my experience that focusing too much on trying to find "logical" motivations for our actions may lead to paralysis, since absent an emotional motive, logic doesn't provide any persuasive reason to do one thing over another.

You said that people act altruistically because "ultimately they're doing it to not feel bad, to feel good, or to help a loved one". I interpret this to mean that these are all reasons which you think are coming from the heart. But can you think of any reason for doing anything which does *not* ultimately ground in something like these reasons?

I don't know you, so I don't want to suggest that I know how your mind works... but reading what you've written, I can't help getting the feeling that the thought of doing something which is motivated by emotion rather than logic makes you feel bad, and that the reason why you don't want to do things which are motivated by emotion is that you have an emotional aversion to it. In my experience, it's very common for people to have an emotional aversion to what they think emotional reasoning is, causing them to convince themselves that they are making their decisions based on logic rather than emotion. If someone has a strong (emotional) conviction that logic is good and emotion is bad, then they will be strongly motivated to try to ground all of their actions in logical reasoning, all the while being unmotivated to notice the reason why they are so invested in logical reasoning. I used to do something like this, which is how I became convinced of the inadequacy of logical reasoning for resolving conflicts such as these. I tried and failed for a rather long time before switching tactics.

The upside of this is that you don't really need to find a logical reason for acting altruistically. Yes, many people who are driven by emotion end up acting selfishly rather than altruistically. But since everyone is ultimately driven by emotions, as long as you believe that there are people who act altruistically, it follows that it's possible to act altruistically while being motivated emotionally.

What I would suggest is embracing the fact that everything is driven by emotion, and then trying to find a solution which satisfies all of your emotional needs. You say that studying for a PhD in machine learning would make you feel bad, and also that not doing it feels bad. I don't think that either of these feelings is going to just go away: if you simply chose to do a machine learning PhD, or simply chose not to, the conflict would keep bothering you regardless, and you'd feel unhappy either way. I'd recommend figuring out the reasons why you would hate the machine learning path, and also the conditions under which you feel bad about not doing enough altruistic work, and then finding a solution which would satisfy all of your emotional needs. (CFAR's workshops teach exactly this kind of thing.)

I should also mention that I was recently in a somewhat similar situation: I felt that the right thing to do would be to work on AI, but also that I didn't want to. Eventually I came to the conclusion that the reason I didn't want to was that a part of my mind was convinced that the kind of AI work I could do wouldn't actually be as impactful as other things I could be doing, and this judgment has mostly held up under logical analysis. This is not to say that doing the ML PhD would genuinely be a bad idea for you as well, but I do think it would be worth examining the reasons why exactly you wouldn't want to study. Maybe your emotions are actually trying to tell you something important? (In my experience, they usually are, though of course it's also possible for them to be mistaken.)

One particular question I would ask: you say you would enjoy working in AI, but you wouldn't enjoy learning the things you need to learn in order to work in AI. This might make sense in a field where you are required to study something entirely unrelated to what's useful for the job. But particularly once you get to your graduate studies, much of what you study will be directly relevant to your work. If you think you would hate being in an environment where you get to spend most of your time learning about AI, why do you think you would enjoy a research job, which also requires you to spend a lot of time learning about AI?

It seems that there are two factors here leading to a loss in altruistic belief:

1. Your realization that others are more selfish than you thought, leading to a loss of support as you realize your beliefs are less common than you assumed.

2. Your uncertainty about the logical soundness of altruistic beliefs.

Regarding the first, realize that you're not alone: there are thousands of us around the world also engaged in the project of effective altruism, including potentially in your city. I would check whether there are local effective altruism meetups in your area, or a university group if you are already at university. You could even start one if there isn't one already. Getting to know other effective altruists on a personal level is a great way to maintain your desire to help others.

Regarding the second, what are the actual reasons people answer "100 strangers" to your question? I suspect that the rationale isn't on strong ground: it is mostly born of a survival instinct cultivated in us by evolution. Of course, for evolutionary reasons, we care more about ourselves than we care about others, because those that cared too much about others at the expense of themselves died out. But evolution is blind to morality; all it cares about is reproductive fitness. We, however, care about so, so much more. Everything that gives our lives value (the laughter, love, joy, etc.) is not optimized for by evolution, so why trust the answer "100 strangers" if it is just evolution talking?

I believe that others' lives have an intrinsic value on par with my own, since others are just as capable of all the experiences that give our lives value. If I experience a moment of joy, versus if Alice-on-the-other-side-of-the-world-whom-I've-never-met experiences a moment of joy, what's the difference from "the point of view of the universe"? A moment of joy is a moment of joy, and it's valuable in and of itself, regardless of who experiences it.

Finally, if I may make a comment on your career plan: I might apply for career coaching from 80,000 Hours. Spending ten years doing something you don't enjoy sounds like a great recipe for burnout. If you truly don't think you'll be happy getting a machine learning PhD, there might be better options that will still allow you to have a huge impact on the world.

I think there are more efficient paths to working on AI Safety than a PhD. This 80,000 Hours podcast episode has the story of someone who, if I remember correctly, decided to skip a PhD and start working directly: https://80000hours.org/podcast/episodes/olsson-and-ziegler-ml-engineering-and-safety/

Even though my logical belief in altruism (stemming from no longer intrinsically valuing the happiness of a stranger) is gone, my heart will always want to help those who really need help through effective altruism. I don't think that's good enough, though, and I really hope somebody can reconvince me to believe in altruism logically instead of just emotionally.

Maybe doing what your heart wants to do is "good enough", if a lot of people who seem very logical and reasonable to you have come to the same conclusion through more "logical" routes?

I've been involved with EA for four years and work full-time at an EA organization, but I still wouldn't call my commitment to EA an especially "logical" one. I'm one of those unusual people (though they're much more common within EA) who grew up with a strong feeling that others' happiness mattered as much as mine; I cried about bad news from the other side of the world because I felt like children starving somewhere else could just as easily have been me.

I reached that conclusion emotionally -- but when I went to college and began studying philosophy, I realized that my emotional conclusion was actually also supported by many philosophers, plus thousands of other people from all walks of life who seemed to be unusually thoughtful in their other pursuits. Seeing this was what convinced me I'd probably found the right path, and I haven't seen strong evidence against EA being broadly "correct" since I joined up.

So even if you don't "logically" value the happiness of strangers, I think it's safe to trust your heart, if doing so is leading you to a path that seems better for the world, and you're still using logic to make decisions along that path. Even if you get lost in a strange city and stumble upon your destination by accident, that doesn't mean you need to leave and find your way back using a map.
