Altruistic action is dispassionate

by Milan_Griffes · 1 min read · 30th Mar 2019 · 18 comments



Epistemic status: speculating, hypothesizing

To a first approximation, there are two types of motivation for acting – egoistic and altruistic.

Almost immediately, someone will come along and say "Wait! In fact, there's only one type of motivation for acting – egoistic motivation. All that 'altruistic' stuff you see is just people acting in their own self-interest along some dimension, and those actions happen to help out others as a side effect."

(cf. The Elephant in the Brain, which doesn't say exactly this but does say something like this.)

In response, many people are moved to defend the altruistic type of motivation (because they want to believe in altruism as a thing, because it better matches their internal experience, because of idealistic attachments; motivations vary).

I'm definitely one of these people – I think the altruistic motivation is a thing, distinct from the egoistic motivation. Less fancily – I think that people sometimes work to genuinely help other people, without trying to maximize some aspect of their self-interest.

Admittedly, it can be difficult to suss out a person's motivations. There are strong incentives for appearing to act altruistically when in fact one is acting egoistically. And beyond that, there's a fair bit of self-deception – people believing or rationalizing that they're acting altruistically when in reality their motivations are self-serving. (This gets confusing to think about, as it's not clear when to disbelieve self-reports about a person's internal state.)

Here's a potential heuristic for determining whether you're acting altruistically or egoistically – altruistic action tends to be dispassionate. The altruist tends not to care very much about their altruistic actions. They are unattached to them.

It's a bit subtle – an altruistic actor still wants things to go well in the situation they're acting upon. They're motivated to act, after all. But that care seems distinct from caring about the actions themselves – about how those actions will be received and perceived.

The locus of their care is in the other people involved in the situation – if things go better for those people, the altruist is happy. If things go worse, the altruist is sad. It doesn't matter who helped those people, or what third parties thought of the situation. It doesn't matter who got the credit. Those considerations are immaterial to the altruist. They aren't the criteria by which the altruist is judging their success.

This heuristic doesn't help very much for determining whether other people are acting from altruistic or egoistic bases (though if you see someone paying particular attention to optics, PR, etc., that may be a sign that they are being moved more by egoistic considerations in that particular instance).

I think this heuristic does help introspectively – I find that it helps me sort out the things I do for (mostly) altruistic reasons from the things I do for (mostly) egoistic reasons. (I do a large measure of both.)

Cross-posted to my blog.



Comments

I agree with this paragraph: "The locus of their care is in the other people involved in the situation – if things go better for those people, the altruist is happy. If things go worse, the altruist is sad. It doesn't matter who helped those people, or what third parties thought of the situation.... "

But I don't think of the word 'dispassionate' ('not influenced by strong emotion') when I try to describe those behaviours. An altruistic person could have very strong feelings about the outcomes for the other person - I don't see how 'dispassionate' comes into it at all.

Yeah, I think my title is too lossy. (Open to suggestions for alternatives!)

I'm trying to point to this thing where the altruist has basically no feelings / emotions about their particular actions. They have feelings about the situation in which they're acting, and/or about other people in the situation.

So regarding their actions, the altruist is dispassionate.

My dictionary backs me up, somewhat: "Altruistic – showing a disinterested and selfless concern for the well-being of others..."

I disagree with the claim 'Altruists are dispassionate' because it suggests that altruists have no feelings about anything, including outcomes for people they want to help.

I'd agree with the claim 'Altruists don't care who gets the credit.'

Agreed, I think.

What do you think about the claim "Altruists are dispassionate regarding the particular actions they take?"

I'd agree with that, although it's not very catchy :P


Yeah, it'll have to pass through the dank EA memes filter for any hope of catchiness.

I understand that you aren't saying that altruism is completely unemotional, but I still want to emphasize the role that emotion plays. I do not distinguish too sharply between things that I want for personal reasons, and things that I want for altruistic concerns. Personally, when I learned about utility functions, it was a watershed moment for my understanding of ethics.

If you describe an agent as having a utility function, it means that all of its preferences are commensurate. To put it another way, the agent might want to have a cup of coffee and also want world peace. Importantly, the two preferences are the same type -- I don't distinguish between moral wants and non-moral wants.
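This picture of commensurate preferences can be sketched in a few lines of code. The names and weights below are invented for illustration – the comment doesn't specify any particular function – but the point carries: when one utility function scores whole world-states, a "personal" want and a "moral" want trade off on a single scale rather than living in separate categories.

```python
# Hypothetical sketch: a single utility function over world-states.
# Weights are invented for illustration; the point is that personal
# and altruistic goods feed into one number, so they are commensurate.

def utility(world):
    """Score a world-state on a single scale."""
    return (
        1.0 * world["cups_of_coffee"]    # a personal want
        + 1000.0 * world["peace_level"]  # an altruistic want, same scale
    )

# Because both terms contribute to one number, the agent can compare
# "coffee now" against "slightly more peace" directly:
with_coffee = utility({"cups_of_coffee": 1, "peace_level": 0.50})
more_peace = utility({"cups_of_coffee": 0, "peace_level": 0.51})
assert more_peace > with_coffee  # here, a bit more peace outweighs the coffee
```

Nothing in the sketch marks one term as "moral" and the other as "non-moral" – which is exactly the commenter's point.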

Therefore, when I say that I am altruistic, I am not saying that it is my duty to be so. When I put my biases aside and dispassionately calculate the action with the highest utility, it is because I truly believe that being dispassionate is the best way to get what I want. I would do the same for actions that concern my own life and feelings.

Splitting our motivation into two pieces, one personal, and one moral, seems like a remnant of our evolutionary past. It seems to me that people naturally believe in social norms, moral standards, duty, virtues and these don't always align with what they personally want. I seek to dissolve this whole dichotomy: there is simply a world that I want to be in, and I am trying to do whatever is necessary to make that world the real one.

I'd argue that humans would actually be better understood as an aggregate of agents, each with their own utility function. In your case, these agents might cooperate so well that your internal experience is that you're just one agent, but that's certainly not a human universal.

Yeah, there are many possible ways to frame this. I like the idea of a coherent agent, but that might just be the part of me capable of putting verbal thoughts on a forum page. In any case, over time I've experienced a shift from viewing preferences as different types which compete, to viewing preferences as all existing together in one coherent thread. Of course, my introspection is not perfect, but this is how I feel when I look inward to find what I really want.

I do not claim that this is what other people feel. However, to the extent that I find the idea pleasing, I certainly would like it if people shared my view.

At the very least, I agree that one coherent thread is more healthy and something to strive for, but in choosing a thread you might want to be aware of the various stakeholders and their incentives. I find that counting myself and my needs into my moral framework makes my moral framework more robust.

I realise that I've been implicitly assuming this is true, which made me resist optimizing for impressions – if I did optimize for impressions, I could no longer convince myself that I was acting altruistically. The awful and hard-to-accept reality is that you sometimes do have to convince people in order for your work to be supported.

For sure.

I think there's a complicated relationship between altruistic and egoistic motivations. Oftentimes you can have a larger post hoc positive impact by acting egoistically (because this increases your reputation, your deployable capital, and/or other relevant resources).

So the egoistic motivation seems super important! I'm just pointing out that I've found it helpful to get more internal clarity about when I'm acting out of self-interest versus when I'm acting altruistically.

If my values say "I should help lots of people", and I work to maximize my values (which makes my life meaningful), which category does that fit into? Does it matter if I'm doing it "because" it makes my life meaningful, or because it helps other people?

To me that last distinction doesn't even make a lot of sense - I try to maximize my values BECAUSE they're my values. Sometimes I think the egoists are just saying "maximize your values" and the altruists are just saying "my values are helping others" and the whole thing is just a framing argument.

Eh, I think there are probably two separate motivations here:

  • Doing things that help other people
  • Doing things that make you believe that you are helping other people (and thus making your life meaningful)

And I think those motivations overlap substantially, such that you can do actions that fulfill both motives. But they do strike me as separate, such that you can do actions that fulfill one but not the other.

Sure, but if one has the value of actually helping other people, that distinction disappears, yes?

As an example of a famous egoist, I think someone like Ayn Rand would say that fooling yourself about your values is doing it wrong.

I'm not clear on the crux of our disagreement, or if we're even disagreeing at all.

I think my crux is something like "this is a question to be dissolved, rather than answered"

To me, trying to figure out whether a goal is egoistic or altruistic is like trying to figure out whether a whale is a mammal or a fish - it depends heavily on my framing and why I'm asking the question, and points to two different useful maps that are both correct in different situations, rather than something in the territory.

Another useful map might be something like "is this eudaimonic or hedonic egoism?", which I think can get less squirrelly answers than the "egoistic or altruistic" frame. Another useful one might be the "Rational Compassion" frame of "Am I working to rationally optimize the intuitions that my feelings give me?"

I think most actions-in-the-world result from a (very) complicated matrix of motivations inside the actor's head.

I think it's very rare for an action to be entirely driven by altruistic motivations, or entirely by egoistic motivations.

I do think that many actions are mostly driven by altruistic motivations, and many others are mostly driven by egoistic motivations. (And I've found it personally helpful to get more clarity on when I'm acting from a mostly altruistic basis, versus when I'm acting from a mostly egoistic basis.)