[ Question ]

Are selection forces selecting for or against altruism? Will people in the future be more, equally, or less altruistic?

by Mati_Roy · 1 min read · 27th Mar 2020 · 4 comments

Including, but not limited to, selection forces for: genes, memes, economic power, and political power.

Motivation for asking: This is part of my analysis of whether we should aim to make philanthropy obsolete.


2 Answers

I wrote down some musings about this (including a few relevant links) in appendix 2 here.

Epistemic status: narrative driven; arm-chair thinking; contains large simplifications, suppositions, and speculations

Conclusion: I don't know whether the overall effect selects for or against altruism.

Historically

Humans might be good at detecting whether someone is altruistic. So from an evolutionary psychology perspective, altruism might act as a commitment mechanism for cooperativeness (but remember, we're Adaptation-Executers, not Fitness-Maximizers). Alternatively, similar alleles could be responsible for both cooperativeness and altruism. In either case, these seem like plausible explanations for why some amount of altruism was selected for, and would continue being selected for.

But I want to focus my answer mostly on speculating about new and future selection pressures for or against altruism. The term to search for to find the literature on its historical selection pressures is 'problem of altruism'. The above is just a quick thought, not a summary of the literature.

General

Narratives for increased selection

It could be that we now have greater opportunities for cooperation than we used to. It's possible to cooperate with people throughout the world, not just with your local tribe. Plus, with winner-takes-most financial dynamics, this could increase the benefits of forming large cooperating groups.

Also, a tribe of people sharing the same moral values will cooperate much more easily. A pure negative preference utilitarian giving money to another pure negative preference utilitarian knows that this money will be used in pursuit of a shared goal. Whereas a pure egoist can't as easily do this with other pure egoists, as they all have different goals / they all want to help different people (i.e. themselves, respectively). It's much cheaper for people sharing moral values to cooperate, as they don't have to design robust contracts.

Genes

Narratives for increased selection

A) It could be that altruistic people think that having more people in absolute terms, or more people like them in relative terms, is a good thing, and so on average make an effort to raise more children or to conceive more biological children, respectively.

B) It could be that once we get the technology to do advanced genetic engineering in humans, subsidies or laws encourage or force selecting prosocial genes for the benefit of the common good.

Narratives for decreased selection

A) It could be that altruistic people give resources away to the extent that they don't have enough left to raise as many children, or to raise them well enough.

B) It could be that altruistic people think it's wrong to create new people, either on deontological or utilitarian grounds. Deontological grounds could include being directly against creating new humans, or, indirectly, being against taking welfare money to do so. From a utilitarian perspective, they could be failing to see the longer-term consequences of the resulting selection effect, or they could rightfully have weighted this consideration as less important (or have come to the right conclusion for epistemically wrong reasons).

C) It could be that once we get the technology to do advanced genetic engineering in humans, people want their kids to care mostly about their family and themselves, and not as much about society.

Economic power

Related: Donating now vs later (on Causeprioritization.org)

Narratives for increased selection

It seems likely that egoists have faster diminishing returns on marginal dollars, and, as a consequence, are more risk-averse about making a lot of money. I.e. you can only save yourself once (sort of), but there are a lot of other people to save. Although if you have fringe moral values, they might be so neglected that this isn't as accurate.

As a potential example of altruistic people taking more risks: it seems more plausible that an egoist being offered 100M USD for zir startup would take the money than an altruistic person would, given that the altruistic person might still face only slowly diminishing returns on money at that level.

It could also be that altruistic people, caring about people in the future, are more likely to invest their money long-term, and so gain power over a larger fraction of the economy.

Narratives for decreased selection

It could be that philanthropists, by redistributing their wealth directly, through public goods, or by helping oppressed groups, see their relative capacity to influence the world diminish as they become less wealthy than those who don't give. Trivially, if they are rational, they would only do this if they expect it to be the best course of action. But their altruistic instincts might push them toward more rapid gratification, especially if they want to signal those instincts and other mechanisms, such as Donor-Advised Funds, don't allow them to do so as much.

Other

Ems

On pages 302-303 of "The Age of Em", Robin Hanson explains what ze thinks altruistic ems will donate money to and why they will choose those cause areas. Ze also says "Like people today, ems are eager to show their feelings about social and moral problems, and their allegiance to pro-social norms", although I think ze doesn't explain why; it might just be a premise of the book that ems are similar to humans a priori and simply live under different incentive structures.