[ Question ]

Should you optimise for the long-term survival of people who don't care for their own long-term survival?

by Samuel Shadrach · 1 min read · 3rd Oct 2021 · 11 comments


Tags: Population ethics · Longtermism · Metaethics · Frontpage

Or, more broadly: should we optimise over any objective or utility of other people, expressed in any form (utilitarian, deontological) and over any timeframe, unless those objectives are expressed by the people themselves?

 

I ask because there is a clear paternalistic tendency here, which could be good or bad. It has recurred throughout human history. Some examples:

 - Bad - racism has been justified as a means to "civilise" people for their own good

 - Controversial - governments and societies deny people the right or means to commit suicide, for their own good

 - Good - ??

 

I also ask because one can observe such tendencies in some of the people espousing various forms of long-termism, utilitarianism, and so on, on this site. Should you create anti-ageing tech and claim it is a good for others if most people don't want it? Should you try to ensure the long-term survival of civilisation if most of civilisation isn't interested in long-term survival (or, even if they are interested, if it's not their primary concern)? Should you encode utilitarian principles into public policy if most people are neither utilitarian nor have a particular preference that their government be utilitarian?

 

I don't have a strong opinion on this yet, but I do see it as an important question to ask, lest we give the wrong people or ideas too much weight: power, knowledge, culture, etc.


1 Answer

Two quick thoughts:

  1. I would definitely say there are good examples of so-called "paternalistic" policies: some people may engage in acts (e.g., suicide attempts) because they are suffering from temporary or long-term mental impairment. Additionally, I think nudge policies like opt-out instead of opt-in for saving money have generally been held up as good policy interventions. More broadly, I'd suggest there are many kinds of health and safety regulations which, although far from perfect, have on the whole probably helped people who would otherwise have willingly (foolishly) bought some bad product, e.g., medicine or building supplies (although I certainly would not stake the overall point on that one example). By and large, we recognize that children are not well equipped to make good decisions for themselves, and there is no magic age at which someone becomes perfectly rational. However, I will say that once people reach around the age of ~20, they develop a level of self-rationality that typically does better than a centralized government at maximizing their own preferences, though still not universally.
  2. Some people’s rejection of life-extension technology/medicine may partially be a coping mechanism with the belief that short life is inescapable.

Second, briefly, I'll add that "civilization" is not some monolithic moral patient, hive mind, or other such thing: you can't broadly say "civilization doesn't want to be saved." Regardless, I'll bite the (nerf?) bullet here and say bluntly: I don't really think it matters much if a hypothetical majority of the current ~7B people don't want civilization to continue. It is important to consider why so many people might hypothetically think civilization shouldn't continue (e.g., people have determined that the growth of civilization will produce net suffering ove...

Samuel Shadrach · 2mo

Thanks for the response.

> they develop a level of self-rationality that typically does better than ... in maximizing their own preferences

What does it mean for someone to undertake actions that are not maximising their own preferences? What does it mean to be rational when it comes to moral or personal values? Would I be right in assuming you're using a model where people have terminal goals which they can self-determine, but are then supposed to "rationally" act in favour of those terminal goals? And that, if someone is not taking decisions that will take them closer to those goals (as judged by a rational mind), you feel it is morally acceptable (if not obligatory) to take over their decision-making power?
Harrison D · 2mo

I want to be clear that there are certainly a lot of wrong ways to approach this, and that one should be very careful whenever they try to override or restrict someone's decision-making: generally, the instances where this violation of autonomy is clearly a good thing are quite rare in comparison to situations where it would be a bad thing. Also, I'll clarify that "maximize their own preferences" probably wasn't the best way of phrasing it (I typed that message in a rush while in transit); a more accurate phrasing would have been something like "maximizes their wellbeing". The point about preferences/wellbeing, though, is that there are times when people, whether as children, teens, adults, or seniors, want to do something despite the fact that it would be detrimental to their short- or long-term wellbeing. In some of the more extreme cases, it may even be that a few months or years later the person would look back and say, "Wow, I'm really glad you stopped me from going through with that." As to why someone might want to make a decision that you/an outsider can confidently foresee as detrimental to their wellbeing: sometimes that is due to impaired decision-making capabilities from some mental disorder (e.g., bipolar disorder) or substance abuse. In some of these scenarios, intervention can definitely be justified (although one still has to be careful, including to not make the situation worse). Sometimes the poor decision-making is due to the fact that people make mistakes and/or otherwise fail to appreciate the dangers of their actions, and they don't want to listen to advice. Sometimes the poor decision-making is simply lack of thinking or laziness (e.g., the case for opting people in to savings plans). As to your assumptions about my model: I am supposing that people have values, preferences, etc., but that their explicit/stated preferences at a given point in time may not reflect what is actually beneficial to them overall, especially not in the long term.
Samuel Shadrach · 2mo

Thanks for replying again. I'm just wondering if there's a way to condense down the set of rules or norms under which it is acceptable to take away someone's decision-making power, or to personally take decisions that will impact them without respecting their stated preferences. If I try rephrasing what you've said so far:

1. People with impaired mental capabilities. Is it possible to universally define what classifies as mentally impaired here? Would someone with low IQ count? Someone with a brain disorder from birth? Someone under temporary psychedelic influence? Would an AI considering all humans stupid relative to its own intelligence count?

2. People whose actions or self-declared short-term preferences differ from _____. Should the blank be filled with "their self-declared long-term preferences" or "what you think their long-term preferences should be"? Or something else?

I'm trying to understand what exactly wellbeing means here and who gets to define it.
Harrison D · 2mo

I think it's really tough to try to make a simplistic + universal set of rules. It's also really important to make sure you have a clear sense of what you are asking: "acceptable" (which is more in line with either a deontological framework, or with "would a utilitarian framework support social condemnation of this action [regardless of whether the action was itself utilitarian]") may not be the same as "beneficial" (per a utilitarian framework). If it's the latter, you might have rules of thumb, but you will certainly struggle to get something comprehensive: there will be so many exceptions and fringe cases. If it's the former ("acceptable"), it's perhaps not quite as impossible, but even still I am skeptical that a solid ruleset could be devised or would be worth devising. I listed some of the most common examples of where you may find exceptions to the principle of not violating autonomy (e.g., verifiably foolish/immature behavior, failure to understand the long-term consequences, mental impairment). In the end, nobody "gets to define" wellbeing in some "cosmic authority" sense: a parent, friend, government, or even a stranger at various times might be able to make the determination. I think it's better to approach it more loosely, identifying principles while recognizing that there will be exceptions. For example, it's worth highlighting that parents tend to 1) be making decisions for children (who are less rational/mature); 2) know their own children better; and 3) be less personally biased/more motivated to care about their children's wellbeing than the government or a stranger. But that still doesn't mean parents always ought to "get to decide" (though, as a matter of law/policy, there are strong justifications for being hesitant about intervening in parenting). If your goal is "how should we set policy", that makes it more answerable (and worth answering), but I don't know what my answer would be on such a broad level. My objections/points/examples a...
Samuel Shadrach · 2mo

Great answer, and I tend to agree that a 100% comprehensive ruleset may be unobtainable. I wonder if we could still get meaningful rules of thumb even if not 100% comprehensive. And maybe these rules of thumb for which social norms are good can be generalised across "whom" you're setting social norms or policy for. Maybe the social norms that are good for "X choosing to respect or disrespect Y's autonomy" are similar whether:

- X and Y are equal-standing members of the LW community
- X is the parent of Y
- X is a national law-making body and Y is its citizens
- X is programming the goals for an AGI that is likely to end up governing Y

And, as you mention, rules conditional on mental impairment or a sense of long-term wellbeing might end up on this list. Maybe I'll also explain my motivation for wanting to come up with such general rules even though it seems hard. I feel that we can't say for sure who will be in power (X) and who will be subjected to it (Y) in the future, but I do tend to feel power asymmetries will grow. And there is some non-trivial probability that people from certain identifiable groups (scientists in certain fields, members of the LW community, etc.) end up in those positions of power. It might therefore be worthwhile to cultivate those norms right here: it feels easier to do any form of moral advocacy on someone before they are in power rather than after. I understand if you still feel my approach to the problem is not a good one; I just wanted to share my motivation anyway.
Harrison D · 2mo

This seems like a good point. And to your broader question: I do think it's possible to get general rules of thumb (for personal use, though) and to identify norms that ought to be socially endorsed/pushed. However, you have to make sure to frame/ask the question correctly, which I think includes the point about "acceptable" vs. "beneficial" (and, more generally, about taking utilitarianism as the foundational framework).
Samuel Shadrach · 2mo

Makes sense. For personal use, I can definitely see why acceptable and beneficial are different. I'm not sure how much the distinction matters for a society or hivemind: whatever seems beneficial for society is what it should enforce norms towards and also deem acceptable. I feel like assuming utilitarianism will alienate people; it might be better to keep the societal goal and corresponding norms more loosely and broadly defined. That way, every individual can evaluate whether this society enforces social norms useful enough to their own personal goals, both for themselves and for society, that they find more value in accepting and further enforcing these norms than in rebelling against them. It's like how the effective altruism forum doesn't explicitly refer to utilitarianism in its intro, even though the concepts overlap.
2 comments

> - Good - ??


I don't think seatbelts are very controversial, though admittedly they're a more minor example than the things you've mentioned.

Mandatory vaccinations and certain other restrictions on children's rights (e.g., joining the military, smoking, gambling) are usually regarded as appropriate as well, though those cases are more debatable.

I was thinking of mandatory vaccinations, but they seemed more like a case of requirements for community health than of paternalism proper (which is focused on the wellbeing of the target of enforcement).