I finally finished my series of 5 essays on my life philosophy (Valuism)! In a nutshell, Valuism suggests that you work to figure out what you intrinsically value, and then that you try to use effective methods to create more of what you intrinsically value. 

While Valuism seems simple and straightforward at first glance, few people seem to approach life this way. It ends up having a number of surprising implications and, I think, provides a perspective that can help shed light on a number of different domains. 

Interestingly, Effective Altruism is implied by Valuism plus a specific set of strong intrinsic values that most Effective Altruists have (reducing suffering + truth).
 

Here is the sequence of essays, if you feel like checking them out: 

 

Part 1: Doing what you value as a life philosophy – an introduction to Valuism 

 

Part 2: What to do when your values conflict? 

 

Part 3: Should Effective Altruists be Valuists instead of utilitarians? 

 

Part 4: What would a robot value? An analogy for human values 

 

Part 5: Valuism and X: how Valuism sheds light on other domains 

 

A big shoutout goes to Amber Dawn Ace who wrote these essays with me.


I feel like 'valuism' is redefining utilitarianism, and the contrasts to utilitarianism don't seem very convincing. For instance, you define valuism as noticing what you intrinsically value and trying to take effective action to increase that. This seems identical to a utilitarian whose utility function is composed of what they intrinsically value.

I think you might be defining utilitarianism such that utilitarians are only allowed to care about one thing? Which is sort of true, in that utilitarianism generally advocates converting everything into a common scale, but that common scale can measure multiple things. My utility function includes happiness, suffering, beauty, and curiosity as terms. This is totally fine, and a normal part of utilitarian discourse. Most utilitarians I've talked to are total preference utilitarians; I've never met a pure hedonistic utilitarian.

Likewise, I'm allowed to maintain my happiness and mental health as an instrumental goal for maximizing utility. This doesn't mean that utilitarianism is wrong, it just means we can't pretend we can be utility-maximizing soulless robots. I feel like there is a post on folks realizing this at least every few months. Which makes sense! It's an important realization!

Also, utilitarianism doesn't need objective morality any more than any other moral philosophy does, so I didn't understand your objection there.

You're the one who's redefining utilitarianism, which is commonly defined as the maximization of the happiness and well-being of conscious beings. You can consider integrating other terminal values into what you'd like to do, but you're not really discussing utilitarianism at that point, as the term is commonly used. For instance, Greenberg points to truth as a potential terminal value, which would be at odds with utilitarianism as it's typically used.

I think Singer is a hedonic utilitarian, for what it's worth, and I think I subscribe to it while acknowledging that weighing the degrees of positive and negative subjective experiences of many kinds is daunting.

As for having other instrumental values (which is why I don't really think the "burnout" argument is very good as an argument against utilitarianism), I agree with you on that one.

I agree that 'utilitarianism' often gets elided into meaning a variation of hedonic utilitarianism. I would like to hold philosophical discourse to a higher bar. In particular, once someone mentions hedonic utilitarianism, I'm going to hold them to the standard of separating out hedonic utilitarianism and preference utilitarianism, for example.

I agree hedonic utilitarians exist. I'm just saying the utilitarians I've talked to always add more terms than pleasure and suffering to their utility function. Most are preference utilitarians.

Preference utilitarianism and valuism don't have much in common.

Preference utilitarianism: maximize the interests/preferences of all beings impartially.

First, preferences and intrinsic values are not the same thing. For instance, you may have a preference to eat Cheetos over eating nachos, but that doesn't mean you intrinsically value eating Cheetos or that eating Cheetos necessarily gets you more of what you intrinsically value than eating nachos will. Human choice is driven by a lot of factors other than just intrinsic values (though intrinsic values play a role).

Second, preference utilitarianism is not about your own preferences, it's about the preferences of all beings impartially.

Hey Spencer, really enjoyed these posts. I found it insightful to mentally separate out actions related to mimicry, instinctive behavior, habits, and other sources from actions actually connected to intrinsic values, loosely defined as values that stand the test of thought experiment. On a personal level, the simplicity and lightness of the philosophy resonate with me. 

I'm curious about the downsides of valuism. In your opinion, what are some good critiques against valuism? My initial thoughts: 

  • It's a subjective, individual life philosophy that doesn't set a stake in any area: 
    • This might be a feature given the diversity of minds and the difficulty in 100% proving aspects of moral realism. 
  • It is interesting and potentially concerning that a valuist can be anything, e.g., a terrible person who would like to make the world suffer. The focus on effectiveness in this context isn't ideal. 
  • In a similar vein, it's plausible that spreading valuism can lead to unintended outcomes because of the focus on individualism: 
    • Culture, mimicry, etc may lead to better collective and societal outcomes in some contexts. Valuism emphasizes personal interpretations of value which can be conflicting between individuals and societies. 

Other than these angles - what would be good reasons for a person to explicitly say they're not a valuist? It seems daunting for someone to disclaim pursuing intrinsic values, so I'd like to understand a plausible situation better. 

From the AI "engineering" perspective, values/valued states are "rewards" that the agent adds themselves in order to train (in RL style) their reasoning/planning network (i.e., generative model) to produce behaviours that are adaptive but also that they like and find interesting (aesthetics). This RL-style training happens during conscious reflection.

Under this perspective, but also more generally, you cannot distinguish between intrinsic and instrumental values because intrinsic values are instrumental to each other, but also because there is nothing "intrinsic" about self-assigned reward labels. In the end, what matters is the generative model that is able to produce highly adaptive (and, ideally, interesting/beautiful) behaviours in a certain range of circumstances.

I think your confusion about the ontological status of values is further corroborated by this phrase from the post: "people are mostly guided by forces other than their intrinsic values [habits, pleasure, cultural norms]". Values are not forces, but rather inferences about some features of one's own generative model (that help to "train" this very model in "simulated runs", i.e., conscious analysis of plans and reflections). However, the generative model itself is effectively the product of environmental influences, development, culture, physiology (pleasure, pain), etc. Thus, ultimately, values are not somehow distinct from all these "forces", but are indirectly (through the generative model) derived from these forces.

Under the perspective described above, valuism appears to switch the ultimate objective ("good" behaviour) for "optimisation of metrics" (values). Thus, there is a risk of Goodharting. I also agree with dan.pandori who noted in another comment that valuism pretty much redefines utilitarianism, whose equivalent in AI engineering is RL.
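The Goodhart risk can be illustrated with a toy optimisation sketch (my own hypothetical model, not from the post): an agent hill-climbs a noisy proxy metric (its self-assigned "values") and, because the proxy differs from the true objective ("good behaviour"), optimising the proxy hard leaves a shortfall on the true objective.

```python
import random

random.seed(0)
DIM = 10

# True objective: an unknown "good behaviour" score (hypothetical toy model).
true_w = [random.gauss(0, 1) for _ in range(DIM)]
# Proxy "values": a noisy, self-assigned approximation of the true objective.
proxy_w = [w + random.gauss(0, 0.5) for w in true_w]

def score(weights, x):
    # Linear benefit minus a quadratic cost, so each objective has a unique optimum.
    return sum(w * xi for w, xi in zip(weights, x)) - 0.1 * sum(xi * xi for xi in x)

# Hill-climb the proxy metric and watch what happens to the true objective.
x = [0.0] * DIM
for _ in range(500):
    cand = [xi + random.gauss(0, 0.3) for xi in x]
    if score(proxy_w, cand) > score(proxy_w, x):
        x = cand

true_opt = [w / 0.2 for w in true_w]  # exact maximiser of the true objective
print("proxy score reached:", round(score(proxy_w, x), 2))
print("true-objective shortfall:", round(score(true_w, true_opt) - score(true_w, x), 2))
```

The shortfall is nonzero whenever the proxy and true weights differ: the point that maximises the proxy is not the point that maximises the true objective, which is the Goodhart failure mode in miniature.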

You may say that I suggest an infinite regress, because how "good behaviour" is determined, other than through "values"? Well, as I explained above, it couldn't be through "values", because values are our own creation within our own ontological/semiotic "map". Instead, there could be the following guides to "good behaviour":

  • Good old adaptivity (survival) [roughly corresponds to so-called "intrinsic value" in expected free energy functional, under Active Inference]
  • Natural ethics, if it exists (see the discussion here: https://www.lesswrong.com/posts/3BPuuNDavJ2drKvGK/scientism-vs-people#The_role_of_philosophy_in_human_activity). If "truly" scale-free ethics couldn't be derived from basic physics alone, there is still an evolutionary/game-theoretic/social/group stage on which we can look for an "optimal" ethics arrangement of the agent's behaviour (and, therefore, values that should help to train these behaviours), whose "optimality", in turn, is derived either from adaptivity or aesthetics at the higher system level (i.e., the group level).
  • Aesthetics and interestingness: there are objective, information-theoretic ways to measure these, see Schmidhuber's works. Also, this roughly corresponds to "epistemic value" in expected free energy functional under Active Inference.

If the "ultimate" objective is the physical behaviour itself (happening in the real world), not abstract "values" (which appear only in the agent's mind), I think Valuism could be cast as any philosophy that emphasises creation of a "good life" and "right action", such as Stoicism, plus some extra emphasis on reflection and meta-awareness, though I think Stoicism already puts significant emphasis on these.

The way you define values in your comment:

"From the AI "engineering" perspective, values/valued states are "rewards" that the agent adds themselves in order to train (in RL style) their reasoning/planning network (i.e., generative model) to produce behaviours that are adaptive but also that they like and find interesting (aesthetics). This RL-style training happens during conscious reflection."

is just something different than what I'm talking about in my post when I use the phrase "intrinsic values." 

From what I can tell, you seem to be arguing:

 

[paraphrasing] "In this one line of work, we define values this way", and then jumping from there to "therefore, you are misunderstanding values," when actually I think you're just using the phrase to mean something different than I'm using it to mean. 
