I finally finished my series of five essays on my life philosophy (Valuism)! In a nutshell, Valuism suggests that you work to figure out what you intrinsically value, and then try to use effective methods to create more of what you intrinsically value. 

While it seems simple and straightforward at first glance, few people seem to approach life this way, and Valuism turns out to have a number of surprising implications and, I think, provides a perspective that can shed light on many different domains. 

Interestingly, Effective Altruism is implied by Valuism plus a specific set of strong intrinsic values that most Effective Altruists have (reducing suffering + truth).
 

Here is the sequence of essays, if you feel like checking them out: 

Part 1: Doing what you value as a life philosophy – an introduction to Valuism 

Part 2: What to do when your values conflict? 

Part 3: Should Effective Altruists be Valuists instead of utilitarians? 

Part 4: What would a robot value? An analogy for human values 

Part 5: Valuism and X: how Valuism sheds light on other domains 

A big shoutout goes to Amber Dawn Ace, who wrote these essays with me.

Comments



I feel like 'valuism' is redefining utilitarianism, and the contrasts to utilitarianism don't seem very convincing. For instance, you define valuism as noticing what you intrinsically value and trying to take effective action to increase that. This seems identical to a utilitarian whose utility function is composed of what they intrinsically value.

I think you might be defining utilitarianism such that utilitarians are only allowed to care about one thing? Which is sort of true, in that utilitarianism generally advocates converting everything into a common scale, but that common scale can measure multiple things. My utility function includes happiness, suffering, beauty, and curiosity as terms. This is totally fine, and a normal part of utilitarian discourse. Most utilitarians I've talked to are total preference utilitarians; I've never met a pure hedonistic utilitarian.
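To make the "common scale with multiple terms" point concrete, here is a minimal sketch (the term names, weights, and numbers are made up purely for illustration, not taken from the post):

```python
# Illustrative sketch only: a utility function whose single common scale
# aggregates several intrinsically valued quantities via (made-up) weights.

def utility(world_state, weights=None):
    """Score a world state by summing weighted terms onto one scale."""
    weights = weights or {
        "happiness": 1.0,   # more is better
        "suffering": -1.5,  # more is worse
        "beauty": 0.3,
        "curiosity": 0.2,
    }
    return sum(w * world_state.get(term, 0.0) for term, w in weights.items())

# Two hypothetical outcomes compared on the same scale:
print(utility({"happiness": 5, "suffering": 1, "beauty": 2, "curiosity": 3}))  # ~4.7
print(utility({"happiness": 3, "suffering": 0, "beauty": 4, "curiosity": 1}))  # ~4.4
```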

Likewise, I'm allowed to maintain my happiness and mental health as an instrumental goal for maximizing utility. This doesn't mean that utilitarianism is wrong; it just means we can't pretend to be utility-maximizing, soulless robots. I feel like there is a post on folks realizing this at least every few months. Which makes sense! It's an important realization!

Also, utilitarianism doesn't need objective morality any more than any other moral philosophy does, so I didn't understand your objection there.

You're the one who's redefining utilitarianism, which is commonly defined as the maximization of the happiness and well-being of conscious beings. You can consider integrating other terminal values into what you'd like to do, but at that point you're not really discussing utilitarianism as it's commonly used. For instance, Greenberg points to truth as a potential terminal value, which would be at odds with utilitarianism as it's typically used.

I think Singer is a hedonic utilitarian, for what it's worth, and I think I subscribe to it while acknowledging that weighing the degrees of positive and negative subjective experiences of many kinds is daunting.

As for having other instrumental values (which is why I don't really think the "burnout" argument is very good against utilitarianism), I agree with you on that one.

I agree that 'utilitarianism' often gets elided into meaning a variation of hedonic utilitarianism. I would like to hold philosophical discourse to a higher bar. In particular, once someone mentions hedonic utilitarianism, I'm going to hold them to the standard of separating out hedonic utilitarianism and preference utilitarianism, for example.

I agree hedonic utilitarians exist. I'm just saying the utilitarians I've talked to always add more terms than pleasure and suffering to their utility function. Most are preference utilitarians.

Preference utilitarianism and valuism don't have much in common.

Preference utilitarianism: maximize the interests/preferences of all beings impartially.

First, preferences and intrinsic values are not the same thing. For instance, you may have a preference to eat Cheetos over eating nachos, but that doesn't mean you intrinsically value eating Cheetos or that eating Cheetos necessarily gets you more of what you intrinsically value than eating nachos will. Human choice is driven by a lot of factors other than just intrinsic values (though intrinsic values play a role).

Second, preference utilitarianism is not about your own preferences, it's about the preferences of all beings impartially.

Hey Spencer, really enjoyed these posts. I found it insightful to mentally separate out actions related to mimicry, instinctive behavior, habits, and other sources from actions actually connected to intrinsic values, loosely defined as values that stand the test of thought experiment. On a personal level, the simplicity and lightness of the philosophy resonate with me. 

I'm curious about the downsides of valuism. In your opinion, what are some good critiques against valuism? My initial thoughts: 

  • It's a subjective, individual life philosophy that doesn't stake out a position in any particular area: 
    • This might be a feature, given the diversity of minds and the difficulty of 100% proving aspects of moral realism. 
    • It is interesting, and potentially concerning, that a valuist can be anything, e.g. a terrible person who would like to make the world suffer. The focus on effectiveness in this context isn't ideal. 
  • In a similar vein, it's plausible that spreading valuism can lead to unintended outcomes because of the focus on individualism: 
    • Culture, mimicry, etc. may lead to better collective and societal outcomes in some contexts. Valuism emphasizes personal interpretations of value, which can conflict between individuals and societies. 

Other than these angles, what would be good reasons for a person to explicitly say they're not a valuist? It seems daunting for someone to disclaim pursuing intrinsic values, so I'd like to understand a plausible situation better. 

From the AI "engineering" perspective, values/valued states are "rewards" that the agent adds themselves in order to train (in RL style) their reasoning/planning network (i.e., generative model) to produce behaviours that are adaptive but also that they like and find interesting (aesthetics). This RL-style training happens during conscious reflection.
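A toy sketch of this mechanism (everything here is invented for illustration: the action names, the self-assigned reward, and the update rule) might look like:

```python
import random

# Toy illustration: an agent samples candidate plans from its own generative
# model, scores them with self-assigned "value" rewards during reflection, and
# reinforces the model toward plans it scored highly (RL-style, with the reward
# supplied by the agent itself rather than by the environment).

ACTIONS = ["rest", "work", "explore"]

# The "generative model": one preference weight per action, used to sample plans.
model = {a: 1.0 for a in ACTIONS}

def sample_plan(length=3):
    """Sample a plan (sequence of actions) in proportion to the current weights."""
    return random.choices(ACTIONS, weights=[model[a] for a in ACTIONS], k=length)

def self_assigned_reward(plan):
    """The agent's own 'value' labels: some adaptivity plus a novelty/aesthetics bonus."""
    adaptivity = 1.0 * plan.count("work") + 0.5 * plan.count("rest")
    interestingness = 0.3 * len(set(plan))  # crude stand-in for aesthetics
    return adaptivity + interestingness

def reflect(iterations=200, lr=0.05):
    """'Conscious reflection': simulated runs that train the model on self-set rewards."""
    for _ in range(iterations):
        plan = sample_plan()
        reward = self_assigned_reward(plan)
        for action in plan:
            model[action] += lr * reward  # reinforce actions appearing in well-valued plans

reflect()
print(model)  # weights drift toward whatever the agent itself labelled as valuable
```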

Under this perspective, but also more generally, you cannot distinguish between intrinsic and instrumental values because intrinsic values are instrumental to each other, but also because there is nothing "intrinsic" about self-assigned reward labels. In the end, what matters is the generative model that is able to produce highly adaptive (and, ideally, interesting/beautiful) behaviours in a certain range of circumstances.

I think your confusion about the ontological status of values is further corroborated by this phrase from the post: "people are mostly guided by forces other than their intrinsic values [habits, pleasure, cultural norms]". Values are not forces, but rather inferences about some features of one's own generative model (that help to "train" this very model in "simulated runs", i.e., conscious analysis of plans and reflections). However, the generative model itself is effectively the product of environmental influences, development, culture, physiology (pleasure, pain), etc. Thus, ultimately, values are not somehow distinct from all these "forces", but are indirectly (through the generative model) derived from these forces.

Under the perspective described above, valuism appears to switch the ultimate objective ("good" behaviour) for "optimisation of metrics" (values). Thus, there is a risk of Goodharting. I also agree with dan.pandori who noted in another comment that valuism pretty much redefines utilitarianism, whose equivalent in AI engineering is RL.

You may say that I suggest an infinite regress, because how is "good behaviour" determined, other than through "values"? Well, as I explained above, it couldn't be through "values", because values are our own creation within our own ontological/semiotic "map". Instead, there could be the following guides to "good behaviour":

  • Good old adaptivity (survival) [roughly corresponds to the so-called "intrinsic value" in the expected free energy functional, under Active Inference]
  • Natural ethics, if it exists (see the discussion here: https://www.lesswrong.com/posts/3BPuuNDavJ2drKvGK/scientism-vs-people#The_role_of_philosophy_in_human_activity). Even if a "truly" scale-free ethics couldn't be derived from basic physics alone, there is still the evolutionary/game-theoretic/social/group stage on which we can look for an "optimal" ethical arrangement of the agent's behaviour (and, therefore, of the values that should help to train these behaviours), whose "optimality", in turn, is derived either from adaptivity or aesthetics at the higher system level (i.e., the group level).
  • Aesthetics and interestingness: there are objective, information-theoretic ways to measure these; see Schmidhuber's works. Also, this roughly corresponds to the "epistemic value" in the expected free energy functional under Active Inference (a rough sketch of that functional is below).
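For reference, the expected free energy functional mentioned above is commonly written (this is my paraphrase of the standard Active Inference formulation; exact notation varies across papers) as a sum of a pragmatic term and an epistemic term:

$$
G(\pi) \;=\; \underbrace{-\,\mathbb{E}_{Q(o \mid \pi)}\big[\ln P(o \mid C)\big]}_{\text{pragmatic term: preferred ("adaptive") outcomes}} \;\; \underbrace{-\,\mathbb{E}_{Q(o \mid \pi)}\big[D_{\mathrm{KL}}\big(Q(s \mid o, \pi)\,\|\,Q(s \mid \pi)\big)\big]}_{\text{epistemic term: expected information gain}}
$$

Minimizing $G$ over policies $\pi$ jointly favours outcomes that match the prior preferences $C$ and observations that reduce uncertainty about hidden states $s$, which is roughly the adaptivity/interestingness split described above.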

If the "ultimate" objective is the physical behaviour itself (happening in the real world), not abstract "values" (which appear only in agent's mind), I think Valuism could be cast as any philosophy that emphasises creation of a "good life" and "right action", such as Stoicism, plus some extra emphasis on reflection and meta-awareness, albeit I think Stoicism already puts significant emphasis on these.

The way you define values in your comment:

"From the AI "engineering" perspective, values/valued states are "rewards" that the agent adds themselves in order to train (in RL style) their reasoning/planning network (i.e., generative model) to produce behaviours that are adaptive but also that they like and find interesting (aesthetics). This RL-style training happens during conscious reflection."

is just something different than what I'm talking about in my post when I use the phrase "intrinsic values." 

From what I can tell, you seem to be arguing:

 

[paraphrasing] "In this one line of work, we define values this way", and then jumping from there to "therefore, you are misunderstanding values," when actually I think you're just using the phrase to mean something different than I'm using it to mean. 
