I’ve often heard, as a critique of Effective Altruism, the argument that certain goods—aesthetic, familial, cultural, communal, etc.—can’t, or shouldn’t, be quantified, and that the implicit utilitarianism of EA forces us to do just that. A friend once told me that “it’s like forcing someone to pick a favorite child”; even if (in your heart of hearts) you could pick a favorite child, that act in and of itself seems not only to create unnecessary psychological turmoil (i.e., negative utility), but also to be something most people can’t answer through bare-bones empiricism. For example, does it really seem feasible to propose that we can even somewhat accurately determine the value of a Van Gogh painting, or of the heartbreak of losing a family member? 

I’ve heard many prominent EAs argue (in classic Singerian fashion) that even though it may seem difficult, or even cruel, we have a utilitarian responsibility to make these harsh valuations for the sake of accurately defining our utilitarian priorities. They might say that even though you may not be able to put a specific utility value on Van Gogh’s Starry Night, the risk of failing to do so is that your $5,000 donation to the Museum of Modern Art could instead have improved innumerable human, animal, or future lives. In short, they argue that if you don’t assign these goods some reasonable valuation, there will be no empirical backing for the need to maximize the utility value of your donation (since relative utility, by definition, entails comparing values). If we decide not to give these goods any sort of utilitarian valuation, how can we decide where our resources will do more good? 

I am not making an argument against utilitarianism. What I will posit is a way to make an argument for EA while sidestepping the trepidation many people feel about assigning values to seemingly in-evaluable goods. Instead of arguing through the standard utilitarian template, I propose that the argument be made from the position of fairness and expected equal potential. 

By expected equal potential I mean that everyone can be expected to have the capability to appreciate things such as aesthetics, family, culture, community, etc.; and I believe that most people will accept this premise. Therefore, even if you think that you can’t put a value on a Van Gogh painting, it seems reasonable to assume that most people have the same ability—given the right opportunities—to derive in-evaluable value from that painting. The same goes for interpersonal human goods; it would be unusual to hear someone say that not everyone can love their family or culture. Through this assumption of expected equal potential—i.e., that all people, at minimum, have the capability to appreciate these in-evaluable goods—we can make the further argument that it is normatively fair to try to maximize the number of people with the opportunity to appreciate them. For example, premature death, infant mortality, and chronic illness are all strongly correlated with extreme poverty. So even if you don’t believe that we can quantify the value of family, it still appears fair to posit that we should try to provide as many people as we can with the ability to enjoy their families (which also makes this argument compatible with utilitarianism, with the utility function being the maximization of opportunity). 

Therefore, I see this argument as valid within the parameters of both utilitarianism and opposing ethical systems of justice and fairness—namely, certain branches of deontology. I hope this article shows that when someone says they aren’t a fan of Effective Altruism because they cherish goods that they can’t, or don’t want to, assign a set value to, there is still a way to convince them of EA while accepting this premise, through the argument of fairness and expected equal potential.

I have made this argument to 5+ people and have, so far, had success each time. However, I am interested to hear if other members of the community have approached this criticism of EA in different ways, or if they disagree with my approach. 



Generally I find it helps to separate cost from value (see The Value Of A Life) and also to point out that our decisions carry implicit value regardless of whether we articulate that value. By choosing to donate $5,000 to an art gallery rather than $5,000 to AMF to save a life (in expectation), I am implicitly valuing the former over the latter. It helps to articulate what it is that we value, to understand whether these decisions are consistent with our beliefs.

I do believe that I probably value more things than just the expected utility of sentient beings. However, I budget a significant portion of my time and money to optimising the expected utility of sentient beings, while also carving out time and money to be spent on other things (including my own hedonism, aesthetic preferences, and special obligations to people in my life, etc.).

Also relevant is Julia's post You Have More Than One Goal And That's Fine.

I agree, and that is essentially the rationale I employ. I personally think I could put a value on every aspect of my life, therefore subverting the notion that implicit values can't be made explicit. 

However, I think the problem is that for some people your answer will be a non-starter. They might not want to assign their implicit values an explicit value (and your response would therefore shoo them away). So what I'm proposing is allowing them to keep their implicit values implicit while showing them that you can still be an EA if you accept that other people have implicit values as well. In honesty, it's barely a meta-ethical claim, and more so an explication of how EA can jibe with various ethical frameworks. 

Hm, what about saying that if one values different objectives, they can still try to do the most good with their spare resources, forming some kind of conditional or weighted average in their mind? For example, one can think that if they enjoyed Van Gogh 'this much,' they can then focus on family 'that much,' and then make philanthropic investments that enable other people to do the same 'emotions-based prioritization' (such as caring for a family's basic needs and enjoying the aesthetics of family presentation), or that enable communities to act in this way (such as considering whether non-human animals should receive some attention, whether economics should be improved, and whether time to enjoy relationships should be allocated). This may be more feasible in less industrialized areas, which may make more decisions by 'emotional consensus.'

The problem here is that it's still overtly utilitarian, just with a bit more wiggle room. It still forces people to weigh one thing against another, which is what I think they might be uncomfortable doing. Buck Shlegeris says 'everything is triage,' and I think you'd agree with this sentiment. However, I don't think everyone likes to think this way, and I don't want that hiccup to be the reason they don't investigate EA further. 

Hmmm... but is it more so about the presentation of relative power between the one who offers EA and the one who contemplates it as a reasonable (or not) framework? For example, if Buck Shlegeris (or anyone else) offers that he has "the utmost respect in [his] heart" for "dumb" people, while implying that different thinking should be dismissed or ridiculed (asserting dominance by fear rather than inviting critical thinking), then regardless of the framework that the offering person supports, the suggestion may be less well accepted among people who seek to cooperate.

So, if 'everything is triage' is meant to (or happens to, in interpretation) allude to the notion that 'an exclusive group of decisionmakers does not have time for the emotional requests of a much larger group, whose persistent appeals can be perceived as almost disgusting,' then a utilitarian framework, even one with highly significant wiggle room, may be accepted less well than when 'everything is triage' connotes 'everyone is a decisionmaker; decisions are challenging; we have to take care of our close ones but also others; requests are received well and always welcome, but there is only so much one can do; perhaps the best is to inspire and encourage,' for example.

So, sure, I think that any framing which shows that the one who pitches EA sincerely cares about the perspective of the other person or people, but is also confident that EA is a great option, should work. What you are suggesting can work for many people, perhaps the stereotype of older, affluent decisionmakers who seek to be seen as righteous and as caring for others/their group by almost privileging them. It should actually be brilliant for appealing to such decisionmakers.

What I was suggesting can appeal to non-decisionmakers: those who perhaps do not much enjoy Van Gogh because they would prefer to save the gallery entrance fee and the time spent there to develop relationships with others, and who may understand decisionmaking more as an emotional consensus. Your pitch would not work there; those people would feel like they have nothing to contribute.

Between your pitch and, say, a moral circles/triage pitch, the moral circles/triage pitch invites decisionmakers' and others' critical thinking about institutions/standards, by reason. It just makes sense to get a bit organized about the impact. There is nothing personal, no 'failed hopes and dreams' of 'future potential,' and no asking others to solve 'one's concerns' by being on their side of fairness. Thus, your framing can attract a group of people who could come across as seeking to emotionally influence others whom they expect not to 'yield' because they do not have to. That is a movement stagnation risk.

So, I suggest moral circles/triage/high impact with some resources, those which can be well spared, as the option one offers when pitching EA, with the reason for pitching it being that it actually benefits the person and is quite cool; it is almost a personal tip, with absolutely no feelings or judgment regarding one's thinking about such a tip.