For me, basically every other question around effective altruism is less interesting than this basic one of moral obligation. (Scott Alexander)

In the spirit of focusing on the basic cruxes of EA, there's only one conclusion that I find repugnant. It goes as follows:

  • P1: We should value outcomes according to the total welfare experienced in each outcome.
  • P2: People in poor countries have more difficult and immiserating lives than people in rich countries.[1]
  • C1: A person in a poor country whose life is saved experiences less welfare than a person in a rich country whose life is saved.
  • C2: All else equal, it is better to save the life of a rich country resident than a poor country resident.[2]

"All else equal" is obviously an enormous caveat, and the reason why in practice we do not actually prioritize interventions in the US over those in India: it is much cheaper and easier to save lives in poor countries, because the causes of death there are more preventable.

But I find the conclusion disturbing nonetheless, and as far as I can tell, it follows from any strain of utilitarianism (except preference utilitarianism). If we could save a life in the US for the same cost as a life in India, would we really prioritize the former because a life in the US is happier than a life in India? What if it cost slightly more?

These cross-country comparisons become even more disturbing the further you push them. Nick Beckstead:

"To take another example, saving lives in poor countries may have significantly smaller ripple effects (on the long-term future) than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal."

This is a different route to the same conclusion, and it's even less palatable. If we value saving lives instrumentally - as ripple effects for the far future, or as ways to deliver positive life experiences - then we cannot escape the conclusion that some lives are more valuable to save than others.


  1. This is sometimes taken as an implication of the literature showing that under revealed preferences, poor countries have a lower value of statistical life (VSL) than rich countries. This is inaccurate: since VSL = (utility of life)/(utility of money), a low VSL can come from a low utility of life, or from a high utility of money in poor countries. Rather, this premise is more justified by subjective wellbeing measures. ↩︎

  2. You can make a symmetric argument showing that it is better to save the life of an able-bodied person as opposed to a disabled person. That feels repugnant in the same ways as this argument does. ↩︎
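Footnote 1's distinction can be made concrete with a toy calculation. This is purely an illustrative sketch with invented utility numbers, not an empirical claim:

```python
# Toy sketch of footnote 1: VSL = (utility of life) / (utility of money).
# A low VSL in a poor country is consistent with an EQUAL utility of life,
# if the marginal utility of money is higher there. All numbers invented.

def vsl(utility_of_life: float, utility_of_money: float) -> float:
    """Value of a statistical life as a ratio of utilities."""
    return utility_of_life / utility_of_money

# Same utility of life in both countries...
rich_country = vsl(utility_of_life=100.0, utility_of_money=0.5)  # 200.0
poor_country = vsl(utility_of_life=100.0, utility_of_money=2.0)  # 50.0

# ...yet the poor country's VSL comes out 4x lower, driven entirely by
# the higher marginal utility of money, not by a less valuable life.
print(rich_country, poor_country)
```

So a revealed-preference VSL gap alone cannot establish premise P2; that is why the footnote leans on subjective wellbeing measures instead.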

Comments

It's indisputable that some lives are more instrumentally valuable (to save) than others.  So if you hold that all lives are equally intrinsically valuable, it follows that some lives are all-things-considered more valuable to save than others (due to having the same intrinsic value, but more instrumental value).

To avoid that "uncomfortable"-sounding conclusion, you would need to reject the second premise (that all lives are equally intrinsically valuable).  That is, you would have to claim that some lives are intrinsically more valuable than others.  And that is surely a much more uncomfortable conclusion!

I think we should conclude from this that there's actually nothing remotely morally objectionable about saying that some lives are more valuable to save for purely instrumental reasons.  The thing to avoid is claiming that some lives are intrinsically more important.  It "sounds bad" to say "some lives are more valuable to save than others" because it sounds like you're claiming that some lives are inherently more valuable than others.  So it's important to explicitly cancel the implicature by adding the "for purely instrumental reasons" clause.

But once clarified, it's a perfectly innocuous claim.  Anyone who still thinks it sounds bad at that point needs to think more clearly.

I agree that some lives will go on to have a higher impact on others. I disagree with OP that you can predict which.

But the point from OP is that it's unacceptable to have favourites in terms of the importance of their lives, for whatever reason. So if you think some lives are predictably instrumentally more valuable, it follows that a good moral theory should ignore (some of) the instrumental value of saving a life.

The OP spoke of evaluative claims ("it is better to..." and "the conclusion that some lives are more valuable..."), so I think it's important to be clear that those axiological claims are not reasonably disputable, and hence not reasonably regarded as "repugnant" or whatever.

Now, it's a whole 'nother question what should be done in light of these evaluative facts. One could argue that it's "unacceptable" to act upon them; that one should ignore or disregard facts about instrumental value for the purposes of deciding which life to save.

The key question then is: why? Most naturally, I think, one may worry that acting upon such differences might reinforce historical and existing social inequalities in a way that is more detrimental on net than the first-order effects of doing more immediate good.  If that worry is empirically accurate, then even utilitarians will agree with the verdict that one should "screen off" considerations of instrumental value in one's decision procedure for saving lives (just as we ordinarily think doctors etc. should).  Saving the most (instrumentally) valuable life might not be the best thing to do, if the act itself--or the process by which it was decided--has further negative consequences.

Again, per utilitarianism.net:

[T]here are many cases in which instrumental favoritism would seem less appropriate. We do not want emergency room doctors to pass judgment on the social value of their patients before deciding who to save, for example. And there are good utilitarian reasons for this: such judgments are apt to be unreliable, distorted by all sorts of biases regarding privilege and social status, and institutionalizing them could send a harmful stigmatizing message that undermines social solidarity. Realistically, it seems unlikely that the minor instrumental benefits to be gained from such a policy would outweigh these significant harms. So utilitarians may endorse standard rules of medical ethics that disallow medical providers from considering social value in triage or when making medical allocation decisions. But this practical point is very different from claiming that, as a matter of principle, utilitarianism's instrumental favoritism treats others as mere means [or is otherwise inherently objectionable]. There seems no good basis for that stronger claim.

I really like the last paragraph pointing out the risk of perpetuating a privileged situation based on bias. Thanks for sharing it.

(For related discussion, see the 'Instrumental Favoritism' section of the 'Mere Means' objection on utilitarianism.net)

It seems a little strange to call this a repugnant conclusion, given that this priority has been shared by the vast majority of people both historically and today. As far as I can see, almost no-one thinks that we should be completely indifferent about which person we save. I don't think anyone really believes it is equally important to save e.g. a terminally ill pedophile as it is to save a happy and healthy doctor who has many friends and helps many people.

I certainly agree it has been shared by the vast majority of people historically and today. I do not think that's a sufficient justification. The vast majority of people historically and today think that animals don't matter, but we don't accept that.

I think most EAs would say reflexively that they care about life-years equally, independent of income (though not of health). I think this conclusion would be uncomfortable to those people. There are other ways to discount the life-years you save - the pedophile vs doctor example points to using some notion of virtue as a criterion for whom we should save. I think the income difference should be deeply uncomfortable to people because of how it connects to a history (and continuing practice!) of devaluing the lives of people far away from us.

[anonymous]

Wouldn't health have the same problems as income? E.g. that it connects to a history (and continuing practice) of devaluing the lives of people who are not as healthy or able?

One point on instrumentally valuing life is that this is essentially what happens under triage. I think a large part of the reason we can value equality is that right now our societies are stable and prosperous, and hopefully won't face a crisis. But if we do, and the crisis is a GCR or X-risk, then certain people really are more valuable.

I guess it's on me for putting "repugnant conclusion" in the title, but what does this have to do with my post? My post is not about the Repugnant Conclusion as discussed in population ethics. It is just, in very literal terms, a conclusion which is repugnant.

Edit: I've changed the title from "repugnant conclusions" to "uncomfortable conclusions"

I've edited my comment to include instrumentally valuing life as well. I'll cut out the repugnant conclusion part of my comment.

On your triage point, I think we can and do triage based on other criteria - namely, how much it costs to save a life. That feels a lot more in the spirit of triage than this specific comparison, which is much closer to a value judgment about what kinds of lives are worth living. Are we really okay with just judging that the lives of other people are less worth living than our own?

On the GCR point, that's fair enough - it is the argument that Beckstead makes. The post is just to say that I find it uncomfortable, and plausibly an argument that a less WEIRD and more international EA would reject. But I'm afraid that's just my wild speculation.

I think the reverse could be argued as well: a rich person's life costs the rest of society and the globe more in terms of resources and suffering, and the rich are more likely to spend their time in idle diversions; therefore, saving the life of someone in a developing country is more valuable than saving the life of someone in an industrialized country. Either way you argue, though, you miss another perspective.

Another perspective, and a plausible basis for comparison, is how much people value their own lives. By that standard, people in every country, rich or poor, value their own lives equally as far as I know. Accordingly, since their interest in saving their own lives is equal, the moral value of serving each person's interests is equal as well.

Deciding whom to save is always an uncomfortable decision because it puts us in front of someone who will die because of our choice. But not making the decision at all is even worse, because then both will die. That is why we have triage systems that help us make the decision consistently with our moral principles.

Whenever we can increase the number of QALYs using the same resources, the decision is simple. Whenever we can get the same number of QALYs with fewer resources, the decision is also simple. Things get complicated when the QALY/$ ratio is the same. I don't know what criteria should be used next. To me it also seems very problematic to use the economic position of the affected persons. One could argue that the better-positioned person will (potentially) get more welfare, and that choosing him or her therefore maximizes absolute welfare; but one could alternatively argue that the person who will (potentially) transform our common resources into personal welfare most efficiently is the right choice. I develop the example in the rest of this reply, without taking a stance on which choice is best.

[Update: I edited this reply to include the introduction above this line, as it provides more context]

What is the cost of saving a life? In my view the resources to be counted are the sum of:

  1. Resources needed to save the life at a given point in time
  2. Resources needed to keep the person alive with the expected level of welfare during the rest of her/his life

Even in a hypothetical scenario where saving the life (1) of a rich person costs the same as saving the life of a poor person, we would still have to consider how many resources the rich person will consume (2) compared to the poor person.

The life of the average person in Luxembourg takes 15 times more resources than the life of the average person in Gambia [1]; therefore, their life expectancy should be almost 15 times longer in order to generate a similar level of welfare per unit of resources used. But it is not.

Using up finite resources such as fossil fuels or easily accessible minerals will reduce the resources available for future beings' welfare, so we as humanity should use them effectively to generate welfare and HQALYs. In this respect, an alternative uncomfortable conclusion seems to favor increasing the number of lives with a low ecological footprint over the number of lives with a higher one.

Whether somebody is aligned with this alternative uncomfortable conclusion will depend heavily on how she/he sees the availability of resources. Let’s take these two extremes:

  A. If you believe we will soon overcome our planet's limits and face significant difficulties, as described by Corentin Biteau in "The great energy descent (short version) - An important thing EA might have missed"
  B. If you believe we still have a lot of room for growth and human ingenuity will find ways to make more people richer before overshooting Earth's ecosystems' capacity, and that this is very positive for welfare creation, as described in "Growth and the case against randomista development" by Hauke Hillebrandt and John G. Halstead

In case of (A - planet limits coming soon), the uncomfortable conclusion is that it is far better to let the richest people die rather than the poor ones in your uncomfortable example. A less uncomfortable conclusion would be to heavily limit their capacity to exhaust resources and impact ecosystems by setting limits on personal ecological footprint and/or levying huge taxes; but if someone must die, the choice seems evident.

In case of (B - room for positive growth), the smart thing to do might be to open the borders completely. As described in the post "Global economic inequality", life expectancy and many other welfare factors depend almost completely on where you are from. If we could, for example, increase the life expectancy of 100 million sub-Saharan Africans by 15 years by letting them come to Europe, this would mean 1,500 million HQALYs generated within a generation. If we don't think it is possible to extend the European way of life to so many additional people, then (B) is not completely true and we should go back to (A) and reduce the consumption per capita of rich people to leave more room for poor people to develop their welfare.
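For concreteness, here is the arithmetic behind the 1,500 million HQALYs figure. It is a back-of-the-envelope sketch; the 100 million people and 15 extra life-years are the comment's own hypothetical inputs:

```python
# Back-of-the-envelope check: 100 million people each gaining 15 life-years.
people = 100_000_000
extra_years_each = 15

total_hqalys = people * extra_years_each
print(total_hqalys)  # 1500000000, i.e. 1,500 million HQALYs
```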

“C1: A person in a poor country whose life is saved experiences less welfare than a person in a rich country whose life is saved”

(Asking a dumb question here, but) is this true? I.e., does an increase in material wealth actually increase psychological wellbeing?

I have an intuition that psychological well-being is mostly affected by how wealthy you are compared to your peer group.

Maybe you're talking about individuals in poor countries who are below the poverty line (in which case, I agree that they would experience much less psychological wellbeing).

But I would be surprised if individuals in rich countries are actually happier than individuals in poor countries (who have all their basic needs met).

So my understanding of the economics literature on income and subjective well-being is that currently we think that:

  • Relative income has a large effect on subjective well-being.
  • Absolute income has a smaller effect on subjective well-being, but the effect is still there.

Relevant abstract: https://scholar.google.co.uk/scholar?q=income+and+subjective+well-being&hl=en&as_sdt=0&as_vis=1&oi=scholart#d=gs_qabs&t=1662383007403&u=%23p%3DXTFeCIDXcGcJ
