(Cross-posted from Hands and Cities)

In Peter Singer’s classic thought experiment, you can save a drowning child, but only by ruining your clothes. Singer’s argument is that (a) you’re obligated to save the child, and that (b) many modern humans are in a morally analogous relationship to the world’s needy. (Note that on GiveWell’s numbers, the clothes in question need to cost thousands of dollars; but the numbers aren’t my point at present).

I was first exposed to arguments like this in high school, and they left a lasting impression on me, as they do on many. But I also think they tend to leave some people with a reluctant, defensive, and quasi-adversarial relationship to the omnipresence of Singerian decisions of this kind, once this omnipresence is made clear. See, for example, the large literature on moral “demandingness,” which focuses heavily on the question of what costs you “have” to pay to help others, and what you are “allowed” not to pay.

Morality, in this conception, is sort of like taxes; it invades your personal domain — the place where you make choices based on what you want and care about — and then takes some amount of stuff for its own ends. One hopes it isn’t too much; one wonders how little one can get away with paying, without breaking “the rules.”

I’m not going to evaluate this conception directly here. But here’s a version of Singer’s thought experiment that I sometimes imagine, which lands in a different register in my mind (your mileage may vary).

Suppose that I am setting off to walk in the forest on a crisp fall afternoon — something I’ve been looking forward to all week. As I near the forest, I notice, far away, what looks like some sort of commotion down by the river, off of my walking route, though I can’t see very clearly. I consider going to see what’s going on, but the light is fading, so I decide to continue on my way.

I learn later that while I was walking, a man in his early forties drowned in that river. He was pinned under some sort of machinery. Five other people were there, including his wife and son, but they weren’t strong enough to lift the machinery by themselves. One extra person might have made the difference.

(Here I try to imagine vividly what it was like trying to save him — his wife desperate, weeping, pulling at him, his own eyes frantic, the fear and chaos, the eventual despair — and the pain of his absence afterwards; and the counterfactual world in which instead, another person arrived in time to help, and he lives.)

The intuition this pumps for me is: I wish I’d gone to the river. Importantly, though, at least for me, the case leaves the focus of attention on the drowned man himself, and the clear sense in which a beautiful walk would be worth trading, cancelling, disappearing, to grant him multiple decades of life, and his family multiple decades of time together. The question of whether my choice to continue walking was wrong, though not entirely absent, is less salient. That is, for me (at least as a thought experiment — who knows how I’d feel if something like this really happened), the case touches most centrally into a feeling of regret, rather than guilt. I wish I could go back, and create a world where I had one fewer beautiful walk, and he lived.

Really, the cases here aren’t very different. But I like the way this one feels less oriented towards, as it were, calling someone an asshole (which isn’t to say that it’s not possible to be an asshole), even though it still makes a specific choice and trade-off salient. I think this avoids triggering some defenses around fearing reproach and wrongdoing, and homes in on the values that animate regret, sadness, and a desire to make things better. As a result, the emotional reaction feels to me somehow more wholehearted and internal. In weighing the walk vs. his life, I’m not asking whether or not I “have” to give up the walk, or whether I am “allowed” to keep it. I want to give it up; I wish I could; the choice feels clear, and continuous with other choices I make in weighing what matters to me most.

(I also like this version because it’s clearer and more immediate to me how I value long walks on fall afternoons than it is how I value e.g. money. Indeed, the fact that walks are a substantively meaningful good in my life — as opposed to something with stronger connotations of shallowness and frivolity, like an expensive suit — also makes me feel more like I’m in connection with the fact that giving it up is a genuine loss — albeit, a clearly worthwhile one.)

Notice the possibility, on a “morality as taxes” approach, of being glad that e.g. you weren’t able to see more clearly what was happening at the river. For perhaps, if it was clear to you that a man was drowning, then this would’ve triggered an obligation to give up your walk, and you would’ve “had” to do it, on pain of having been bad, wrong, worthy of reproach, etc. That is, on this view, what you care about is the walk; but sometimes, unfortunately for you, morality demands that you give up what you care about. You may well obey dutifully in such situations; but from the perspective of what you care about, you’re glad to avoid encounters with morality. And indeed, since morality is partly a matter of how you act in response to what you believe is happening, or to what is salient to you, you’re incentivized, subtly or not-so-subtly, to avoid forming certain kinds of beliefs, or making certain kinds of things salient. Thus a pull towards self-deception (though of course, self-deception in these respects will be condemned by morality as well).

None of this is particularly new (see, for example, Nate Soares on harmful uses of “should”; and comments from various effective altruists who prefer to avoid Singerian “obligation” framings of altruism). And there’s much more to say. In particular, I’m not here claiming that a conception of morality as something in some sense external to or constraining of what you care about is without basis, or that more wholehearted approaches resolve all questions about demandingness. But the difference between wholehearted approaches to Singerian decision-making and “morality as taxes” approaches is an important one in my world, and I try, where possible, to stay rooted in the former.

Comments

I often find myself thinking about this post, and I now often use the phrases "morality as taxes" and "wholeheartedness". Thank you for writing it :)

I just got around to reading this, and I love it.

I think it's much better than previous writing on obligation vs opportunity framings of EA, which always left me unsatisfied. 

I like the thought experiment, but I think (unfortunately) the Singerian analogy is closer to reality.

In the "woodland commotion" case, you don't feel bad for not going to help because, well, how could you have known this weird situation was occurring? But it doesn't seem like the world is like that, where it's so non-obvious how we help that no one could blame us for not seeing it.

Indeed, even if the world were like that to us initially, the situation changes as soon as someone tells you what you can do to help.

To adjust your case, suppose you hear a commotion in the distance, but then someone next to you who has binoculars sees what's going on and says "hey, there's a man stuck over there, shall we go help?" Then the case becomes much like Singer's shallow pond, where you can easily help someone else at a cost to yourself, and you know it. So all the concerns about demandingness resurface. But Singer, effective altruists, and many others in society are basically being the guy with binoculars ("hey, do you know how you can do good? Don't buy that latte, buy a bednet instead"), so once you've heard their pitch, you can hardly claim you had no idea how you could have helped.

I like the reframing, but I don't feel like it centrally addresses the problem of demandingness. With your example (knowing a man was pinned under machinery), as with seeing a drowning child, I imagine wanting to leap into action. If I imagine dragging a child out of a pond, being wet and cold but looking at the child and seeing that they're okay, with the parents grateful and the people around me happy, I feel actively glad I jumped in the pond, and would feel a similar regret if I had passed by.

The unpleasant feeling of wondering if I can get away with doing less, of not looking, of hoping too much won't be asked of me, etc., is still triggered for me in your framing if I imagine this scenario happening on every walk I go on, such that every time I tried to take a walk in the woods I'd think "oh geez, probably someone will be in trouble and I'll have to help, and it will be the right thing, but can I ever just have a walk in the woods in peace?" I imagine I would even gradually become inured to the situation, possibly feel impatience and not want to see the family's panic, etc.

In other words, it's mostly the near-omnipresence of opportunities to help that makes me feel the aversive demandingness reaction, and the temptation of self-deception. And I still feel unsure how to deal with that in a world where it does basically feel like people are drowning in every river, pond, and other body of water I see. 

I really like this. To me, it emphasizes that moral reason is a species of practical reason more generally and that the way moral reasons make themselves heard to us is through the generic architecture of practical reasoning. More precisely: Acting in a manner consistent with one's moral duties is not about setting one's preferences aside and living a life of self-denial; it's about being sufficiently attentive to one's moral world that one's preferences naturally evolve in response to sound moral reasons, such that satisfying those preferences and fulfilling one's duties are one and the same.

These cases seem not at all analogous to me because of the differing amount of uncertainty in each.

In the case of the drowning child, you presumably have high certainty that the child is going to die. The case is clear cut in that way.

In the case of the distant commotion on an autumn walk, it's just that: a distant commotion. As the walker, you have no knowledge of what it is or whether you could do anything about it. That you later learn you could have done something might lead you to experience regret, but in the moment you lacked the information that would have made it clear you should investigate. I think this entirely accounts for the difference in feeling between the two cases, and eliminates the power of the second case.

In the second case, any imposition on the walker to do anything hinges on their knowledge of what the result of the commotion will be. Given the uncertainty, you might reasonably conclude in the moment that it is better to avoid the commotion, maybe because you might do more harm than good by investigating.

Further, this isn't a case of negligence, where your failing to respond to the commotion makes you complicit in the harm, because you bear no responsibility for the machinery or for the conditions by which the man came to be pinned under it. Instead it seems to be a case where you are morally neutral throughout: you lacked the relevant knowledge, and you made no active effort to avoid gaining knowledge in order to escape moral culpability (which would itself make you complicit). So your example seems to lack the necessary conditions to make the point.

My reading of the post is quite different: This isn't an argument that, morally, you ought to save the drowning man. The distant commotion thought experiment is designed to help you notice that it would be great if you had saved him and to make you genuinely want to have saved him. Applying this to real life, we can make sacrifices to help others because we genuinely/wholeheartedly want to, not just because morality demands it of us. Maybe morality does demand it of us but that doesn't matter because we want to do it anyway.

Weird, that sounds strange to me, because I don't really regret things: I couldn't have done anything better than what I did under the circumstances, or else I would have done that. So the idea of regret awakening compassion feels very alien. Guilt seems more clear cut to me, because I can do my best but my best may not be good enough, and I may be culpable for the suffering of others as a result, perhaps through insufficient compassion.

I found this really motivating and inspiring. Thanks for writing. I've always found the "great opportunity" framing of altruism stretched and not very compelling but I find this subtle reframing really powerful. I think the difference for me is the emphasis on the suffering of the drowning man and his family, whereas "great opportunity" framings typically emphasise how great it would be for YOU to be a hero and do something great. I prefer the appeal to compassion over ego.

I usually think more along Singerian obligation lines and this has led to unhealthy "morality as taxes" thought patterns. On reflection, I realise that I haven't always thought about altruism in this way and I actually used to think about it in a much more wholehearted way. Somehow, I largely lost that wholehearted thinking. This post has reminded me why I originally cared about altruism and morality and helped me revert to wholehearted thinking, which feels very uplifting and freeing. I plan on revisiting this whenever I notice myself slipping back into "morality as taxes" thought patterns.
