(Cross-posted from Hands and Cities)

In Peter Singer’s classic thought experiment, you can save a drowning child, but only by ruining your clothes. Singer’s argument is that (a) you’re obligated to save the child, and that (b) many modern humans are in a morally analogous relationship to the world’s needy. (Note that on GiveWell’s numbers, the clothes in question need to cost thousands of dollars; but the numbers aren’t my point at present).

I was first exposed to arguments like this in high school, and they left a lasting impression on me, as they do on many. But I also think they tend to leave some people with a reluctant, defensive, and quasi-adversarial relationship to the omnipresence of Singerian decisions of this kind, once this omnipresence is made clear. See, for example, the large literature on moral “demandingness,” which focuses heavily on the question of what costs you “have” to pay to help others, and what you are “allowed” not to pay.

Morality, in this conception, is sort of like taxes; it invades your personal domain — the place where you make choices based on what you want and care about — and then takes some amount of stuff for its own ends. One hopes it isn’t too much; one wonders how little one can get away with paying, without breaking “the rules.”

I’m not going to evaluate this conception directly here. But here’s a version of Singer’s thought experiment that I sometimes imagine, which lands in a different register in my mind (your mileage may vary).

Suppose that I am setting off to walk in the forest on a crisp fall afternoon — something I’ve been looking forward to all week. As I near the forest, I notice, far away, what looks like some sort of commotion down by the river, off of my walking route, though I can’t see very clearly. I consider going to see what’s going on, but the light is fading, so I decide to continue on my way.

I learn later that while I was walking, a man in his early forties drowned in that river. He was pinned under some sort of machinery. Five other people were there, including his wife and son, but they weren’t strong enough to lift the machinery by themselves. One extra person might have made the difference.

(Here I try to imagine vividly what it was like trying to save him — his wife desperate, weeping, pulling at him, his own eyes frantic, the fear and chaos, the eventual despair — and the pain of his absence afterwards; and the counterfactual world in which instead, another person arrived in time to help, and he lives.)

The intuition this pumps for me is: I wish I’d gone to the river. Importantly, though, at least for me, the case leaves the focus of attention on the drowned man himself, and the clear sense in which a beautiful walk would be worth trading, cancelling, disappearing, to grant him multiple decades of life, and his family multiple decades of time together. The question of whether my choice to continue walking was wrong, though not entirely absent, is less salient. That is, for me (at least as a thought experiment — who knows how I’d feel if something like this really happened), the case touches most centrally into a feeling of regret, rather than guilt. I wish I could go back, and create a world where I had one fewer beautiful walk, and he lived.

Really, the cases here aren’t very different. But I like the way this one feels less oriented towards, as it were, calling someone an asshole (which isn’t to say that it’s not possible to be an asshole), even though it still makes a specific choice and trade-off salient. I think this avoids triggering some defenses around fearing reproach and wrongdoing, and homes in on the values that animate regret, sadness, and a desire to make things better. As a result, the emotional reaction feels to me somehow more wholehearted and internal. In weighing the walk vs. his life, I’m not asking whether or not I “have” to give up the walk, or whether I am “allowed” to keep it. I want to give it up; I wish I could; the choice feels clear, and continuous with other choices I make in weighing what matters to me most.

(I also like this version because it’s clearer and more immediate to me how I value long walks on fall afternoons than it is how I value e.g. money. Indeed, the fact that walks are a substantively meaningful good in my life — as opposed to something with stronger connotations of shallowness and frivolity, like an expensive suit — also makes me feel more like I’m in connection with the fact that giving it up is a genuine loss — albeit, a clearly worthwhile one.)

Notice the possibility, on a “morality as taxes” approach, of being glad that e.g. you weren’t able to see more clearly what was happening at the river. For perhaps, if it was clear to you that a man was drowning, then this would’ve triggered an obligation to give up your walk, and you would’ve “had” to do it, on pain of having been bad, wrong, worthy of reproach, etc. That is, on this view, what you care about is the walk; but sometimes, unfortunately for you, morality demands that you give up what you care about. You may well obey dutifully in such situations; but from the perspective of what you care about, you’re glad to avoid encounters with morality. And indeed, since morality is partly a matter of how you act in response to what you believe is happening, or to what is salient to you, you’re incentivized, subtly or not-so-subtly, to avoid forming certain kinds of beliefs, or making certain kinds of things salient. Thus a pull towards self-deception (though of course, self-deception in these respects will be condemned by morality as well).

None of this is particularly new (see, for example, Nate Soares on harmful uses of “should”; and comments from various effective altruists who prefer to avoid Singerian “obligation” framings of altruism). And there’s much more to say. In particular, I’m not here claiming that a conception of morality as something in some sense external to or constraining of what you care about is without basis, or that more wholehearted approaches resolve all questions about demandingness. But the difference between wholehearted approaches to Singerian decision-making and “morality as taxes” approaches is an important one in my world, and I try, where possible, to stay rooted in the former.

Comments (8)



I just got around to reading this, and I love it.

I think it's much better than previous writing on obligation vs opportunity framings of EA, which always left me unsatisfied. 

I like the thought experiment, but I think (unfortunately) the Singerian analogy is closer to reality.

In the "woodland commotion" case, you don't feel bad for not going to help because, well, how could you have known this weird situation was occurring? But it doesn't seem like the world is like that, where it's so non-obvious how we help that no one could blame us for not seeing it.

Indeed, even if the world were like that to us initially, the situation changes as soon as someone tells you what you can do to help.

To adjust your case: suppose you hear a commotion in the distance, but then someone next to you with binoculars sees what's going on and says, "hey, there's a man stuck over there, shall we go help?" Then the case becomes much like Singer's shallow pond, where you can easily help someone else at a cost to yourself, and you know it. So all the concerns about demandingness resurface. But Singer, effective altruists, and many others in society are basically being the person with binoculars ("hey, do you know how you can do good? Don't buy that latte, buy a bednet instead"), so once you've heard their pitch, you can hardly claim you had no idea how you could have helped.

I like the reframing, but I don't feel like it centrally addresses the problem of demandingness. With your example (and knowing a man was pinned under machinery) and seeing a drowning child, I imagine wanting to leap into action. If I dragged a child out of a pond, and I imagine being wet and cold but looking at the child and seeing that they're okay, and maybe the parents are grateful and people around me are happy, I feel actively glad I jumped in the pond, and would feel similar regret if I passed by. 

The unpleasant feeling of wondering if I can get away with doing less, of not looking, of hoping too much won't be asked of me, etc., is still triggered for me in your framing if I imagine that this scenario happens on every walk I go on, and every time I tried to take a walk in the woods I thought "oh geez, probably someone will be in trouble and I'll have to help, and it will be the right thing, but can I ever just have a walk in the woods in peace?" I imagine I would even gradually become inured to the situation, possibly feel impatience and not want to see the family's panic, etc.

In other words, it's mostly the near-omnipresence of opportunities to help that makes me feel the aversive demandingness reaction, and the temptation of self-deception. And I still feel unsure how to deal with that in a world where it does basically feel like people are drowning in every river, pond, and other body of water I see. 

I really like this. To me, it emphasizes that moral reason is a species of practical reason more generally and that the way moral reasons make themselves heard to us is through the generic architecture of practical reasoning. More precisely: Acting in a manner consistent with one's moral duties is not about setting one's preferences aside and living a life of self-denial; it's about being sufficiently attentive to one's moral world that one's preferences naturally evolve in response to sound moral reasons, such that satisfying those preferences and fulfilling one's duties are one and the same.

These cases seem not at all analogous to me because of the differing amount of uncertainty in each.

In the case of the drowning child, you presumably have high certainty that the child is going to die. The case is clear cut in that way.

In the case of the distant commotion on an autumn walk, it's just that, a distant commotion. As the walker, you have no knowledge about what it is and whether or not you could do anything. That you later learn you could have done something might lead you to experience regret, but in the moment you lacked information to make it clear you should have investigated. I think this entirely accounts for the difference in feeling about the two cases, and eliminates the power of the second case.

In the second case, any imposition on the walker to do anything hinges on their knowledge of what the result of the commotion will be. Given the uncertainty, you might reasonably conclude in the moment that it is better to avoid the commotion, maybe because you might do more harm than good by investigating.

Further, this isn't a case of negligence, where your failure to respond to the commotion makes you complicit in the harm, because you bear no responsibility for the machinery or for the conditions by which the man came to be pinned under it. Instead, it seems to be a case where you are morally neutral throughout: you lacked the relevant knowledge, and you made no active effort to avoid gaining that knowledge (deliberately staying ignorant in order to dodge moral culpability would itself make you complicit, but that is not what happened here). So your example seems to lack the necessary conditions to make the point.

My reading of the post is quite different: This isn't an argument that, morally, you ought to save the drowning man. The distant commotion thought experiment is designed to help you notice that it would be great if you had saved him and to make you genuinely want to have saved him. Applying this to real life, we can make sacrifices to help others because we genuinely/wholeheartedly want to, not just because morality demands it of us. Maybe morality does demand it of us but that doesn't matter because we want to do it anyway.

Weird, that sounds strange to me. I don't really regret things, since I couldn't have done anything better than what I did under the circumstances, or else I would have done it; so the idea of regret awakening compassion feels very alien. Guilt seems more clear-cut to me, because I can do my best, but my best may not be good enough, and I may be culpable for the suffering of others as a result, perhaps through insufficient compassion.

I found this really motivating and inspiring. Thanks for writing. I've always found the "great opportunity" framing of altruism stretched and not very compelling but I find this subtle reframing really powerful. I think the difference for me is the emphasis on the suffering of the drowning man and his family, whereas "great opportunity" framings typically emphasise how great it would be for YOU to be a hero and do something great. I prefer the appeal to compassion over ego.

I usually think more along Singerian obligation lines and this has led to unhealthy "morality as taxes" thought patterns. On reflection, I realise that I haven't always thought about altruism in this way and I actually used to think about it in a much more wholehearted way. Somehow, I largely lost that wholehearted thinking. This post has reminded me why I originally cared about altruism and morality and helped me revert to wholehearted thinking, which feels very uplifting and freeing. I plan on revisiting this whenever I notice myself slipping back into "morality as taxes" thought patterns.
