
TL;DR

This thought experiment responds to the common denial that altruism is an "obligation." It highlights the absence of a meaningful difference between acts and omissions, suggesting that both involve prioritizing personal interests over the well-being of others. By exploring this, I hope to clarify why I view altruism as a moral obligation rather than merely a kind choice.

 

Thought Experiment: The Invisible Button and the Value of a Child's Life 

Imagine you find yourself alone in a room, standing before a button. No one knows it exists, and no one ever will. If you press it, your bank account increases by $6,000. But there's a catch: somewhere in the developing world, a random child will contract malaria and die. You know this with certainty. The decision is entirely yours, untraceable and beyond judgment from anyone else.

Now, consider a different scenario: you are $6,000 richer and have the option to press a different button, this one preventing a child from contracting malaria and dying. You're essentially faced with the same decision—whether to prioritize your own interests or save a child's life—but framed in a radically different context. The former situation involves taking an active step that results in a child's death, while the latter involves the choice to prevent it. It's worth noting that with cost-effective charities like the Against Malaria Foundation, where a marginal contribution of roughly $6,000 saves a life, we effectively find ourselves in this situation every day.

Some would argue that there's a profound moral distinction between pressing a button to kill and simply declining to press a button to save. But in both cases, aren't you privileging your own $6,000 of interests (for example, the difference between an income of $65,000 and $71,000 in a given year) over the life of another human being? Whether through action or inaction, the result is the same: a child's life is weighed against a sum of money, and the money is chosen.

Altruism as an Obligation 

This thought experiment is offered in response to the common resistance to framing helping others as an "obligation." Many in the Effective Altruism community reject this framing, finding it unhelpful. Instead, they prefer to think of altruism as a kind and commendable action, but not something that is morally required of us.

To those who reject the "obligation" framing, I would ask: how would you feel about the decision to affirmatively press the button that results in a child's death, or in animals being tortured, for the sake of personal gain? I view altruism as an obligation because I see no principled difference between the person who presses the button to harm and the one who declines to press the button to help, in a scenario where there is no possibility of discovery. Both decisions privilege personal interests over the well-being of others, and both, I believe, carry moral weight.

The Role of Detection and Consequences 

The distinction between acts and omissions often hinges on whether harm can be traced back to an agent: affirmative actions tend to have broader, traceable effects on the wider social fabric, leading us to view them as more morally egregious than omissions. For instance, the societal impact of a murder is often far greater than that of an accidental death, because the act of murder disrupts social trust, incites fear, and demands justice in a way that an accident does not. This potential traceability is why it's crucial that the thought experiment stipulates the undetectability of the action, ensuring that the moral decision is not influenced by the possibility of detection or broader consequences.

Implications for Effective Altruism 

This thought experiment challenges us to reconsider how we frame our moral decisions, particularly in the context of Effective Altruism. If we acknowledge that there’s little difference between choosing not to help and choosing to harm, then perhaps we should rethink the idea that altruism is merely a "kind" act rather than a moral imperative. For those of us committed to doing the most good, this perspective urges us to consider the true implications of our choices and the responsibility we bear.

Conclusion

In the end, whether you view altruism as an obligation or a gratuitous kindness may depend on how you interpret the ethical landscape of these decisions. This thought experiment extends beyond the decision to help or harm a child; it also applies to choices like allowing animals to be tortured through factory farming, or to decisions that affect future generations of beings. The denial of altruistic obligations implies that it is not wrong to privilege your own interests orders of magnitude above those of the people you could help with your choices. Whether that choice arises in an affirmative or a negative context does not, to me, change the fundamental wrongfulness of privileging one's interests over the well-being of others to such an obscene degree.

Comments

Thank you for this interesting post, even though I don’t agree with your conclusions.

I believe one key difference between killing someone and letting someone die is its effect on one’s conscience.

If I kill someone, I violate their rights. Even if no one would directly know what I did with the invisible button, I’d know what I did, and that would eat at my conscience, and affect how I’d interact with everyone after that. Suddenly, I’d have less trust in myself to do the right thing (to not do what my conscience strongly tells me not to do), and the world would seem like a less safe place because I’d suspect that others would’ve made the same decision I did, and now might be effectively willing to kill me for a mere $6,000 if they could get away with it.

If I let someone die, I don’t violate their rights, and, especially if I don’t directly experience them dying, there’s just less of a pull on my conscience. 

One could argue that our consciences don't make sense and should be more in line with classic utilitarianism, but I'd argue that we should be extremely careful about making big changes to human consciences in general without thoroughly thinking through and understanding their full range of effects.

 

Also, I don’t think use of the term “moral obligation” is optimal, since to me it implies a form of emotional bullying/blackmail: you’re not a good person unless you satisfy your moral obligations. Instead, I’d focus on people being true to their own consciences. In my mind, it’s a question of trying to use someone’s self-hate to “beat goodness into them” versus trying to inspire their inner goodness to guide them because that’s what’s ultimately best for them.

By “self-hate,” I mean hate of the parts of ourselves that we think are “bad person” parts, but are really just “human nature” parts that we can accept about ourselves without that meaning we have to indulge them.

Yeah I think in the case of both choosing not to act to save the kid and acting to kill the kid (in this narrow hypothetical) you're violating the kid's rights just as much (privileging your financial interests over his life).

And regarding your point about conscience: you're appealing to our moral intuitions, whose validity we can question, particularly with thought experiments such as these.

I suppose I would agree that acting as a moral person requires a significant consideration of other conscious beings with regard to our choices. And I think the vast majority of people fail to take adequate consideration thereof. I suppose that's how I consider my own "conscience": am I making choices with sufficient regard for the interests of other beings across space and time? I think attempting to act accordingly is part of my "inner goodness".

The difference is that property is distributed based on morally significant, non-random, voluntary activities. See Governing Least by Dan Moller for a moral defense of property. This implies that a) you are entitled to your property because you earned it through morally legitimate means, and b) it is good for society more broadly to accept the moral legitimacy of property earned through creation, discovery, etc., so the norm that people in general are entitled to their property in most cases is pro-social.

In contrast to most forms of property, accepting money for murder is not a defensible basis for property. This means that a) you are not entitled to that money, and b) supporting such a norm would be bad.

There are of course cases in which you might acquire property in a non-morally-legitimate way. I think the distinction there is far more tenuous, but that is not the case for the bulk of most people's money.

I'm not saying you're not legally entitled to the money.

I'm saying that, in an ultimate sense, the kid is more morally entitled not to die from malaria than you are to retain your $6k.

And there are no norms that would develop in the thought experiment. Your activity would be totally secret. The further policy issues might indicate that people ought to have a right to their money, but that does not bear on whether they would be morally obligated to exercise it in certain ways.

Yes, but I think it's significant that one is morally entitled, not just legally entitled. In other words, imagine replacing pressing the button with actually doing the work to earn $6,000. Do you think you are, for example, obligated to drive 12 hours each way in order to pull a drowning child out of a lake? The amount of money in your bank account is endogenous to how much work and effort you put into filling it, whereas the way this thought experiment is framed makes it sound like that money fell from the sky.

If you think you are in fact obligated to drive 24 hours, increase your own risk of death by taking on a risky job, or give up time with your children in order to save a stranger, then I am more sympathetic to the idea that you are obligated to give up money for that stranger. However, I do not share that intuition.

My intuition doesn't really change significantly if you change the obligation from a financial one to the amount of labor that would correspond to the financial one.

If I recall correctly, the value of a statistical life used by government agencies is about $10 million per life, calculated from how much people implicitly value their own lives through the choices they make: paying costs to avoid risk, and accepting risk in exchange for benefits.

If we round the cost to save a life in the developing world up to $10,000, people in wealthy countries could save 1,000 lives for the price at which they value their own.

I simply think that acting as though you value another person 1,000 times less than you value yourself is immoral. This is why I do think that incorporating the value of other conscious beings to some degree is morally required.

I am curious where you think it stops. What standard of living are people "obligated" to sink to in order to help strangers? I don't deny any of this is good or praiseworthy, but it doesn't seem to have any limiting principle. Should everyone live in squalor, forego a family/deep friendships, and not pursue any passions because time and money can always be spent saving another stranger?

I think Peter Singer's book The Life You Can Save addresses this question more fully. But I would say that the obligation of people in wealthy countries is to make life choices, including the sharing of their own wealth, in a way that shows some degree of consideration for their ability to help others so efficiently.

I do not know exactly where I would draw the line, but failing to make some significant effort to help would be a moral failing; a degree of giving similar to the 10% pledge would be a minimum, and in many situations I would think even more than that is morally required.

I definitely think that the very demanding requirement you stated above would make more sense than no requirement at all, under which one implicitly values others at less than a thousandth of how one values oneself.
