In a recent bonus episode of the Bayesian Conspiracy podcast, Eneasz Brodski shared a thought experiment that caused no small amount of anguish. In the hypothetical, some eccentric but trustworthy entity is offering to give you an escalating amount of money for your fingers, starting at $10,000 for the first one and increasing 10x per finger up to $10 trillion for all of them.[1] On encountering this thought experiment, Eneasz felt (not without justification) that he mostly valued his manual dexterity more than wealth. Then, two acquaintances pointed out that one could use the $10 trillion to do a lot of good, and Eneasz proceeded to feel terrible about his decision.
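For concreteness, the payouts escalate exponentially: on my reading of the setup, finger n fetches $10,000 × 10^(n−1), so the tenth finger alone is worth $10 trillion. A minimal sketch of the schedule, assuming that reading:

```python
# Hypothetical payout schedule: $10,000 for the first finger,
# multiplied by 10 for each finger after that.
for n in range(1, 11):
    payout = 10_000 * 10 ** (n - 1)
    print(f"Finger {n:2d}: ${payout:,}")
# The tenth finger pays $10,000,000,000,000 -- i.e., $10 trillion.
```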

I had several responses to this episode, but today I'm going to focus on one of them: the difference between cost and sacrifice.

How Ayn Rand Made Me a Better Altruist

But first, a personal anecdote. I was raised Catholic, and like the good Catholic boy that I was, I once viewed altruism through the lens of personal sacrifice. For the uninitiated, Catholic doctrine places a strong emphasis on this notion of sacrifice - an act of self-abnegation which places The Good firmly above one's own wants or needs. I felt obligated to help others because it was the Right Thing to Do, and I accepted that being a Good Person meant making personal sacrifices for the good of others, regardless of my own feelings. I divided my options into "selfish" and "selfless" categories, and felt guilty when choosing the former. Even as I grew older and my faith in Catholicism began to wane, this sense of moral duty persisted. It was a source of considerable burden and struggle for me, made worse by the fact that the associated cultural baggage was so deeply ingrained as to be largely invisible to me.

Then, in a fittingly kabbalistic manner, Atlas Shrugged flipped my world upside down.[2] 

Ayn Rand, you see, did not believe in sacrifice. In her philosophy, the only real moral duty is the duty to oneself and one's own principles. She happened to hold a great many other convictions about what those principles ought to be, some of which I now dispute; but in this, I believe, she was wholly correct.

My teenage self, at least, found this perspective incredibly freeing. (Perhaps a bit too freeing, as I've always been the sort of person who enjoys being smugly right about things, and taking the word "selfish" as a compliment for a couple years did not do my social life any favors.) But I emerged from this phase like the titular unburdened Titan himself, having thoroughly abandoned all thought of dutifully adhering to any principles besides my own.

Which of course led me to wonder, for perhaps the first time: What are my principles? If my morals are not to be guided by God nor by the expectations of others, but by my own reflectively endorsed desires, then what do I actually want?

It turns out that I want to help people. I want to ease suffering and promote wellbeing; I want to create things people value; I want to surround myself with a thriving civilization filled with flourishing people.

In abandoning the values that had been imposed on me, I discovered that my own values included a strong preference for the wellbeing of others. And that makes all the difference.

Cost vs Sacrifice

Let's return to the ten-finger demon. We'll set aside, for now, the argument that the money from selling fingers has lots of selfish benefits. That's not what this post is about. Let's focus specifically on the opportunity to Do Good, and what it means for us.

Here's the thing about thought experiments. They're not supposed to be traps for the unwary. In the best case, they are ways to notice problems in our thinking by making choices stark and binary. If a decision posed in a thought experiment makes you feel utterly miserable, that is a warning sign.

In the podcast, one person says something to the effect of: "[I don't like it], but if you really pressed me, I would make the [painful] sacrifice so that I could use the money to help others."

I applaud the sentiment, but this is the wrong way to think about the problem.

Buying something more valuable with something less valuable should never feel like a terrible deal. If it does, something is wrong.

If enough money to end world hunger, lift millions out of poverty, delay global warming, fund a bunch of medical research, outspend the lobbying efforts of multibillion-dollar companies, and do a half-dozen similar things seems more valuable to you than manual dexterity, then you may have discovered something interesting about your preferences.

If, however, your instinct is to keep your fingers and feel guilty about it, then perhaps you should ask: whence comes this guilt? Am I failing to live up to a standard I have set for myself? Or am I allowing the standards set by others to override my own preferences? 

If you value $10 trillion worth of improvements to the world more than you value ten fingers, then this transaction is not a sacrifice. It is a cost you are paying to get more of what you want.

If, on reflection, you actually value your fingers more than the leverage $10 trillion buys you, then you shouldn't pay that cost.

Own Your Values

It's a mistake to do as I once did, and divide the outcomes you are capable of achieving into buckets of "selfish" and "selfless", especially if doing so makes you inclined to always let one bucket win at the expense of the other. The universe does not distinguish between selfish goals and selfless ones.

When I was a Reliability Engineer, I donated some of my money to the Against Malaria Foundation. I did not donate everything and decide to live as a pauper. Setting aside how that would have made me worse at my actual job (and at earning money to donate), I didn't do it because I don't want to live that way.

I'd take the $10 trillion, even if I couldn't use it to buy prosthetics or live in luxury or whatever, because $10 trillion is a massive amount of leverage that I likely can't match any other way. With it, I could steer the world in ways that, by my own values, are better than having functional hands. It's a slam dunk. But this is not, in my view, taking a "selfless" option over a "selfish" one. I just want the leverage more than I want my hands.

For the glowfic fans out there, Alicorn's Bella characters embodied this philosophy with their Three Questions: What do I want? What do I have? How can I use what I have to get what I want?

Don't ask, "Am I a bad person?" Instead, ask "What do I want to achieve?" and make it so. The Replacing Guilt series has more to say on this topic as well. [3]

I implore all altruists, non-altruists, and aspiring altruists alike to make your choices and own them. Leave the hand-wringing to those with all their fingers.

  1. ^

    It was further stipulated that this would not cause inflation or have some other horrible monkey's paw effect; it's just $10 trillion worth of anything money can buy you.

  2. ^

    I don't claim it's a perfect book, but it does contain messages that some people - like young Joe - badly need to hear, and that less emphatic sources often fail to convey.

  3. ^

    For those wondering, I found these posts valuable well before I started working for the author.
