Epistemic status: Not new material, but hopefully points more directly at key EA intuitions.
Ana is a hypothetical junior software engineer in Silicon Valley making $150k/year. Every year, she spends 10% of her income to anonymously buy socks for her colleagues. Most people would agree that Ana is being altruistic, but not particularly efficient about it. If utility is logarithmic in income, Ana can 40x her impact by instead giving the socks to a local homeless person with an income of $5000. But in the EA community, we've noticed further multipliers:
- 40x: giving socks to local homeless people instead of her colleagues
- 10x more: giving socks to the poorest people in the world (income $500) instead of homeless people
- 2x more: giving cash (GiveDirectly) instead of socks
- 8x more: giving malaria nets rather than cash
- 10x more: farmed animal welfare rather than human welfare[1]
- 4x more: working in a more lucrative industry like quant research, working longer hours, and doing salary negotiation to raise her salary to $600k[2]
- 8x more: donating 80% instead of 10%
- 10x more: taking on risk to shoot for charity entrepreneurship or billionairedom, producing $6M of expected value yearly[3]
Total multiplier: about 20,480,000x[4]
I think that many people new to EA have heard that multipliers like these exist, but don't really internalize that all of these multipliers stack multiplicatively. If Ana hits all of these bonuses, she will have a direct impact 20,480,000 times larger than giving socks to random colleagues. If she misses one of these multipliers, say the last one, Ana will still have a direct impact 2,048,000 times larger than with the initial socks plan. This sounds good until you realize that Ana is losing out on 90% of her potential impact, consigning literally millions of chickens to an existence worse than death. To get more than 50% of her maximum possible impact, Ana must hit every single multiplier. This is one way that reality is unforgiving.
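To make the stacking explicit, here is a minimal Python sketch of the arithmetic above (the dictionary labels are just shorthand for the numbered list; the values are taken directly from it):

```python
# Multipliers from the list above, relative to the "socks for colleagues" baseline.
multipliers = {
    "local homeless people instead of colleagues": 40,
    "world's poorest instead of local homeless people": 10,
    "cash (GiveDirectly) instead of socks": 2,
    "malaria nets instead of cash": 8,
    "farmed animal welfare instead of human welfare": 10,
    "quant research salary ($600k instead of $150k)": 4,
    "donating 80% instead of 10%": 8,
    "charity entrepreneurship / billionairedom (expected value)": 10,
}

total = 1
for factor in multipliers.values():
    total *= factor
print(f"All eight multipliers: {total:,}x the baseline")        # 20,480,000x

# Miss just the last 10x multiplier and 90% of the potential impact is gone:
missing_last = total // 10
print(f"Missing the last one: {missing_last:,}x "
      f"({missing_last / total:.0%} of maximum)")               # 2,048,000x, 10%
```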
Multipliers result from judgment, ambition, and risk
- Good judgment: responsible for multipliers (1) through (5), making the impact 64,000 times larger, and is implicit in (8) too, because going through with a bad for-profit or charity startup idea could be of zero or even negative value.
- Ambition: responsible for multipliers (6) through (8), making her expected impact 320x larger.
- Willingness to take on risk is mostly relevant in (8), though you could think of (5) as having risk from moral uncertainty.
This example is neartermist to make the numbers more concrete, but the same principles apply within longtermism. For a longtermist, good judgment and ambition are even more critical. It's difficult to tell the difference between a project that reduces existential risk by 0.02%, a project that reduces x-risk by 0.002%, and a worthless project, so you need excellent judgment to get within 50% of your maximum impact. Ambition is in some sense what longtermism is all about-- longtermist causes have a huge multiplier resulting from astronomically larger scale and (longtermists argue) only somewhat worse tractability. And taking on risk allows hits-based giving, whether in neartermism or longtermism.
More generally, actions, especially complicated actions and research directions, live in an extremely high-dimensional space. If actions are vectors and the goodness of an action is its cosine similarity to the best action, and your action is 90% as good as the optimum (about 26° off the best path) in each of 50 orthogonal directions, the amount of good you do is capped at 0.9^50 ≈ 0.005x the maximum.
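A quick numerical check of that toy model, assuming (as the paragraph above does) that overall goodness is the product of the per-direction alignments:

```python
import math

per_direction = 0.9                              # cosine similarity in each direction
angle = math.degrees(math.acos(per_direction))   # how far off the best path: ~25.8°
n_directions = 50

overall = per_direction ** n_directions          # compounding across orthogonal directions
print(f"{angle:.1f}° off in each of {n_directions} directions "
      f"caps you at {overall:.4f}x the maximum") # ~0.0052x, i.e. about 0.005x
```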
Implications
- It's very difficult to take an arbitrary project that you're excited about for other reasons, and tweak it to "make it EA"[5]. An arbitrary project will have zero or one of these multipliers, and making it hit seven or eight more multipliers will often make it unrecognizable.
- People who are not totally dedicated to maximizing impact will make some concession to other selfish or altruistic goals, like having a child, working in whichever of (academia, industry, other) is most comfortable, living in a specific location, getting fuzzies, etc. If this would make them miss out on a multiplier, their "EA part" should try much harder to make a less costly concession instead, or find a way to still hit the multiplier.
- It's more important to have good judgment than to dedicate 100% of your life to an EA project. If output scales linearly with work hours, then you can hit 60% of your maximum possible impact with 60% of your work hours. But if bad judgment causes you to miss one or two multipliers, you could make less than 10% of your maximum impact. (But note that working really hard can sometimes enable multipliers-- see this comment by Mathieu Putz.)
- Aiming for the minimum of self-care is dangerous.
- Information is extremely valuable when it determines if you can apply a multiplier. For example, Ana should probably spend a year deciding whether she's a good fit for charity entrepreneurship, or thinking about whether her moral circle includes chickens, but not spend a year choosing between two careers that have similar impact. Networking is a special case of information.
- Finding multipliers is hard, so most people in the EA community (likely including me) are missing at least one multiplier, and are consequently, in some sense, doing less than 50% of the good they could.
[1] Assumes 40 chicken QALYs/$, 1 human QALY/$100, and that 400 chicken QALYs = 1 human QALY due to neuron differences; that works out to 0.1 human-equivalent QALYs per dollar for chickens versus 0.01 for humans, hence the 10x. Ana's moral circle includes all beings weighted by neuron count, but she hadn't thought about this enough.
[2] As of 2022, typical pay for great quant researchers with a couple of years of experience, or great developers with a few years of experience.
[3] Ana is in theory ambitious and skilled enough to start a charity or tech startup, but she hasn't heard of Charity Entrepreneurship yet.
Thanks for this comment, I made minor edits to that point clarifying that academia can be good or bad.
First off, I think we should separate concerns of truth from those of offputtingness, and be clear about which is which. With that said, I think "concession to other selfish or altruistic goals" is true to the best of my knowledge. Here's a version of it that I think about, which is still true but probably less offputting, and could have been substituted for that bullet point if I were more careful and less concise:
When your goal is to maximize impact, but parts of you want things other than maximizing impact, you must either remove these parts or make some concession to satisfy them. Usually stamping out a part of yourself is impossible or dangerous, so making some concession is better. Some of these concessions are cheap (from an impact perspective), like donating 2% of your time to a cause you have a personal connection to rather than the most impactful one. Some are expensive in that they remove multipliers and lose >50% of your impact, like changing your career from researching AI safety to working at Netflix because your software engineer friends think AI safety is weird. Which concessions are cheap vs expensive depends on your situation; living in a particular location can be free if you're a remote researcher for Rethink Priorities but expensive if by far the best career opportunity for you is to work in a particular biosecurity lab. I want to caution people against making an unnecessarily expensive concession, or making a cheap concession much more expensive than necessary. Sometimes this means taking resources away from your non-EA goals, but it does not mean totally ignoring them.
Regarding having a child, I'm not an expert or a parent, but my impression is it's rare for having kids to actually create more impact than not having the desire in the first place. I vaguely remember Julia Wise having children due to some combination of (a) non-EA goals, and (b) not having kids would make her sad, potentially reducing productivity. In this case, the impact-maximizer would say that (a) is fine/unavoidable-- not everyone is totally dedicated to impact-- and (b) means that being sad is a more costly concession than not having kids, so having kids is the least costly concession available. Maybe for some, having kids makes life meaningful and gives them something to fight for in the world, which would increase their impact. But I haven't met any such people.
It's possible to have non-impact goals that actually increase your impact. Some examples are being truth-seeking, increasing your status in the EA community, or not wanting to let down your EA friends/colleagues. But I have two concerns with putting too much emphasis on this. First, optimizing too hard for this other goal has Goodhart concerns: there are selfish rationalists, EAs who add to an echo chamber, and people who stay on projects that aren't maximally impactful. Second, the idea that we can directly optimize for impact is a core EA intuition, and focusing on noncentral cases of other goals increasing impact might distract from this. I think it's better to realize that most of us are not pure impact-maximizers, that we must make concessions to other goals, and that which concessions we make is extremely important to our impact.