# 232

Epistemic status: Not new material, but hopefully points more directly at key EA intuitions.

Ana is a hypothetical junior software engineer in Silicon Valley making $150k/year. Every year, she spends 10% of her income to anonymously buy socks for her colleagues. Most people would agree that Ana is being altruistic, but not being particularly efficient about it. If utility is logarithmic in income, Ana can 40x her impact by instead giving the socks to a local homeless person who has an income of $5,000. But in the EA community, we've noticed further multipliers:

1. 40x: giving socks to local homeless people instead of her colleagues
2. 10x more: giving socks to the poorest people in the world (income $500) instead of homeless people
3. 2x more: giving cash (GiveDirectly) instead of socks
4. 8x more: giving malaria nets rather than cash
5. 10x more: farmed animal welfare rather than human welfare[1]
6. 4x more: working in a more lucrative industry like quant research, working longer hours, and doing salary negotiation to raise her salary to $600k[2]
7. 8x more: donating 80% instead of 10%
8. 10x more: taking on risk to shoot for charity entrepreneurship or billionairedom, producing $6M of expected value yearly[3]

Total multiplier: about 20,480,000x[4]

I think that many people new to EA have heard that multipliers like these exist, but don't really internalize that all of these multipliers stack multiplicatively. If Ana hits all of these bonuses, she will have a direct impact 20,480,000 times larger than giving socks to random colleagues. If she misses one of these multipliers, say the last one, Ana will still have a direct impact 2,048,000 times larger than with the initial socks plan. This sounds good until you realize that Ana is losing out on 90% of her potential impact, consigning literally millions of chickens to an existence worse than death. To get more than 50% of her maximum possible impact, Ana must hit every single multiplier. This is one way that reality is unforgiving.

## Multipliers result from judgment, ambition, and risk

- Good judgment is responsible for multipliers (1) through (4), making the impact 80,000 times larger, and is implicit in (8) too, because going through with a bad for-profit or charity startup idea could be of zero or even negative value.
- Ambition is responsible for multipliers (6) through (8), making her expected impact 320x larger.
- Willingness to take on risk is mostly relevant in (8), though you could think of (5) as having risk from moral uncertainty.

This example is neartermist to make the numbers more concrete, but the same principles apply within longtermism. For a longtermist, good judgment and ambition are even more critical. It's difficult to tell the difference between a project that reduces existential risk by 0.02%, a project that reduces x-risk by 0.002%, and a worthless project, so you need excellent judgment to get within 50% of your maximum impact.
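The stacking arithmetic can be checked in a few lines of Python (the multiplier values are exactly the ones listed above; this is just a sanity check, not part of the original post):

```python
import math

# Multipliers (1) through (8) from the list above.
multipliers = [40, 10, 2, 8, 10, 4, 8, 10]

total = math.prod(multipliers)
print(f"{total:,}")  # 20,480,000 — total multiplier over the sock plan

# Missing the final 10x multiplier still looks impressive in absolute
# terms, but it forfeits 90% of the maximum possible impact:
without_last = math.prod(multipliers[:-1])
print(f"{without_last:,}")              # 2,048,000
print(without_last / total)             # 0.1
```

Because the factors multiply rather than add, dropping any single large factor costs a fixed *fraction* of the maximum, no matter how big the remaining product is.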
Ambition is in some sense what longtermism is all about: longtermist causes have a huge multiplier resulting from astronomically larger scale and (longtermists argue) only somewhat worse tractability. And taking on risk allows hits-based giving, whether in neartermism or longtermism.

More generally, actions, especially complicated actions and research directions, live in an extremely high-dimensional space. If actions are vectors and the goodness of an action is its cosine similarity to the best action, and your action is 90% as good as the optimum (25° off the best path) in each of 50 orthogonal directions, the amount of good you do is capped at 0.9^50 = 0.005x the maximum.

## Implications

- It's very difficult to take an arbitrary project that you're excited about for other reasons, and tweak it to "make it EA"[5]. An arbitrary project will have zero or one of these multipliers, and making it hit seven or eight more multipliers will often make it unrecognizable.
- People who are not totally dedicated to maximizing impact will make some concession to other selfish or altruistic goals, like having a child, working in whichever of (academia, industry, other) is most comfortable, living in a specific location, getting fuzzies, etc. If this would make them miss out on a multiplier, their "EA part" should try much harder to make a less costly concession instead, or find a way to still hit the multiplier.
- It's more important to have good judgment than to dedicate 100% of your life to an EA project. If output scales linearly with work hours, then you can hit 60% of your maximum possible impact with 60% of your work hours. But if bad judgment causes you to miss one or two multipliers, you could make less than 10% of your maximum impact. (But note that working really hard can sometimes enable multipliers; see this comment by Mathieu Putz.)
- Aiming for the minimum of self-care is dangerous.
- Information is extremely valuable when it determines whether you can apply a multiplier. For example, Ana should probably spend a year deciding whether she's a good fit for charity entrepreneurship, or thinking about whether her moral circle includes chickens, but not spend a year choosing between two careers that have similar impact. Networking is a special case of information.
- Finding multipliers is hard, so most people in the EA community (likely including me) are missing at least one multiplier, and consequently in some sense doing less than 50% of the good they could be.

1. ^ Assumes 40 chicken QALYs/$, 1 human QALY/$100, and that 400 chicken QALY = 1 human QALY due to neuron differences. Ana's moral circle includes all beings weighted by neuron count, but she hadn't thought about this enough.
2. ^ As of 2022, typical pay for great quant researchers with a couple of years of experience, or great developers with a few years of experience.
3. ^ Ana is in theory ambitious and skilled enough to start a charity or tech startup, but she hasn't heard of Charity Entrepreneurship yet.
4. ^ Could be off by 10x in either direction, but doesn't affect my core point.
5. ^ "make it EA" = "make it one of the highest-impact things you could be doing", not "make the EA community approve of it"

## Comments

Great post! While I agree with your main claims, I believe the numbers for the multipliers (especially in aggregate and for ex ante impact evaluations) are nowhere near as extreme in reality as your article suggests, for the reasons that Brian Tomasik elaborates on in these two articles:

---

I mostly agree; the uncertain flow-through effects of giving socks to one's colleagues totally overwhelm the direct impact and are probably at least 1/1000 as big as the effects of being a charity entrepreneur (when you take the expected value according to our best knowledge right now).
If Ana is trying to do good by donating socks, instead of saying she's doing 1/20,000,000th the good she could be, perhaps it's more accurate to say that she has an incorrect theory of change and is doing good (or harm) by accident.

I think the direct impacts of the best interventions are larger than their expected (according to our current knowledge) net flow-through effects in a trivial sense, since if nothing else we can analyze flow-through effects of arbitrary interventions and come up with better interventions that optimize for this until we find the best ones.

---

I agree, and if the multiplier numbers are lower, then some claims don't hold, e.g.:

> To get more than 50% of her maximum possible impact, Ana must hit every single multiplier.

This doesn't hold if the set of multipliers includes 1.5x, for example. Instead we might want to talk about the importance of hitting as many big multipliers as possible, and being willing to spend more effort on these over the smaller (e.g. 1.1x) ones. (But I want to add that I think the post in general is great! Thanks for writing this up!)

---

Well, you know what the stereotype is about women in Silicon Valley high tech companies & their sock needs... (Incidentally, when I wrote a sock-themed essay, which was really not about socks, I was surprised how many strong opinions on sock brands people had, and how expensive socks could be.) If you don't like the example 'buy socks', perhaps one can replace it with real-world examples like spending all one's free time knitting sweaters for penguins. (With the rise of Ravelry and other things, knitting is more popular than it has been in a long time.)

---

I like this post, thanks Thomas! I want to make a comment for maybe newer people especially with some of the uses of the word "EA" here. I'll take an example to illustrate: "People who are not totally dedicated to EA will..."
I actually think this means (or if it doesn't, it should mean) "people who are not totally dedicated to impartially maximizing impact as defined under a plausible moral theory [not the point of this to debate which are plausible] will..." or something like that. In other words, "people who are not totally dedicated to the basic principles of EA". It doesn't (or shouldn't) mean "people who are not totally dedicated to the EA community" or something else that might imply only working at an EA-branded org, only having EA friends, or only working on a cause area that some proportion of EAs think is worthwhile.

The EA community is probably a good way to find multipliers and a useful signal for what is valuable, but it is not the final goal at all and doesn't have all the answers. I could imagine some case in which it makes sense to do something "less EA" (in the sense that fewer people in EA think it's valuable) because it's actually "more EA" (in the sense that it's actually more valuable for maximizing impact). The point of this example isn't to establish how likely this is, just to point out that the final goal is maximizing impact, not EA the community, and that "more EA/less EA" is a bit ambiguous.

This might be totally obvious to most readers of this comment, but I wanted to write it anyway just in case there are people who don't find it obvious (or it isn't at all obvious, or not what Thomas meant).

---

Thanks, I made a minor wording change to clarify.

---

> It's very difficult to take an arbitrary project that you're excited about for other reasons, and tweak it to "make it EA".

I think it also applies here (which, by the way, is one of the most thought-provoking and useful parts of this post).
I think some alternative phrasing like the below actually might make the point even more self-evident: "It's very difficult to take an arbitrary project that you're excited about for other reasons, and tweak it to make it the most maximally impactful project you could be working on."

---

Great post, thanks for writing it! This framing appears a lot in my thinking and it's great to see it written up! I think it's probably healthy to be afraid of missing a big multiplier.

I'd like to slightly push back on this assumption:

> If output scales linearly with work hours, then you can hit 60% of your maximum possible impact with 60% of your work hours

First, I agree with other commenters and yourself that it's important not to overwork / look after your own happiness and wellbeing etc. Having said that, I do think working harder can often have superlinear returns, especially if done right (otherwise it can have sublinear or negative returns).

One way to think about this is that the last year of one's career is often the most impactful in expectation, since one will have built up seniority and experience. Working harder is effectively a way of "pulling that last year forward a bit" and adding another even higher-impact year after it, i.e. a year that is much higher-impact than your average year, hence the superlinearity.

Another way to think about this is intuitively. If Sam Bankman-Fried had only worked 20% as hard, would he have made $4 billion instead of $20 billion? No. He would probably have made much, much less. Speed is rewarded in the economy, and working hard is one way to be fast. This makes the multiplier from working harder bigger than you would intuitively expect, and possibly more important relative to judgment than you suggest. (I'm not saying everyone reading this should work harder. Some should, some shouldn't.)
Edited shortly after posting to add: There's also a more straightforward reason that the claim "judgment is more important than dedication" is technically true but potentially misleading: one way to get better judgment is investing time into researching thorny issues. That seems to be what Holden Karnofsky has been doing for a decent fraction of his career.

---

A key question for whether there are strongly superlinear returns seems to be the speed at which reality moves. For quant trading and crypto exchanges in particular, this effect seems really strong, and FTX's speed is arguably part of why it was so successful. This likely also applies to the early stages of a novel pandemic, or AI crunch time. In other areas (perhaps, research that's mainly useful for long AI timelines), it may apply less strongly.

---

I agree that superlinearity is way more pronounced in some cases than in others. However, I still think there can be some superlinear terms for things that aren't inherently about speed, e.g. climbing seniority levels or getting a good reputation with ever larger groups of people.

---

The examples you give fit my notion of speed: you're trying to make things happen faster than the people with whom you're competing for seniority/reputation. Similarly, speed matters in quant trading not primarily because of real-world influence on the markets, but because you're competing for speed with other traders.

---

Fair, that makes sense! I agree that if it's purely about solving a research problem with long timelines, then linear or decreasing returns seem very reasonable. I would just note that speed-sensitive considerations, in the broad sense you use it, will be relevant to many (most?) people's careers, including researchers to some extent (reputation helps doing research: more funding, better opportunities for collaboration etc). But I definitely agree there are exceptions, and well-established AI safety researchers with long timelines may be in that class.
FWIW I think superlinear returns are plausible even for research problems with long timelines; I'd just guess that the returns are less superlinear, and that it's harder to increase the number of work hours for deep intellectual work. So I quite strongly agree with your original point.

---

Very nice post. It does seem like two of your points are potentially at odds:

> People who are not totally dedicated to EA will make some concession to other selfish or altruistic goals, like having a child, working in academia, living in a specific location, getting fuzzies, etc. If this would make them miss out on a multiplier, their "EA part" should try much harder to avoid this concession, or find a way to still hit the multiplier.

vs.

> Aiming for the minimum of self-care is dangerous.

It seems the "concessions" could fall under the category of self-care.

---

Agree, and I would consider adjusting the first of those passages (the one starting with "people who are not totally dedicated to EA") for such reasons.

---

All of these concessions except working in academia seem pretty unlikely to result in missing a multiplier, unless they result in working on the wrong project somehow. Otherwise they look like efficiency losses, not multiplier losses. In particular, having a child and being tied to a particular location seem especially unlikely to result in loss of a multiplier, at least if you maintain enough savings to still be able to take risks. Pursuing fuzzies is more complicated because it depends how much of your time/money you spend on it, but you could e.g. allocate 10% of your altruism budget to fuzzies and it would only be a 10% loss.
Some ways that these concessions can lose you >50% of your impact:

- Having a child makes simultaneously founding a startup really hard (edit: and can anchor your family to a specific location)
- Working in academia can force you to spend >50% of your effort researching unimportant problems as a grad student, playing politics, writing grants and such; it also has benefits, but your research won't always benefit from them, so in the worst case this eats >50% of your impact
- If you prioritize AI safety, and think most good AI safety research happens at places like Redwood, MIRI, Anthropic, CHAI, etc., living in the CA Bay Area can be 2x better than living anywhere else
- If you prioritize US policy, living in DC can be >2x better than living anywhere else

Allocating 10% of your altruism budget to fuzzies is a good plan, and I'm mostly worried about people trying to get fuzzies in ways that are much more costly for impact. For instance, EA student groups being optimized for being a "thriving community" rather than having a good theory of change, or someone earning-to-give so that they can donate for fuzzies rather than doing direct work that's much more impactful.

---

I know lots of people who are incredibly impactful and are parents and/or work in academia. For many, career choices such as academia are a good route to impact. For many, having children is a core part of leading a good life for them and (to take a very narrow lens) is instrumentally important to their productivity.

So I find those claims false, and find it very odd to describe those choices as "concession[s] to other selfish or altruistic goals". We shouldn't be implying "maximising your impact (and by implication being a good EA) is hard to make compatible with having a kid" - that's a good way to be a tiny, weird and shrinking niche group. I found that bullet point particularly jarring and off-putting (and imagine many others would also), especially as I work in academia and am considering having a child.
This was a shame, as much of the rest of the post was very useful and interesting.

---

Thanks for this comment, I made minor edits to that point clarifying that academia can be good or bad.

First off, I think we should separate concerns of truth from those of offputtingness, and be clear about which is which. With that said, I think "concession to other selfish or altruistic goals" is true to the best of my knowledge. Here's a version of it that I think about, which is still true but probably less offputting, and could have been substituted for that bullet point if I were more careful and less concise:

When your goal is to maximize impact, but parts of you want things other than maximizing impact, you must either remove these parts or make some concession to satisfy them. Usually stamping out a part of yourself is impossible or dangerous, so making some concession is better. Some of these concessions are cheap (from an impact perspective), like donating 2% of your time to a cause you have a personal connection to rather than the most impactful one. Some are expensive in that they remove multipliers and lose >50% of your impact, like changing your career from researching AI safety to working at Netflix because your software engineer friends think AI safety is weird. Which multipliers are cheap vs expensive depends on your situation; living in a particular location can be free if you're a remote researcher for Rethink Priorities but expensive if by far the best career opportunity for you is to work in a particular biosecurity lab. I want to caution people against making an unnecessarily expensive concession, or making a cheap concession much more expensive than necessary. Sometimes this means taking resources away from your non-EA goals, but it does not mean totally ignoring them.

Regarding having a child, I'm not an expert or a parent, but my impression is it's rare for having kids to actually create more impact than not having the desire in the first place.
I vaguely remember Julia Wise having children due to some combination of (a) non-EA goals, and (b) not having kids would make her sad, potentially reducing productivity. In this case, the impact-maximizer would say that (a) is fine/unavoidable (not everyone is totally dedicated to impact) and (b) means that being sad is a more costly concession than not having kids, so having kids is the least costly concession available. Maybe for some, having kids makes life meaningful and gives them something to fight for in the world, which would increase their impact. But I haven't met any such people.

It's possible to have non-impact goals that actually increase your impact. Some examples are being truth-seeking, increasing your status in the EA community, or not wanting to let down your EA friends/colleagues. But I have two concerns with putting too much emphasis on this. First, optimizing too hard for this other goal has Goodhart concerns: there are selfish rationalists, EAs who add to an echo chamber, and people who stay on projects that aren't maximally impactful. Second, the idea that we can directly optimize for impact is a core EA intuition, and focusing on noncentral cases of other goals increasing impact might distract from this. I think it's better to realize that most of us are not pure impact-maximizers, we must make concessions to other goals, and that which concessions we make is extremely important to our impact.

---

> I know lots of people who are incredibly impactful and are parents and/or work in academia

This doesn't seem like much evidence one way or the other unless you can directly observe or infer the counterfactual. If you take OP at face value, you're traversing at least 6-7 OOMs within choices that can be made by the same individual, so it seems very plausible that someone can be observed to be extremely impactful on an absolute scale while still operating at only 10% of their personal best, or less.
(Also there is variance in impact across people for hard-to-control reasons, for example intelligence or nationality.)

---

If you prioritize US policy, being a permanent resident of a state and living in DC temporarily makes sense. But living permanently in DC forecloses an entire path through which you could have impact, i.e. getting elected to federal office. Maybe that's the right choice if you are a much, much better fit for appointed jobs than elected ones, or if you have a particularly high-impact appointed job where you know you can accomplish more than you could in Congress. But on net I would expect being a permanent resident of DC to reduce most people's policy impact (as does being unwilling to move to DC when called upon to do so).

---

This is great - thanks for writing it. Sam Bankman-Fried and Rob Wiblin discuss this general idea on the 80,000 Hours podcast:

> **Rob Wiblin:** What do people commonly get wrong about why you ended up having so much success in this area?
>
> **Sam Bankman-Fried:** I think for a lot of people, they just don’t have a model for how it happened. It’s just sort of this weird property of the world; it’s a little bit inexplicable. I don’t know, it happens sometimes: you look at someone and they have incredible success, and you’re like, “Huh. That person is really successful.” It’s sort of like when people think about why was Elon Musk so successful, or why is Jeff Bezos so successful? Most people don’t really have an answer for that, because they don’t even see it so much as a question they’re asking. It just is this weird property of the world, that they were.
>
> **Sam Bankman-Fried:** But my felt sense — from having been through a lot of it — the first thing is that, to the extent there are multiplicative factors in what’s going on (and I do think there are) that your ultimate “how well you do” is a product of a lot of different things.
> One thing that implies is that, if it’s a product of four different things, then in order to get anywhere near the peak, you need to do well sort of at all of them. You need to be pretty good at all of them. It’s a high bar.
>
> **Rob Wiblin:** Yeah.
>
> **Sam Bankman-Fried:** You can’t skip leg day, so to speak.
>
> **Rob Wiblin:** What does that mean?
>
> **Sam Bankman-Fried:** You can’t be like, “I’m going to be really good at some set of things and just ignore the others” — you just lose that multiplicative aspect of it. Obviously, some things are additive, and you can sort of ignore those.
>
> **Sam Bankman-Fried:** So we had to be good on a number of different realms. We had to be really ambitious. That was an important part of it. It was just so, so, so easy for us to fail to accomplish what we did, if we just decided our goal was a lot lower. Or in a lot of ways, just getting lazy when we started doing well and being like, “Ah, we’ve done well. No point trying anymore.”
>
> **Sam Bankman-Fried:** But also, just a lot of strategic decisions, where it’s like, “Are we willing to take any risk in our trading?” If the answer is no, it’s going to really limit the amount of trading we can do, but it is a safer thing to do. That’s an example of a question that we had to face and make decisions about. Another part of this was just aiming high and remembering that — not so much aiming high, but aiming to maximize expected value, is really what I’d say.
>
> **Rob Wiblin:** If I remember, it seemed like in those early days, you were often doing things that created some risk of going bust, but offered the potential of making manyfold more money. That was kind of your modus operandi.
>
> **Sam Bankman-Fried:** Yeah. I think the way I saw it was like, “Let’s maximize EV: whatever is the highest net expected value thing is what we should do.” As opposed to some super sublinear utility function, which is like, make sure that you continue on a moderately good path above all else, and then anything beyond that is gravy.
> **Sam Bankman-Fried:** I do think those are probably the right choices, but they were scary. I think even more so than some chance of going bust, what they sort of entailed was that we had to have a lot of faith in ourselves almost — that they really would have had a significant chance of going bust if we didn’t play our cards exactly right. There were a lot of things that were balanced on a knife’s edge. Any amount of sloppiness would have been pretty bad. I also think it was a little bit of a thing of, could we play this really well?
>
> **Rob Wiblin:** Just to back up and talk about the multiplicative model of entrepreneurship or productivity that you were talking about, this is the idea that your output is determined by multiplying together a whole bunch of different factors — like how good you are at all these different sub-skills of the thing that you’re trying to do. Which produces quite different results than what you get if you’re just adding together your skill in a bunch of different areas.
>
> **Sam Bankman-Fried:** Yeah.
>
> **Rob Wiblin:** Basically it means that you could be sabotaged by being extremely weak in any one area: if any of the things you’re multiplying together is zero or close to zero, then the whole project produces no output.
>
> **Sam Bankman-Fried:** Yep.
>
> **Rob Wiblin:** Do you want to elaborate on it a little bit more?
>
> **Sam Bankman-Fried:** Yeah. I think it’s an important and a weird point. It’s not an absolute point. I don’t want to claim that in all cases, this is the right way to think about things or anything like that. What I’d say instead is something like, you should try and understand in which ways something is multiplicative — in which ways it is the case that, were that factor set really low, you’d be basically fucked. As opposed to, that’s just another factor among many.
>
> **Sam Bankman-Fried:** What are some of those? One example of this, which I learned early on, is management.
> If you’re trying to scale something up big, and you’re very good at the object-level task but bad at managing people, and no one on the leadership team is good at managing people, it just becomes a mess. It almost doesn’t matter how good you are at the original thing — you’re not going to become great as a company. It’s really hard to substitute for that. It’s amazing how quickly things can go south, if organizational shit is not in a good state.
>
> **Sam Bankman-Fried:** That was one example of a case where I originally didn’t particularly think of it as multiplicative, but I do think it was. And I learned that lesson eventually, that you can’t forget about that. I think there are a lot of other things like that that came up.
>
> **Rob Wiblin:** Yeah. It’s a good example of the multiplicative effect. I suppose the multiplicative model is just kind of a model that can be helpful and is partially true and partially not true.
>
> **Sam Bankman-Fried:** Yeah.
>
> **Rob Wiblin:** But people have pointed out that founders falling out, or the original team growing a project coming to hate one another, is one of the main ways that a project fails. It’s a great example of how it kind of doesn’t matter how good a prototype they build or how good their accounting system was or their ops was — if the people working the project just end up despising one another, then it’s all for naught, basically.
>
> **Sam Bankman-Fried:** Yeah. I think that’s basically right.
>
> **Rob Wiblin:** I suppose there’s a few other things like that. And similarly, if they get on really well, but they’re terrible at designing a product, such that they’re never going to actually appeal to customers, then the whole thing is for naught again.
>
> **Sam Bankman-Fried:** Yeah.
>
> **Rob Wiblin:** It suggests that you kind of want an all-rounder or an all-rounder company or an all-rounder CEO. Well, at least that that’s better than someone who’s exceptional in one area and really weak in another. Do you think that’s a reasonable conclusion to draw?
> **Sam Bankman-Fried:** Yeah, with some caveats. I think it’s mostly right, but you have to be careful if you think about it that way. Again, I do think this is a reasonable way to think about it, in many senses, but you have to be careful that you don’t overdo it. And in particular, so OK, you go for the all-rounder approach. You don’t want to be left with a generic pile of mush, right?
>
> **Sam Bankman-Fried:** Part of this is again saying, in order to reach an extremely good outcome, you actually need a lot of things going very well. So some of this is sort of like, if you’re not in that case, you just are not going to end up in the extremely good outcome. That’s sort of how it is. It’s sort of sad, but true. I think part of this is as much saying that as anything else.
>
> **Rob Wiblin:** Yeah. I guess a modified version is that hopefully, the whole reason you’ve chosen to go into entrepreneurship on project X is that you’re amazing at some aspect of that thing, because you had discretion over what you were going to go into. So, why not choose something where at least you’re extremely knowledgeable about the product or whatever. And then, having gotten a really high value for that, on the rest of the stuff you want to do well enough that it doesn’t sabotage the project.
>
> **Sam Bankman-Fried:** Yeah. I think something like that. There are ways that you can try and cover for some of your flaws. There are things you can do to make it such that they matter less than they otherwise would. You can be a little bit strategic about that.
>
> **Sam Bankman-Fried:** Now, it’s always sad when you’re in covering-your-ass mode, so to speak. That’s not where you would ideally want to be coming from.
> But some examples of that, that I do think can be helpful: one thing that you can do is, if you choose an area where you are the first mover by a lot — like a consumer-facing business, and where your depth of product knowledge is not very good, you can build an OK product, and you’re good at corporate strategy and shit — that can potentially work.
>
> **Sam Bankman-Fried:** Because you might end up in a position where just the brand value of having been first is worth so much, that even if your product isn’t the best eventually, if it’s the best in an open area where there are no competitors, that might be enough to build up a pretty big head start. Obviously it’s better and worth a ton if you can also be great at product there, but that is an avenue you can try and play.

---

Great post! My impression is that this is broadly right, and sometimes underappreciated. (Though I'm not sure about your quantitative bottom line for the reasons Darius mentions.)

I think this also has implications for the allocation of resources at a community level, because impact often is not only the product of decisions that are under a single person's control but also of environmental factors – e.g., the number of potential supporters (employees, funders, ...), the risk of a mental health crisis, and the number of valuable ideas one encounters in conversation all range over several orders of magnitude depending on one's circumstances, and their value interacts with the other factors (if your charity implements an ineffective intervention, it doesn't matter if you meet lots of people who give you productivity advice or who are willing to work for you, etc.).
So the upshot is not just that as individuals we need to make the right call on lots of decisions if we want to maximize impact; it's also that we need to structure the community in such a way that we 'match' different 'factors of production' optimally with each other – the right people need to find each other, along with the right ideas, funding, advice, an environment allowing for peak and sustainable motivation, etc. – because we'll only get the impact 'super hits' in cases where all input factors are set to near-maximal levels. (I made similar points here.)

One can't stack the farmed animal welfare multiplier on top of the one about giving malaria nets or the one about focusing on developing countries, right? E.g., you can't give chickens malaria nets. It seems like that one requires 'starting from scratch' in some sense. There might be analogies to the human case (e.g., don't focus on your pampered pets), but they still need to be argued. So I think the final number should be lower. (It's still quite high, of course!)

The way I framed it was unclear, but the final number is correct, because I was comparing the QALYs/$ of farmed animal interventions to those of malaria nets. See the footnote:

> Assumes 40 chicken QALYs/$, 1 human QALY/$100, and that 400 chicken QALY = 1 human QALY due to neuron differences. Ana's moral circle includes all beings weighted by neuron count, but she hadn't thought about this enough.

I was directly comparing the following rough estimates:

• 40 chicken QALY/$ generated by broiler and cage-free interventions (Rethink Priorities has a mean of 41)
• 0.01 human QALY/$ generated by malaria nets from AMF based on GiveWell data (life expectancy ~60 years divided by $6000/life saved)
• 400 chicken QALY ~= 1 human QALY if we weight by neurons. Humans have about 86 billion neurons; the red junglefowl (ancestor of chickens) has 221 million, a ratio of 389.

40 / (0.01 * 400) gives you a multiplier of 10.

> Ana is a hypothetical junior software engineer in Silicon Valley making $150k/year. Every year, she spends 10% of her income to anonymously buy socks for her colleagues.

Those are some expensive socks! (or Ana has a lot of colleagues!)
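The footnote's arithmetic a few comments up can be replayed directly. This is just a restatement of the quoted estimates (40 chicken QALY/$, 0.01 human QALY/$, and the ~400:1 neuron weighting), not independent data:

```python
# Estimates quoted in the footnote (assumptions from the post, not measured here)
chicken_qaly_per_dollar = 40     # broiler and cage-free interventions
human_qaly_per_dollar = 0.01     # malaria nets: ~60 QALY per ~$6000/life saved

neuron_ratio = 86e9 / 221e6      # human vs. red junglefowl neurons, ~389
weight = 400                     # the post rounds the neuron ratio to 400

# Convert chicken QALYs into human-QALY-equivalents per dollar, then compare
human_equivalent_per_dollar = chicken_qaly_per_dollar / weight  # 0.1
multiplier = human_equivalent_per_dollar / human_qaly_per_dollar
print(round(multiplier))  # 10
```

The 10x figure is thus a ratio of human-QALY-equivalents per dollar, which is why it stacks on top of the malaria-net multiplier rather than replacing it.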

I enjoyed this comment

Nice post!

One possibly obvious implication that I think is missing: when processes are multiplicative rather than additive, it is much more important to avoid zeros at some point in the process. There's an analogy here with (e.g.) the O-ring theory of economic development.
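The "avoid zeros" point can be made concrete with the post's own numbers (a sketch; the factor list is the eight multipliers enumerated at the top of the post):

```python
from functools import reduce
from operator import mul

# The eight multipliers from the post, applied to Ana's baseline impact
multipliers = [40, 10, 2, 8, 10, 4, 8, 10]
print(reduce(mul, multipliers, 1))  # 20480000, the post's total

# In a multiplicative process, a single zero factor destroys everything...
zeroed = multipliers.copy()
zeroed[3] = 0                       # e.g. a completely ineffective intervention
print(reduce(mul, zeroed, 1))       # 0

# ...whereas in an additive process a zero only costs that one term
print(sum(zeroed))                  # 84, down from 92
```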


I feel a bit confused by the concept of it being crucial to "stack a bunch of multipliers."

Like how would you describe this story:

A student is planning a career working on US economic policy (because this is something he's had an interest in for a little while). Then, he is exposed to some ideas in longtermism and decides to work on AI policy instead (because he thinks it seems like a big deal and is probably more important than economic policy).

This feels to me like it's just "one step" or "one multiplier" that puts this student on a much higher EV career path.

Ambition is in some sense what longtermism is all about: it would be silly to claim that the value of the future is orders of magnitude larger than most things in the present and not aim for that value.

Have longtermists abandoned the ITC framework? It's all about importance, with no attention to tractability or crowdedness?

The ITC framework is correct. I meant to say that for longtermist interventions, importance tops out way higher than for neartermist interventions.

But it doesn't follow that because importance is very high, you should "aim for" longtermist interventions. You still have to account for tractability and crowdedness.
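Because the ITC factors multiply, enormous importance alone doesn't settle a comparison. A toy sketch, where every number is made up purely for illustration:

```python
# Stylized product form of the importance/tractability/crowdedness framework.
# All values below are hypothetical, chosen only to show that the factors interact.
def expected_impact(importance, tractability, crowdedness_discount):
    return importance * tractability * crowdedness_discount

huge_but_intractable = expected_impact(1e6, 1e-5, 0.5)
modest_but_tractable = expected_impact(1e3, 1e-1, 0.8)
print(huge_but_intractable < modest_but_tractable)  # True
```

Under these made-up numbers, the vastly "more important" option still loses: a thousandfold edge in importance is erased by a ten-thousandfold disadvantage in tractability.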