# 23

Confidence: Unlikely

Longtermists sometimes argue that some causes matter extraordinarily more than others—not just thousands of times more, but 10^30 or 10^40 times more. The reasoning goes: if civilization has astronomically large potential, then apparently small actions could have compounding flow-through effects, ultimately affecting massive numbers of people in the long-run future. And the best action might do far more expected good than the second-best.

I'm not convinced that causes differ astronomically in cost-effectiveness. But if they do, what does that imply about how altruists should choose their careers?

Suppose I believe cause A is the best, and it's astronomically better than any other cause. But I have some special skills that make me extremely well-suited to work on cause B. If I work directly on cause B, I can do as much good as a $100 million per year donation to the cause. Or instead, maybe I could get a minimum-wage job and donate $100 per year to cause A. If A is more than a million times better than B, then I should take the minimum-wage job, because the $100 I donate will do more good.

This is an extreme example. Realistically, there are probably many career paths that can help the top cause. I expect I can find a job supporting cause A that fits my skill set. It might not be the best job, but it's probably not astronomically worse, either. If so, I can do much more good by working that job than by donating $100 per year.
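The comparison above is just arithmetic, and can be sketched in a few lines. This is a toy illustration, not an estimate: the 10^7 ratio stands in for "more than a million times better," and all values are measured in cause-B units.

```python
# Toy version of the comparison above, with everything in "cause-B units":
# direct work on B is worth a $100M/yr donation to B, while a $100/yr
# donation to A gets multiplied by A's (hypothetical) cost-effectiveness ratio.
ratio_a_over_b = 1e7      # assume cause A is 10 million times as cost-effective
direct_work_on_b = 100e6  # $100M/yr equivalent of direct work on cause B
donation_to_a = 100 * ratio_a_over_b  # $100/yr donated to cause A, in B-units

# Under an astronomical ratio, the tiny donation beats the direct work.
assert donation_to_a > direct_work_on_b
```

The point is just that once the ratio exceeds the productivity gap (here, a million-fold donation gap), the cause ratio dominates any plausible difference in personal fit.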

But I might not be able to find an appropriate job in the top cause area. As a concrete example, suppose AI safety matters astronomically more than global priorities research. If I'm a top-tier moral philosopher, I could probably make a lot of progress on prioritization research. But I could have a bigger impact by earning to give and donating to AI safety. Even if the stereotypes are true and my philosophy degree doesn't let me get a well-paying job, I can still do more good by making a meager donation to AI alignment research than by working directly on a cause where my skills are relevant. Perhaps I can find a job supporting AI safety where I can use my expertise, but perhaps not.

(This is just an example. I don't think global priorities research is astronomically worse than AI safety.)

This argument requires that causes differ astronomically in relative cost-effectiveness. If cause A is astronomically better than cause B in absolute terms, but cause B is 50% as good in relative terms, then it makes sense for me to take a job in cause B if I can be at least twice as productive.

I suspect that causes don't differ astronomically in cost-effectiveness. Therefore, people should pay attention to personal fit when choosing an altruistic career, and not just the importance of the cause.

# Comments

> Longtermists sometimes argue that some causes matter extraordinarily more than others—not just thousands of times more, but 10^30 or 10^40 times more.

I don't think any major EA or longtermist institution believes this about expected impact for 10^30 differences. There are too many spillovers for that; e.g., if doubling the world economy of $100 trillion/yr would modestly shift x-risk or the fate of wild animals, then interventions that affect economic activity have to have an expected absolute value of impact much greater than 10^-30 times that of the interventions with the highest expected impact.

> This argument requires that causes differ astronomically in relative cost-effectiveness. If cause A is astronomically better than cause B in absolute terms, but cause B is 50% as good in relative terms, then it makes sense for me to take a job in cause B if I can be at least twice as productive.

> I suspect that causes don't differ astronomically in cost-effectiveness. Therefore, people should pay attention to personal fit when choosing an altruistic career, and not just the importance of the cause.

The premises and conclusion don't seem to match here. A difference of 10^30x is crazy, but rejecting that doesn't mean you don't have huge practical differences in impact like 100x or 1000x. Those would be plenty to come close to maxing out the possible effect of differences between causes (since if you're 1000x as good at rich-country homelessness relief as at preventing pandemics, then if nothing else your fame from rich-country poverty relief would be a powerful resource to help out in other areas, like public endorsements of good anti-pandemic efforts).

The argument seems sort of like "some people say if you go into careers like quant trading you'll make 10^30 dollars and can spend over a million dollars to help each animal with a nervous system. But actually you can't make that much money even as a quant trader, so people should pay attention to fit with different careers in the world when trying to make money, since you can make more money in a field with half the compensation per unit productivity if you are twice as productive there." The range for realistic large differences in compensation between fields (e.g. fast food cashier vs quant trading) is missing from the discussion.

You define astronomical differences at the start as 'not just thousands of times more', but the range up to thousands of times more is where all the action is.

My main objection to this post is that personal fit still seems really important when choosing what to do within a cause. I think that one of EA's main insights is "if you do explicit estimates of impact, you can find really big differences in effectiveness between cause areas, and these differences normally swamp personal fit"; that's basically what you're saying here, and it's totally correct IMO. But I think it's a mistake to apply the same style of reasoning within causes, because the effectiveness of different jobs within a cause is much more similar, and so personal fit ends up dominating the estimate of which one will be better.

In addition to the issues raised by other commentators I would worry that someone trying to work on something they're a bad fit for can easily be harmful.

That especially goes for things related to existential risk.

And in addition to the obvious mechanisms, having most of the people in a field be ill-suited to what they're doing but persisting for 'astronomical waste' reasons will mean most participants struggle to make progress, get demoralized, and repel others from joining them.

My gut reaction was to be surprised that there are whole fields or causes in which some people not only aren't a good fit for the most important roles there but that they just can't use their skill set in a constructive way in which they would feel that they are making some contribution.

But on second thought, we are talking about extremely small fields with limited resources. This means it would be financially difficult for people whose skills don't match the field's top needs.

Then again, the field might grow and people can upskill quite a bit if they are willing to wait a decade or two before working directly on their favorite x-risk.

If P, then Q.

where P = "Causes Differ Astronomically in Cost-Effectiveness" and Q = "Personal Fit In Career Choice Is Unimportant."

You've tagged your post "Unlikely." It's not clear to me from the confidence tag whether you mean to imply that you think Q is unlikely by itself, or that the implication "if P, then Q" is unlikely. From context, I think it's the former, but the latter seems like a reasonable reading as well.

I thought Michael meant for the tag to mean that the premise P is unlikely by itself, not "if P, then Q" or Q by itself.

Ok so we have 3 different interpretations of what the confidence tag means for a very simple syllogism.

If you replace all four instances of "cost-effectiveness" in this post (including the title) with "expected cost-effectiveness" then I agree with it.

But if you literally mean the objective cost-effectiveness, then I think I disagree with the claim made in the post's title.

(Note: The rest of this long comment is merely explaining why I think I disagree in that case.)

I think in that case I disagree with the claim because it seems like enough subjective uncertainty about which cause (A or B) is the one that is astronomically more cost-effective than the other could reduce the expected cost-effectiveness to a small enough number that personal career fit may matter again. For example, in the extreme case in which you are exactly 50% confident that cause A is astronomically more cost-effective than cause B, and 50% confident that cause B is astronomically more cost-effective than A, then cause A and cause B have the same expected cost-effectiveness (assuming that the "astronomical" is the same magnitude for both). In this scenario, you should probably work on the cause where you have better personal fit, and only opt for the earning-to-give path if you can make enough money to employ people to do more work on some combination of the two causes than you'd be able to do yourself.

Even if your subjective uncertainty is not that extreme (such that the expected value calculation says that one cause is still astronomically more cost-effective than the other in expectation), I think there may still be reasons to take personal fit into account when deciding where to work or give. For example, suppose that you are 55% confident that cause A is one million times as cost-effective as cause B and 45% confident that cause B is a million times as cost-effective as cause A, such that your average expectation is that cause A is 100,000 times as cost-effective as cause B. In this case, if your 55% credence is not robust or is unstable in some sense, then rather than make an all-in bet on that 55% by ignoring your strong personal fit for a career in cause B and opting instead to donate a small amount to cause A, it may instead be more rational to take the job in cause B.

If we imagine that you're part of a community where there are other people who are strongly suited to working on cause A who are 55% confident that cause B is astronomically-more-cost-effective, then surely we'd want the two of you to cooperate and engage in a trade where you both do direct work for the other person's preferred cause (where you both have strong personal fit), rather than have both of you earn-to-give small amounts of money for the cause that you each think is more cost-effective. By coordinating so each of you works on the other person's preferred cause, you'll both think the world is better off than if you both earned-to-give instead. Now in practice I think trying to coordinate these career trades may often be too impractical. But in the absence of being able to identify someone who takes the other view and has the other skill set, I think we should still consider that these complementary people may exist (I'd argue they probably do in many cases), and that we should therefore cooperate in this hypothetical prisoner's dilemma by working on the cause where we have strong personal fit, rather than defect by earning to give just a small amount in favor of the cause that we slightly prefer.
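The 50/50 and 55/45 examples above can be made concrete with a short calculation. This is a sketch using the hypothetical credences and 10^6 factor from the comment, not real estimates; note also that how large the remaining gap looks depends on the aggregation you choose (this version compares expected values after normalizing the "losing" cause to 1 in each scenario, rather than taking the expected ratio).

```python
def expected_effectiveness(credence_a_wins, factor):
    """Expected cost-effectiveness of causes A and B across two scenarios,
    normalizing the 'losing' cause to 1 in each scenario."""
    p = credence_a_wins
    ev_a = p * factor + (1 - p) * 1.0
    ev_b = p * 1.0 + (1 - p) * factor
    return ev_a, ev_b

# 50/50 case: the astronomical difference washes out entirely in
# expectation, so personal fit dominates.
ev_a, ev_b = expected_effectiveness(0.50, 1e6)
assert ev_a == ev_b

# 55/45 case: A is better in expectation, but under this normalization
# only modestly so, nowhere near the underlying 1e6 factor.
ev_a, ev_b = expected_effectiveness(0.55, 1e6)
print(round(ev_a / ev_b, 2))  # ~1.22
```

The sensitivity to normalization is itself part of the commenter's point: a small shift in credence can flip or drastically shrink the expected gap, which is exactly when robustness and personal fit start to matter again.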

Okay, so I just realized that above I said "if your 55% credence is not robust or is unstable in some sense" and then made a completely different argument (a moral trade argument) for why personal fit in career choice may still be important even if one expects that causes differ astronomically in cost-effectiveness. I think there may be another argument one could make for the same conclusion if one's estimate is unstable or not robust, though I'm not sure what that argument is—I just have an intuition.

Yeah, I was gonna say something similar.

Specifically, I wonder whether any longtermists (or any prominent ones) actually do argue that in expectation "some causes matter extraordinarily more than others—not just thousands of times more, but 10^30 or 10^40 times more". They may instead argue that that may be true in reality, but not in expectation, due to our vast uncertainty about which causes would be the most valuable ones. (This seems to be Michael's own position, given his final paragraph, and I think it's roughly what Tomasik argues in the link provided there.)

And aside from the reasons you mentioned, an additional reason for not going all-in on one's best guess when so very uncertain is that there may be a lot of information value in exploring alternatives, if:

• you'd be less good at exploring your best guess than at exploring something else that's plausibly similarly/more pressing, due to personal fit (e.g., you'd be much more suited to gaining insights in one area than the other)
• your best guess has already been explored more than something else that's plausibly similarly/more pressing (e.g., AI safety vs permanent totalitarianism), such that your/our credences about the latter are less robust

Your findings could then inform your future efforts, or future efforts by others.

On moral trade/coordination, these posts are also relevant: