Confidence: Unlikely
Longtermists sometimes argue that some causes matter extraordinarily more than others—not just thousands of times more, but 10^30 or 10^40 times more. The reasoning goes: if civilization has astronomically large potential, then apparently small actions could have compounding flow-through effects, ultimately affecting massive numbers of people in the long-run future. And the best action might do far more expected good than the second-best.
I'm not convinced that causes differ astronomically in cost-effectiveness. But if they do, what does that imply about how altruists should choose their careers?
Suppose I believe cause A is the best, and it's astronomically better than any other cause. But I have some special skills that make me extremely well-suited to work on cause B. If I work directly on cause B, I can do as much good as a $100 million per year donation to the cause. Or instead, maybe I could get a minimum-wage job and donate $100 per year to cause A. If A is more than a million times better than B, then I should take the minimum-wage job, because the $100 I donate will do more good.
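To make the arithmetic explicit, here is a minimal sketch of the comparison. The dollar figures are the hypothetical numbers from this example; measuring impact in "dollars donated to cause B" equivalents is just a convenient normalization:

```python
# Hypothetical numbers from the example above. Measure all impact in
# "dollars donated to cause B" equivalents.
direct_work_on_B = 100_000_000  # my direct work on B ~ a $100M/year donation to B
donation_to_A = 100             # what I could donate to A on a minimum-wage salary

# r = how many times more cost-effective cause A is than cause B, per dollar.
for r in [1_000, 1_000_000, 1_000_000_000]:
    good_from_donation = donation_to_A * r  # value of the donation, in B-dollar terms
    better = "donate to A" if good_from_donation > direct_work_on_B else "work on B"
    print(f"r = {r:>13,}: {better}")

# Breakeven is r = 100_000_000 / 100 = 10^6: the donation only wins if A is
# *more* than a million times as cost-effective as B, as stated above.
```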
This is an extreme example. Realistically, there are probably many career paths that can help the top cause. I expect I can find a job supporting cause A that fits my skill set. It might not be the best job, but it's probably not astronomically worse, either. If so, I can do much more good by working that job than by donating $100 per year.
But I might not be able to find an appropriate job in the top cause area. As a concrete example, suppose AI safety matters astronomically more than global priorities research. If I'm a top-tier moral philosopher, I could probably make a lot of progress on prioritization research. But I could have a bigger impact by earning to give and donating to AI safety. Even if the stereotypes are true and my philosophy degree doesn't let me get a well-paying job, I can still do more good by making a meager donation to AI alignment research than by working directly on a cause where my skills are relevant. Perhaps I can find a job supporting AI safety where I can use my expertise, but perhaps not.
(This is just an example. I don't think global priorities research is astronomically worse than AI safety.)
This argument requires that causes differ astronomically in relative cost-effectiveness, not just in absolute terms. If cause A does astronomically more total good than cause B, but cause B is still 50% as cost-effective per dollar, then it makes sense for me to take a job in cause B if I can be at least twice as productive there.
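A hedged way to formalize this tradeoff (my framing, not language from the post): let p be how many times more productive I am in cause B than in the best job I could get in cause A, and let e be cause B's cost-effectiveness as a fraction of cause A's. Direct work in B wins exactly when p × e > 1:

```python
def direct_work_in_B_wins(productivity_ratio: float, relative_effectiveness: float) -> bool:
    """True when (my productivity in B relative to A) times (B's cost-effectiveness
    as a fraction of A's) exceeds 1, i.e. when personal fit outweighs the cause gap."""
    return productivity_ratio * relative_effectiveness > 1

print(direct_work_in_B_wins(2.0, 0.5))    # False: 2 x 0.5 = 1, exactly a tie
print(direct_work_in_B_wins(3.0, 0.5))    # True: modest cause gap, strong personal fit
print(direct_work_in_B_wins(1e3, 1e-6))   # False: an astronomical gap swamps any realistic fit
```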
I suspect that causes don't differ astronomically in cost-effectiveness. Therefore, people should pay attention to personal fit when choosing an altruistic career, and not just the importance of the cause.
If you replace all four instances of "cost-effectiveness" in this post (including the title) with "expected cost-effectiveness" then I agree with it.
But if you literally mean the objective cost-effectiveness, then I think I disagree with the claim made in the post's title.
(Note: The rest of this long comment is merely explaining why I think I disagree in that case.)
I think in that case I disagree with the claim because enough subjective uncertainty about which cause (A or B) is the one that is astronomically more cost-effective than the other could shrink the expected cost-effectiveness gap enough that personal career fit matters again. For example, in the extreme case in which you are exactly 50% confident that cause A is astronomically more cost-effective than cause B, and 50% confident that cause B is astronomically more cost-effective than cause A, the two causes have the same expected cost-effectiveness (assuming the "astronomical" magnitude is the same for both). In this scenario, you should probably work on the cause where you have better personal fit, and only opt for the earning-to-give path if you can make enough money to employ people to do more work on some combination of the two causes than you'd be able to do yourself.
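As a quick check of that extreme case, here is a sketch assuming "astronomical" means a factor of 10^6 (an arbitrary stand-in), normalizing the worse cause to a cost-effectiveness of 1 in each possible world:

```python
# 50/50 credence over which cause is astronomically better.
p_A_better = 0.5
ratio = 1e6  # stand-in for "astronomically more cost-effective"

# In the world where A is better, A = 10^6 and B = 1; in the other world,
# the roles are reversed. Expected cost-effectiveness of each cause:
expected_A = p_A_better * ratio + (1 - p_A_better) * 1
expected_B = (1 - p_A_better) * ratio + p_A_better * 1
print(expected_A, expected_B)  # 500000.5 500000.5 -- exactly equal in expectation
```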
Even if your credences are not that evenly split (such that the expected value calculation still says one cause is astronomically more cost-effective than the other), I think there may still be reasons to take personal fit into account when deciding where to work or give. For example, suppose you are 55% confident that cause A is one million times as cost-effective as cause B, and 45% confident that cause B is a million times as cost-effective as cause A, such that the expected ratio of A's cost-effectiveness to B's is still several hundred thousand. In this case, if your 55% credence is not robust or is unstable in some sense, then rather than make an all-in bet on that 55% by ignoring your strong personal fit for a career in cause B and instead donating a small amount to cause A, it may be more rational to take the job in cause B.

If we imagine that you're part of a community that includes people who are strongly suited to working on cause A but are 55% confident that cause B is astronomically more cost-effective, then surely we'd want the two of you to cooperate and trade: each of you does direct work on the other person's preferred cause (where you each have strong personal fit), rather than both earning to give small amounts of money for the cause you each think is more cost-effective. By coordinating so that each of you works on the other person's preferred cause, you'll both think the world is better off than if you had both earned to give instead. In practice, trying to coordinate these career trades may often be impractical. But even if we can't identify someone who takes the opposite view and has the complementary skill set, I think we should still consider that such people may exist (I'd argue they probably do in many cases), and that we should therefore perhaps cooperate in this hypothetical prisoner's dilemma by working on the cause where we have strong personal fit, rather than defect by earning to give just a small amount in favor of the cause we slightly prefer.
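Running the same toy model with the 55/45 split (again treating 10^6 as the stand-in magnitude, which is my assumption rather than the commenter's) shows how sensitive the answer is to how you formalize "astronomically better in expectation":

```python
p_A_better = 0.55
ratio = 1e6

# Expectation of the cost-effectiveness *ratio* of A to B:
expected_ratio = p_A_better * ratio + (1 - p_A_better) / ratio
print(expected_ratio)  # ~550,000 -- still astronomical

# But the ratio of *expected* cost-effectiveness levels (same normalization
# as the 50/50 case above) is far more modest:
expected_A = p_A_better * ratio + (1 - p_A_better) * 1
expected_B = (1 - p_A_better) * ratio + p_A_better * 1
print(expected_A / expected_B)  # ~1.22
```

Either way, the moral trade argument above doesn't hinge on the exact figure.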
Okay, so I just realized that above I said "if your 55% credence is not robust or is unstable in some sense" and then made a completely different argument (a moral trade argument) for why personal fit in career choice may still matter even if one expects that causes differ astronomically in cost-effectiveness. I think there may be another argument for the same conclusion that applies specifically when one's estimate is unstable or not robust, though I'm not sure what that argument is; I just have an intuition.
Yeah, I was gonna say something similar.
Specifically, I wonder whether any longtermists (or any prominent ones) actually do argue that in expectation "some causes matter extraordinarily more than others—not just thousands of times more, but 10^30 or 10^40 times more". They may instead argue that that may be true in reality, but not in expectation, due to our vast uncertainty about which causes would be the most valuable ones. (This seems to be Michael's own position, given his final paragraph, and I think it's roughly what Tomasik argues in the link ...)