Confidence: Unlikely
Longtermists sometimes argue that some causes matter extraordinarily more than others—not just thousands of times more, but 10^30 or 10^40 times more. The reasoning goes: if civilization has astronomically large potential, then apparently small actions could have compounding flow-through effects, ultimately affecting massive numbers of people in the long-run future. And the best action might do far more expected good than the second-best.
I'm not convinced that causes differ astronomically in cost-effectiveness. But if they do, what does that imply about how altruists should choose their careers?
Suppose I believe cause A is the best, and it's astronomically better than any other cause. But I have some special skills that make me extremely well-suited to work on cause B. If I work directly on cause B, I can do as much good as a $100 million per year donation to the cause. Or instead, maybe I could get a minimum-wage job and donate $100 per year to cause A. If A is more than a million times better than B, then I should take the minimum-wage job, because the $100 I donate will do more good.
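The arithmetic in this example can be sketched in a few lines. All the numbers are the illustrative ones from the paragraph above (with cause A assumed, for concreteness, to be two million times better than cause B, so the "more than a million times" condition holds):

```python
# Toy comparison of the two career options above.
# All figures are illustrative assumptions, not real estimates.

value_per_dollar_B = 1.0   # baseline: cause B's value per dollar
value_per_dollar_A = 2e6   # assume cause A is 2 million times better (> 1e6)

direct_work_on_B = 100e6 * value_per_dollar_B  # work worth $100M/yr to cause B
donation_to_A = 100 * value_per_dollar_A       # $100/yr donated to cause A

# With a ratio above one million, the tiny donation to A wins:
assert donation_to_A > direct_work_on_B
print(donation_to_A / direct_work_on_B)  # prints 2.0
```

The break-even multiplier is exactly $100M / $100 = 10^6, which is why the conclusion flips only if A is *more* than a million times better.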
This is an extreme example. Realistically, there are probably many career paths that can help the top cause. I expect I can find a job supporting cause A that fits my skill set. It might not be the best job, but it's probably not astronomically worse, either. If so, I can do much more good by working that job than by donating $100 per year.
But I might not be able to find an appropriate job in the top cause area. As a concrete example, suppose AI safety matters astronomically more than global priorities research. If I'm a top-tier moral philosopher, I could probably make a lot of progress on prioritization research. But I could have a bigger impact by earning to give and donating to AI safety. Even if the stereotypes are true and my philosophy degree doesn't let me get a well-paying job, I can still do more good by making a meager donation to AI alignment research than by working directly on a cause where my skills are relevant. Perhaps I can find a job supporting AI safety where I can use my expertise, but perhaps not.
(This is just an example. I don't think global priorities research is astronomically worse than AI safety.)
This argument requires that causes differ astronomically in *relative* cost-effectiveness, not just in absolute terms. If cause A is astronomically better than cause B in absolute terms, but cause B is still 50% as good as cause A in relative terms, then it makes sense for me to take a job in cause B if I can be at least twice as productive there.
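The decision rule here reduces to a single comparison: take the job in cause B when your productivity advantage there outweighs B's relative cost-effectiveness penalty. A minimal sketch, using the 50%/2x numbers from the paragraph above:

```python
# Break-even rule: work on cause B iff the extra productivity there,
# weighted by B's relative value, matches or beats working on cause A.
# Numbers are the illustrative ones from the text.

relative_value_B = 0.5    # cause B is 50% as cost-effective as cause A
productivity_ratio = 2.0  # I am twice as productive working on B

work_on_B = productivity_ratio * relative_value_B >= 1.0
print(work_on_B)  # prints True: exactly at the break-even point
```

Under an astronomical *relative* gap (say `relative_value_B = 1e-30`), no realistic productivity ratio could satisfy this inequality, which is what the argument turns on.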
I suspect that causes don't differ astronomically in cost-effectiveness. Therefore, people should pay attention to personal fit when choosing an altruistic career, and not just the importance of the cause.
I don't think any major EA or longtermist institution believes that expected impacts differ by factors like 10^30. There are too many spillovers for that: if doubling the world economy of $100 trillion/yr would even modestly shift x-risk or the fate of wild animals, then interventions that affect economic activity must have an expected absolute value of impact much greater than 10^-30 times that of the most impactful interventions in expectation.
The premises and conclusion don't seem to match here. A difference of 10^30x is crazy, but rejecting that doesn't mean you don't have huge practical differences in impact, like 100x or 1000x. Those would be plenty to come close to maxing out the possible effect of differences between causes (since if you're 1000x as good at rich-country homelessness relief as at preventing pandemics, then if nothing else your fame from rich-country poverty relief would be a powerful resource for helping other areas, e.g. through public endorsements of good anti-pandemic efforts).
The argument seems sort of like: "Some people say that if you go into a career like quant trading you'll make 10^30 dollars and can spend over a million dollars to help each animal with a nervous system. But actually you can't make that much money even as a quant trader, so people should pay attention to fit with different careers when trying to make money, since you can make more money in a field with half the compensation per unit of productivity if you are twice as productive there." The range of realistic large differences in compensation between fields (e.g. fast-food cashier vs. quant trading) is missing from the discussion.
You define astronomical differences at the start as "not just thousands of times more," but the range up to thousands of times more is where all the action is.