Confidence: Unlikely
Longtermists sometimes argue that some causes matter extraordinarily more than others—not just thousands of times more, but 10^30 or 10^40 times more. The reasoning goes: if civilization has astronomically large potential, then apparently small actions could have compounding flow-through effects, ultimately affecting massive numbers of people in the long-run future. And the best action might do far more expected good than the second-best.
I'm not convinced that causes differ astronomically in cost-effectiveness. But if they do, what does that imply about how altruists should choose their careers?
Suppose I believe cause A is the best, and it's astronomically better than any other cause. But I have some special skills that make me extremely well-suited to work on cause B. If I work directly on cause B, I can do as much good as a $100 million per year donation to the cause. Or instead, maybe I could get a minimum-wage job and donate $100 per year to cause A. If A is more than a million times better than B, then I should take the minimum-wage job, because the $100 I donate will do more good.
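The comparison above is just expected-value arithmetic. Here is a minimal sketch, using the hypothetical numbers from the example (the function name and values are illustrative, not from any real model):

```python
# Hypothetical comparison from the example above: direct work on cause B
# is worth $100M/year to B, versus a minimum-wage job donating $100/year
# to cause A, where A is `multiplier` times as cost-effective as B.
def better_option(direct_value_b, donation_to_a, multiplier):
    """Return the higher-impact option, measured in 'cause-B dollars'.

    `multiplier` is how many times more good a dollar does in cause A
    than in cause B. Ties go to direct work.
    """
    donation_value = donation_to_a * multiplier
    return "donate to A" if donation_value > direct_value_b else "work on B"

# At exactly a 10^6 multiplier the two options tie, so direct work wins;
# past a million-fold difference, the tiny donation does more good.
print(better_option(100_000_000, 100, 10**6))  # work on B
print(better_option(100_000_000, 100, 10**7))  # donate to A
```

This is why the conclusion flips only when the multiplier exceeds one million: $100 × 10^6 exactly matches the $100 million of direct value.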
This is an extreme example. Realistically, there are probably many career paths that can help the top cause. I expect I can find a job supporting cause A that fits my skill set. It might not be the best job, but it's probably not astronomically worse, either. If so, I can do much more good by working that job than by donating $100 per year.
But I might not be able to find an appropriate job in the top cause area. As a concrete example, suppose AI safety matters astronomically more than global priorities research. If I'm a top-tier moral philosopher, I could probably make a lot of progress on prioritization research. But I could have a bigger impact by earning to give and donating to AI safety. Even if the stereotypes are true and my philosophy degree doesn't let me get a well-paying job, I can still do more good by making a meager donation to AI alignment research than by working directly on a cause where my skills are relevant. Perhaps I can find a job supporting AI safety where I can use my expertise, but perhaps not.
(This is just an example. I don't think global priorities research is astronomically worse than AI safety.)
This argument requires that causes differ astronomically in relative cost-effectiveness, not just absolute. If cause A does astronomically more good than cause B in absolute terms, but cause B is still 50% as cost-effective in relative terms, then it makes sense for me to take a job in cause B if I can be at least twice as productive there.
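To make the relative-versus-absolute distinction concrete, here is a small sketch with made-up numbers (the 10^30 figure echoes the post's opening; the 3x productivity factor is purely illustrative):

```python
# Hypothetical: cause A does 1e30 units of good per unit of effort,
# and cause B is 50% as cost-effective in relative terms.
good_per_effort_A = 1e30
good_per_effort_B = 0.5 * good_per_effort_A

# The absolute gap between the causes is astronomical...
absolute_gap = good_per_effort_A - good_per_effort_B
print(absolute_gap)  # 5e+29

# ...but only the relative ratio matters for career choice. If my
# personal fit makes me 3x as productive working on cause B, then
# working on B does more total good despite the astronomical gap.
impact_in_A = 1.0 * good_per_effort_A
impact_in_B = 3.0 * good_per_effort_B
print(impact_in_B > impact_in_A)  # True
```

The decision depends only on the ratio (50%) times my productivity advantage (3x), so the astronomical absolute scale cancels out.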
I suspect that causes don't differ astronomically in cost-effectiveness. Therefore, people should pay attention to personal fit when choosing an altruistic career, and not just the importance of the cause.
In addition to the issues raised by other commenters, I would worry that someone trying to work on something they're a bad fit for could easily do harm.
That especially goes for things related to existential risk.
And beyond the obvious mechanisms, if most of the people in a field are ill-suited to what they're doing but persist for 'astronomical waste' reasons, most participants will struggle to make progress, get demoralized, and repel others from joining them.
My gut reaction was surprise that there are whole fields or causes in which some people not only aren't a good fit for the most important roles, but can't use their skill set constructively at all in a way that feels like a real contribution.
But on second thought, we are talking about extremely small fields with limited resources, so it would be financially difficult to support people whose skills don't match the field's top needs.
Then again, the field might grow, and people can upskill quite a bit if they are willing to wait a decade or two before working directly on their favorite x-risk.