If you want to do impactful work, you should just do it immediately rather than “earning career capital first”.
Or maybe you must always prioritise career capital before attempting direct work?
The problem is that the evidence for either position is underdetermined. As EA has matured from a community of smart people with little experience into a community of smart people with lots of experience, what I've increasingly seen those with experience offer is anecdotes, personal stories, or post-hoc rationalisations of their own paths.
This is usually offered in good faith and with a kind heart, and it is welcome as personal anecdote. But it does not meet the epistemic standards EA claims to value. We should be much more willing to say “we don’t know” instead of confidently prescribing life strategies on the basis of vibes and selective examples.
The cost of making things up where the evidence is absent is not neutral: it actively distracts attention from what matters. When we overconfidently promote one path as obviously correct, we risk steering people away from low-hanging, path-of-least-resistance options that fit their actual circumstances and are not predictably worse than the prescribed path.
Someone might already have an unusually good opportunity to build relevant career capital cheaply, or conversely an unusually good opportunity to do high-leverage direct work now. Bad advice does not just fail to help; it can actively block progress.
More concretely, a smart person with a clear direct contribution opportunity may second-guess themselves and delay unnecessarily because they were told that “earning career capital first” is the serious EA thing to do. Equally, someone with an easy and obvious way to gain career capital may be encouraged to jump prematurely into direct work they are not yet well positioned for. In both cases, the harm comes from treating anecdote as evidence and culture as proof. If we care about impact, we should stop presenting confident prescriptions where we do not have rigorous backing, and instead help people reason clearly about their own constraints, opportunities, and comparative advantage.

I think this point is potentially significant, but the post is clearly LLM-generated, and so most of the paragraphs don't add much beyond the initial point of "there's no Script of Truth and it depends on the person's context". In practice, I have no clear examples of people making wrong choices based on overconfident EA advice. In fact, my experience has been the opposite: people don't want to give high-level advice because they think it depends too much on the options available to me, and they couldn't choose between those on my behalf. Sure, counterexamples could exist, but this post hasn't convinced me of this.
I'd have found the post much more valuable if it had included a few anonymized examples rather than LLM-generated text padding out the main point.