[EDIT: I realize that this is not always true and am definitely interested in arguments/evidence for that too]
For context, I lead a university group and constantly find myself talking to members about why I don't think there's a real sacrifice to wellbeing in choosing to work on the most pressing problems [as opposed to the ones they gravitated toward when they were younger]. Any resources that address concerns about sacrificing happiness when using EA to inform career plans would be much appreciated!
Agree; initially doing skill- or network-building and moving into "EA-approved" direct work later in your career is also a good option for some. I'd actually say that if someone can achieve a lot in a conventional career, e.g., gaining some local prominence (either as a goal in itself or as preparation for moving into a more "directly EA" role), that's great. My thinking here was especially influenced by an article about the neoliberalism community.
(The urgency of some problems, most prominently AI risk, might indeed be a decisive factor under some worldviews held in the community. I guess most people should plan their careers in whatever way makes the most sense under their own worldviews, but I can imagine changing my mind here. I should acknowledge that I consider short timelines and existential risk concerns "psychoactive": people should be exposed to them carefully to avoid various failure modes.)