The EA mindset of focusing on doing the most good can exacerbate mental health issues (particularly around self-esteem, guilt, and insecurity). This is occasionally addressed in blog posts (such as those by Julia Wise at Giving Gladly and Nate Soares' Replacing Guilt series) or podcasts (like Howie's 80k episode). But is anyone working full-time on improving the mental health of EAs at scale, e.g. by producing more material like this? And if not, what would the best interventions be for someone in that position to try? I'm thinking particularly of a hypothetical person funded by grants to do so, since this doesn't seem like work that relies heavily on being at an existing org.
I recently completed an internship at Nonlinear. As part of that internship, I interviewed a few people to learn about their experiences with mental health. I could write up a summary of the results if anyone is interested (although writing up the results of my interviews on the AI safety pipeline is higher on my list).
When I left, Nonlinear was considering investing in Multipliers for Existing Talent in AI (and possibly other existential risks too). The idea was to identify high-impact people and offer them funding for items or services that might improve their productivity. Therapy was one such service that seemed promising.