To add on to Abby's point, I think it's true of impactful career paths in general, not just AI safety, that people often (though not always) have to spend some time building career capital, without having much direct impact, before moving across. I think spending time as a software engineer or ML engineer before moving into safety will both improve your chances and give you a very solid plan B. That said, a lot of safety roles are hard to land, even with experience. As someone who hasn't coped very well with career rejection myself, I know that can be really tough.
I was rejected from career advising when I applied! So I'm definitely aware it can be costly. I won't name names, but I also know of other people who were rejected and have since gone on to have successful careers in the space. Sometimes this is because reviewing is hard, and we make mistakes. Sometimes it's because the thing the applicant most needs is to read more of 80k's broad advice before trying to tailor it specifically to themselves. We're trying to use our time as best we can and to support the people who would most benefit from our advice, so if we can cast a wider net and get more of those people to apply, we want to do that. But I hope we can minimize the costs anyone experiences. I know some people benefit just from thinking through the questions in the application, and we've updated the application to make it less work for people. And we really encourage people not to take it as a strong negative signal if they don't get an advising call. I'd appreciate any additional suggestions on how to convey this message!

> Is there a way that people can orient towards applying even though there is a high chance of rejection?

While it's easier said than done, I'd try to think of applying as being mostly upside: the application is a useful exercise for career planning in and of itself, and then, if we think it makes sense to have a call, you'll get some extra advice.
My guess is that in a lot of cases, the root cause of negative feelings here is something like perfectionism. I certainly felt disenchanted when I wasn't able to make as much progress on AI as I would have liked. But I also felt disenchanted when I wasn't able to make much progress on ethics, or on being more conscientious, or on being a better dancer. I think EA does some combination of attracting perfectionists and exacerbating their tendencies. My colleagues have put together some great material on this and other mental health issues.
That said, even if you have a healthy relationship with failure and rejection, feeling competent is really important for most people. If you're feeling burnt out, I'd encourage you to explore more and focus on building aptitudes. When I felt AI research wasn't for me, I explored research in other areas, community building, earning to give, and other paths. I also kept building fundamental skills like communication, analysis, and organisation. I didn't know where I would end up applying these skills, but I knew they'd be useful somewhere.