(Writing out this post has helped me answer the question for myself, but I still want to post it to hear others' thoughts.)
I have been hesitant to get into AI safety research. Then I watched Robert Miles's video walking through Stuart Russell's list of 10 reasons for not working on AI safety. The point of the video (and of Stuart Russell making the list) was to argue against all 10 reasons, and those counterarguments convinced me to pay more attention and to seriously consider AI safety as a career path.
However, there is one "reason" that I don't think was covered. My concern is that "working on AI safety" might just mean "working on AI": the process of pursuing safe AI might advance the field to the point that it produces dangerous AI. Safety researchers might discover something that gets twisted by bad-faith actors, researchers, or engineers.
My question is: are there any papers/videos/blogs that discuss this concern in more detail?
(My best counterargument to my own concern is an unrefined analogy that I could probably improve: avoiding AI safety work because "doing AI safety work might lead to something bad" is a bit like refusing to go to the hospital because something worse could technically happen to you on the car ride there.)
I'd like to add that I think some safety work gets done without anyone explicitly working on 'AI safety'. This isn't in conflict with what you said, but it does mean that even if people who want to work on safety never go into it directly, there would still be people doing the jobs of AI safety researchers.
It seems plausible to me that a person could end up working on AI and have economic incentives push them toward a topic related to safety (e.g. Google wants to build TAI -> they want to better understand what is going on in their deep neural nets -> ...).