(Writing out this post has helped me answer the question for myself, but I still want to post it to hear other people's thoughts.)
I have been hesitant to get into AI safety research. Then I watched Robert Miles's video walking through Stuart Russell's list of 10 reasons people give for not working on AI safety. The point of the video (and of Stuart Russell's list) was to argue against all 10 reasons, and those counterarguments convinced me to pay more attention and potentially pursue AI safety as a career path.
However, there is one "reason" that I don't think was covered. My concern is that "working on AI safety" might in practice just mean "working on AI." The process of pursuing safe AI might advance the field to the point that it creates dangerous AI: safety researchers might discover something that gets twisted by bad-faith actors, researchers, or engineers.
My question is: are there any papers/videos/blogs that discuss this concern in more detail?
(My best counterargument to my own concern is an unrefined analogy that I could probably improve: ignoring AI safety because "doing AI safety work might lead to something bad" is a bit like refusing to go to the hospital because something worse could technically happen to you on the drive there.)