After reading the comments on this EA Reddit post about the recent 80,000 Hours newsletter, and similar stories on the EA Forum about how difficult it is to secure a job in AI safety even with relevant credentials and experience, I was reminded of an AMA with Peter Singer. When asked what he would do today if he were in his twenties and wanted to significantly help the world, Singer responded: “I'm not sure that I'd be a philosopher today. When I was in my twenties, practical ethics was virtually a new field, and there was a lot to be done. […] Now there are many very good people working in practical ethics, and it is harder to have an impact. Perhaps I would become a full-time campaigner, either for effective altruism in general, or more specifically, against factory farming.”
This got me thinking: isn't AI safety facing a similar situation? There are already many skilled and highly capable people working directly in AI safety research and policy, making it increasingly difficult for newcomers to have a significant impact. Hundreds of books and thousands of papers have already been written on the topic, and having done a fair amount of reading on autonomous weapons myself, I can tell you (if you don't already know) that much of it rehashes existing material, with occasional novel ideas here and there.
If you've spent months or even years unsuccessfully trying to land an AI safety role, consider for a moment that you're essentially competing with hundreds of other skilled AI researchers to contribute to papers or reports that might, at best, result in minor amendments to policies that are largely drafted but not yet implemented. In many ways, the bulk of the urgent research has probably already been done; without implementation, it remains worthless.
AI policy research will likely accelerate over the next few years, not only because highly skilled and motivated people are rushing in, but also because AI itself will increasingly assist with policymaking. On the other hand, AI won't take to the streets with banners, chanting and demanding its own regulation.
For all the AI safety laypeople: wouldn't it make more sense to focus on activism, which is currently almost nonexistent, and begin protesting Jody Williams style, the same way she and the International Campaign to Ban Landmines successfully campaigned in the 1990s, leading to the 1997 Ottawa Treaty banning anti-personnel mines and ultimately earning the Nobel Peace Prize?
[Image courtesy of Linda Panetta, Optical Realities Photography]
For all the AI safety researchers: why not take to the streets as well? Knowledgeable voices are urgently needed beyond academia, think tanks, and AI labs.
I'm not sure it's true that the policies have mostly been worked out but simply not implemented. Figuring out technical AI governance solutions seems like a big part of what is needed.
I think there's an intersection between the PauseAI kind of stuff, and a great-powers reconciliation movement.
Most of my scenario-forecast likelihood-mass for scenarios featuring near-term mass-death situations sits in this intersection between great-power cold wars, proxy wars in the global south, AI brinkmanship, and asymmetrical biowarfare.
Maybe combining PauseAI with a 🇺🇸/🇨🇳 reconciliation and collaboration movement would be a more credible orientation.
I mostly disagree with this conclusion. Protests are only rarely effective, and in this case you don't have charismatic victims (like in the landmine example you gave) or warning shots to point to. I suspect you won't summon the popular attention needed until such a moment arrives. Meanwhile, AI safety work is something where we can at least try to make real progress now.
I think even if there's only mild support until warning shots occur, having an organization and infrastructure ready to ramp up the moment a warning shot hits could be critical, rather than scrambling to organize when it does occur.