Two Strange Things About AI Safety Policy

by Jay_Shooster, 28th Sep 2016



Views expressed here are solely my own, and not those of my employer. (I’ve always wanted to say that.)



Not to brag (but probably to brag a little bit), I work at a pretty awesome place for those who want to be involved in national security policy. Just Security (JS), where I work as an editor, is one of the leading publications on national security law, policy, and strategy. Our readership and editorial staff consist of the top lawyers, journalists, law professors, and government officials working in this space.


Naturally, as a self-described “hardcore EA,” my first thought when I arrived at JS was “how can I weave AI safety into our programming?” I quickly got permission from my bosses to start exploring some kind of extensive project on AI safety: a high-profile panel event, an in-depth blog post, or a special feature for our website (a video, podcast, or data visualization).


Surely, the AI safety-focused EAs were going to be thrilled when they heard that I could leverage JS’s platform/connections to promote the best thinking in AI safety, right? Wrong. Not even close.


When I reached out to two leading AI safety-focused EAs about this opportunity, I was sorely disappointed by their lack of enthusiasm. To be clear, they were very kind, but they didn’t think there was much I could really do to help. Both AI experts discouraged me from doing an event on anything touching on the catastrophic risks posed by AI and recommended I try something on biosecurity or autonomous weapons instead, even though they both agreed that the catastrophic risk from AI is a much more important issue in expectation. That’s strange.


I have little doubt that if I reached out to two random poverty- or animal-focused EAs with the pitch “I can get a bunch of respected journalists, academics, and policymakers to hear the exact perspective you want me to share with them on our trusted/prestigious platform,” they would be pretty psyched about that (as I think they should be). So what’s so different about AI safety?


A few things, perhaps. One thing that came up in both of my conversations was the sentiment that the existential risks from AI are too far off, and thus too nebulous, to say anything concrete and interesting about them. A related concern was that basically anyone with sound views on this subject (except for Stuart Russell) is bound to be perceived as a crazy person for talking about it when the tech is still so far off; by raising the concern too early, we risk branding ourselves as fearmongers and making it harder to be taken seriously down the line, when the threat is more clearly materializing (or at least once the field of safety research has become more institutionalized and prestigious).


Another thing they suggested is that, in many ways, the field is already quite crowded. They offered the idea of doing an event on autonomous weapons or surveillance as a way of building credentials/capital in this general area. But both noted that this is already a very sexy field and that fancy/smart safety-oriented folks are already thinking about it. In the few short weeks I’ve been working at JS, I’m already seeing more and more discussion of autonomous weapons systems popping up.


They mentioned that there are already influential people in the US government who are completely aware of the issues around AI safety. I think the suggestion was that there is little I could do to advance the policy conversation in a strategically sound way that they wouldn’t already be on top of.


All of this is to say: maybe AI safety is not as neglected (or tractable) as I thought. I’m interested in hearing people’s thoughts about this. Perhaps it’s just that AI research is basically the only way to be helpful in this space right now, and maybe there’s an extreme lack of talent in this space that we need to develop. And maybe the best way to sustainably grow the field is by doing it kind of under the radar. I can accept that. But it’s certainly...different.

Another related strange thing is that nobody is trying to slow down the development of AGI, even though many EAs have decided that an unfriendly/misaligned AGI would be basically the worst thing in the world, by many orders of magnitude.


I spoke to one EA who made an argument against slowing down AGI development that I think is basically indefensible: that doing so would slow the development of machine learning-based technology that is likely to lead to massive benefits in the short/medium term. But by the AI-focused EAs’ own arguments, the far-future effects of AGI dominate all other considerations by orders of magnitude. If that’s the case, then getting it right should be the absolute top priority, and virtually everyone agrees (I think) that the sooner AGI is developed, the higher the likelihood that we will be ill-prepared and that something will go horribly wrong. So it seems clear that if we can take steps to effectively slow down AGI development, we should.


Of course, EAs have made good arguments for not trying to slow down AI progress. The big ones that came up in my conversations with the experts I consulted were (1) that it’s intractable, because the forces of industry are stacked against us, and (2) that amplifying fear of AI might exacerbate the arms race (a common concern that applies to all technological developments).


The first point, I think, is reasonable to a degree. If (as is likely the case, for now) marginal resources are better spent on safety research than on slowing down development, then we should hold off on the latter. But it seems likely that we might reach a point where the research field becomes so saturated that this changes. And, even more importantly, there are already lots of people like myself who have advocacy skills that would be very applicable to changing institutional policies on AI, but who lack the technical skills useful for AI research. Even if we think that slowing down development would be very intractable, the scale of the problem is so great that it seems like a plausible contender for the most impactful thing lots of folks could work on (especially for those who believe anything not AI-related is just a rounding error in the utilitarian calculus).


The second point (that the very act of trying to slow down development could exacerbate the problem) is one that I take seriously. But it’s certainly far from obvious that attempts to slow down development would have negative consequences in expectation. And I’d definitely like to see more discussion of this crucially important question.


Endnotes:


I’m nowhere near an expert in this area, and I’m doing my best to reflect the opinions of the experts I consulted, but I’m surely going to misrepresent their views at least slightly. Sorry!


Honest question: can we invest in making more Stuart Russells (i.e., safety-oriented authority figures in AI)? Can we use our connections in academia to give promising EAs big prestige-building opportunities (conference invites, publication opportunities, scholarships, research and teaching positions, co-authorships)? (Also, can we do this more in general?)