
Quoting from Rohin Shah's Alignment Newsletter (with light editing):

In the US, the National Security Commission on AI released their report to Congress. The full PDF is over 750 pages long, so I have not read it myself; instead I'm adding in some commentary from others. In their newsletter, CSET says that highlights include:

- A warning that the U.S. military could be at a competitive disadvantage within the next decade if it does not accelerate its AI adoption. The report recommends laying the foundation for widespread AI integration by 2025, comprising a DOD-wide digital ecosystem, a technically literate workforce, and more efficient business practices aided by AI.

- A recommendation that the White House establish a new “Technology Competitiveness Council,” led by the vice president, to develop a comprehensive technology strategy and oversee its implementation.

- A recommendation that the U.S. military explore using autonomous weapons systems, provided their use is authorized by human operators.

- A proposal to establish a new Digital Service Academy and a civilian National Reserve to cultivate domestic AI talent.

- A call to provide $35 billion in federal investment and incentives for domestic semiconductor manufacturing.

- A recommendation to double non-defense AI R&D funding annually until it reaches $32 billion per year, and to triple the number of National AI Research Institutes.

- A call for reformed export controls, coordinated with allies, on key technologies such as high-end semiconductor manufacturing equipment.

- A recommendation that Congress pass a second National Defense Education Act and reform the U.S. immigration system to attract and retain AI students and workers from abroad.

While none of the report’s recommendations are legally binding, it has reportedly been well-received by key members of both parties.

Matthew van der Merwe also summarizes the recommendations in Import AI; this has a lot of overlap with the CSET summary so I won't copy it here.

Jeff Ding adds in ChinAI #134:

"[I]f you make it past the bluster in the beginning — or take it for what it is: obligatory marketing to cater to a DC audience hooked on a narrow vision of national security — there’s some smart moderate policy ideas in the report (e.g. chapter 7 on establishing justified confidence in AI systems)."

In email correspondence, Jon Rodriguez adds some commentary on the safety implications:

"1. The report acknowledges the potential danger of AGI, and specifically calls for value alignment research to take place (pg. 36). To my knowledge, this is one of the first times a leading world government has called for value alignment.

2. The report makes a clear statement that the US prohibits AI from authorizing the launch of nuclear weapons (pg. 98).

3. The report calls for dialogues with China and Russia to ensure that military decisions made by military AI at "machine speed" [do] not lead to out-of-control conflict escalation which humans would not want (pg. 97)."

And here's a very boiled down version of the executive summary (with the phrasings of the recommendations being the exact phrasings the report uses as headings):

  • The NSCAI recommends the government take the following 7 actions to "Defend America in the AI Era":
    1. Defend against emerging AI-enabled threats to America’s free and open society
    2. Prepare for future warfare
    3. Manage risks associated with AI-enabled and autonomous weapons
    4. Transform national intelligence
    5. Scale up digital talent in government
    6. Establish justified confidence in AI systems
    7. Present a democratic model of AI use for national security
  • The NSCAI also recommends the government take the following 8 actions to "Win the Technology Competition":
    1. Organize with a White House–led strategy for technology competition
    2. Win the global talent competition
    3. Accelerate AI innovation at home
    4. Implement comprehensive intellectual property (IP) policies and regimes
    5. Build a resilient domestic base for designing and fabricating microelectronics
    6. Protect America’s technology advantages
    7. Build a favorable international technology order
    8. Win the associated technologies competitions

(Like Rohin, I've not read the full report - I've just read the executive summary, read all mentions of "nuclear", and listened to some episodes of the accompanying podcast series.)

Do "we" pay too little attention to things like this report?

Epistemic status: Just thinking aloud. As usual, these are my personal views only.

I found the executive summary of the report, the accompanying podcast episodes I listened to, and the commentary Rohin collected interesting regarding:

  1. Topics like AI development, AI risk, AI governance, and how this intersects with national security and international relations
  2. How the US government is thinking about and framing those things, how the government might react to or try to influence those things, etc.
    • Obviously the US government is massive and does not act as a single, coherent agent. But this report should still provide one interesting window into its thinking.

I've found what I've read more interesting for (2) than for (1), though I expect the full report would teach me a lot more about (1) as well. And I expect many other Forum readers would find the report - or at least its executive summary - useful for learning about those things as well. In particular, the report and accompanying podcast episodes seemed to have substantially different framings, focuses, and tones from what the longtermist/EA/rationalist communities talk about in relation to AI.

Yet there seems to be no previous link-post to this report on the EA Forum or LessWrong. (The section of the Alignment Newsletter linked to above serves a similar function, but doesn't provide a dedicated title and link just for this report.) Nor does there seem to be much discussion of the report on other posts on those sites. I also hadn't really considered looking at the report myself until last week, despite knowing it existed. 

I wonder if this is a sign that (parts of) the longtermist, EA, and rationalist communities are paying insufficient attention to the available info on how (a) other communities and (b) relevant powerful actors other than AI labs are thinking about these issues? (I think there are also other signs of this, and that similar criticisms/concerns have been raised before.)

Counterpoints: I've seen the report mentioned by some professional longtermist researchers (that was how it came to my attention). And the report is very long and (unsurprisingly) doesn't seem to be primarily focused on things like existential risk from AI. And it makes sense for people to specialise. So maybe it's unsurprising that things like this report would be discussed mostly by a handful of professional specialists, rather than in venues like the EA Forum or LessWrong.

Counter-counterpoint: The Forum and LessWrong aren't just like regular blogs for semi-laypeople; they're also where a nontrivial fraction of longtermist thinking occurs and is disseminated. 

Thus concludes this round of thinking aloud.

Comments

A lot of longtermists do pay attention to this sort of stuff, they just tend not to post on the EA Forum / LessWrong. I personally heard about the report from many different people after it was published, and also from a couple of people even before it was published (when there was a chance to provide input on it).

In general I expect that for any sufficiently large object-level thing, the discourse on the EA Forum will lag pretty far behind the discourse of people actively working on that thing (whether that discourse is public or not). I read the EA Forum because (1) I'm interested in EA and (2) I'd like to correct misconceptions about AI alignment in EA. I would not read it as a source of articles relevant to AI alignment (though every once in a while they do come up).

Yeah, that makes sense. 

It still seems to me like this is a sufficiently important and interesting report that it'd be better if there were a little more mention of it on the Forum, for the sake of "the general longtermist public", since (a) the Forum is arguably the main, central hub for EA discourse in general, and (b) there is a bunch of other AI governance type stuff here, so having that without things like this report could give a distorted picture.

But it also doesn't seem like a horrible or shocking error has been committed. And it does make sense that these things would be first, and mostly, discussed in more specialised sub-communities and venues.

I have gotten the general feeling that there is not nearly enough curiosity in this community about the ins and outs of politics, compared with stuff like the research and tech world. Reports just aren't very sexy. Specialization can be good, but there are topics that EAs engage with that are probably just as specialized (hyper-specific notions of how an AI might kill us?) that see much more engagement, and I don't think that is due to impact estimates.

I don't read much on AI safety so I could be way off, but it feels pretty important. The US government could snap its fingers and double the amount of funding going into AI safety. That seems very salient for predicting the impacts of EA AI safety orgs. Either way, this has made me more interested in reading through https://forum.effectivealtruism.org/tag/ai-governance.
