I (Phib) thought Zvi's treatment of the AI policy playbook from Nick Whitaker and Leopold Aschenbrenner, from his recent "AI #75: Math is Easier", was worth sharing in particular.

I think "national security" as an influence with regards to the future of AI will only become more relevant, if not dominant, in the near future, so it's interesting to read what people (like Leopold, I believe) who are primarily focused on the national security perspective are suggesting–and also to read what proposals Zvi  agrees with (most of them) and the underlying framing he disagrees with. 

For further discussion of national security framing and why it might be suboptimal for the future of AI, I suggest this post, "Against Aschenbrenner: How 'Situational Awareness' constructs a narrative that undermines safety and threatens humanity".

Humanity macrosecuritization suggests the object of security is to defend all of humanity, not just the nation, and often invokes logics of collaboration, mutual restraint and constraints on sovereignty. Sears uses a number of examples to show that when issues are constructed as issues of national security, macrosecuritization failure tends to occur, and the actions taken often worsen, rather than help, the issue.

Below is the excerpt from Zvi:

Nick Whitaker offers A Playbook for AI Policy at the Manhattan Institute, which was written in consultation with Leopold Aschenbrenner.

Its core principles, consistent with Leopold’s perspective, emphasize things differently than I would have, and present them differently, but are remarkably good:

  1. The U.S. must retain, and further invest in, its strategic lead in AI development.
    1. Defend Top AI Labs from Hacking and Espionage.
    2. Dominate the market for top AI talent (via changes in immigration policy).
    3. Deregulate energy production and data center construction.
    4. Restrict flow of advanced AI technology and models to adversaries.
  2. The U.S. must protect against AI-powered threats from state and non-state actors.
    1. Pay special attention to ‘weapons applications.’
    2. Oversight of AI training of strongest models (but only the strongest models).
    3. Defend high-risk supply chains.
    4. Mandatory incident reporting for AI failures, even when not that dangerous.
  3. The U.S. must build state capacity for AI.
    1. Investments in various federal departments.
    2. Recruit AI talent into government, including by increasing pay scales.
    3. Increase investment in neglected domains, which looks a lot like AI safety: Scalable oversight, interpretability research, model evaluation, cybersecurity.
    4. Standardize policies for leading AI labs, their research, and the resulting frontier model issues, and apply them to all labs at the frontier.
    5. Encourage use of AI throughout government, such as in education, border security, back-office functions (oh yes) and visibility and monitoring.
  4. The U.S. must protect human integrity and dignity in the age of AI.
    1. Monitor impact on job markets.
    2. Ban nonconsensual deepfake pornography.
    3. Mandate disclosure of AI use in political advertising.
    4. Prevent malicious psychological or reputational damage to AI model subjects.

It is remarkable how much framing and justifications change perception, even when the underlying proposals are similar.

Tyler Cowen linked to this report, despite it calling for government oversight of the training of top frontier models, and other policies he otherwise strongly opposes.

Whitaker calls for a variety of actions to invest in America’s success, and to guard that success against expropriation by our enemies. I mostly agree.

There are common sense suggestions throughout, like requiring DNA synthesis companies to do KYC. I agree, although I would also suggest other protocols there.

Whitaker calls for narrow AI systems to remain largely unregulated. I agree.

Whitaker calls for retaining the 10^26 FLOP threshold in the executive order (and in the proposed SB 1047 I would add) for which models should be evaluated by the US AISI. If the tests find sufficiently dangerous capabilities, export (and by implication the release of the weights, see below) should be restricted, the same as other similar military technologies. Sounds reasonable to me.

Note that this proposal implies some amount of prior restraint, before making a deployment that could not be undone. Contrast SB 1047, a remarkably unrestrictive proposal requiring only internal testing, with no prior restraint.

He even says this, about open weights and compute in the context of export controls.

These regulations have successfully prevented advanced AI chips from being exported to China, but BIS powers do not extend to key dimensions of the AI supply chain. In particular, whether BIS has power over the free distribution of models via open source and the use of cloud computing to train models is not currently clear.

Because the export of computing power via the cloud is not controlled by BIS, foreign companies are able to train models on U.S. servers. For example, the Chinese company iFlytek has trained models on chips owned by third parties in the United States. Advanced models developed in the U.S. could also be sold (or given away, via open source) to foreign companies and governments.

To fulfill its mission of advancing U.S. national security through export controls, BIS must have power over these exports. That is not to say that BIS should immediately exercise these powers—it may be easier to monitor foreign AI progress if models are trained on U.S. cloud-computing providers, for example—but the powers are nonetheless essential.

When and how these new powers are exercised should depend on trends in AI development. In the short term, dependency on U.S. computing infrastructure is an advantage. It suggests that other countries do not have the advanced chips and cloud infrastructure necessary to enable advanced AI research. If near-term models are not considered dangerous, foreign companies should be allowed to train models on U.S. servers.

However, the situation will change if models are evaluated to have, or could be easily modified to have, powerful weapons capabilities. In that case, BIS should ban agents from countries of concern from training such AIs on U.S. servers and prohibit their export.

I strongly agree.

If we allow countries subject to export controls to rent our chips, that is effectively evading the export restrictions.

If a model is released with open weights, you are effectively exporting and giving away the model, for free, to foreign corporations and governments. What rules you claim to be imposing to prevent this do not matter, any more than your safety protocols will survive a bit of fine tuning. China’s government and corporations will doubtless ignore any terms of service you claim to be imposing.

Thus, if and when the time comes that we need to restrict exports of sufficiently advanced models, if you can’t fully export them then you also can’t open their weights.

We need to be talking price. When would such restrictions need to happen, under what circumstances? Zuckerberg’s answer was very clear, it is the same as Andreessen’s, and it is never, come and take it, uber alles, somebody stop me.

My concern is that this report, although not to the extreme extent of Sam Altman’s editorial that I discuss later, frames the issue of AI policy entirely in nationalistic terms. America must ‘maintain its lead’ in AI and protect against its human adversaries. That is the key thing.

The report instead calls for scrutiny of broadly capable AIs, especially those with military and military-adjacent applications. The emphasis on potential military applications reveals the threat model, which is entirely other humans, the bad guy with the wrong AI, using it conventionally to try and defeat the good guy with the AI, so the good AI needs to be better sooner. The report extends this to humans seeking to get their hands on CBRN threats or to do cybercrime.

Which is all certainly an important potential threat vector. But I do not think they are ultimately the most important ones, except insofar as such fears drive capabilities and thus the other threat vectors forward, including via jingoistic reactions.

Worrying about weapons capabilities, rather than (among other things) about the ability to accelerate further AI research and scientific progress that leads into potential forms of recursive self-improvement, or competitive pressures to hand over effective control, is failing to ask the most important questions.

Part 1 discusses the possibility of ‘high level machine intelligence’ (HLMI) or AGI arriving soon. And Leopold of course predicts its arrival quite soon. Yet this policy framework is framed and detailed for a non-AGI, non-HLMI world, where AI is strategically vital but remains a ‘mere tool,’ a typical technology, and where existential threats or loss of control are not concerns.

I appreciated the careful presentation of the AI landscape.

For example, he notes that RLHF is expected to fail as capabilities improve, and presents ‘scalable oversight’ and constitutional AI as ‘potential solutions’ but is clear that we do not have the answers. His statements about interpretability are similarly cautious and precise. His statements on potential future AI agents are strong as well.

What is missing is a clear statement of what could go wrong, if things did go wrong. In the section ‘Beyond Human Intelligence’ he says superhuman AIs would pose ‘qualitatively new national security risks.’ And that there are ‘novel challenges for controlling superhuman AI systems.’ True enough.

But reading this, would someone who was not doing their own thinking about the implications understand that the permanent disempowerment of humanity, or outright existential or extinction risks from AI, were on the table here? Would they understand the stakes, or that the threat might not come from malicious use? That this might be about something bigger than simply ‘national security’ that must also be considered?

Would they form a model of AI that would then make future decisions that took those considerations into account the way they need to be taken into account, even if they are far more tractable issues than I expect?

No. The implication is there for those with eyes to see it. But the report dare not speak its name.

The ‘good news’ is that the proposed interventions here, versus the interventions I would suggest, are for now highly convergent.

For a central example: Does it matter if you restrict chip and data and model exports in the name of ‘national security’ instead of existential risk? Is it not the same policy?

If we invest in ‘neglected research areas’ and that means the AI safety research, and the same amount gets invested, is the work not the same? Do we need to name the control or alignment problem in order to get it solved?

In these examples, these could well be effectively the same policies. At least for now. But if we are going to get through this, we must also navigate other situations, where differences will be crucial.

The biggest danger is that if you sell National Security types on a framework like this, or follow rhetoric like that now used by Sam Altman, then it is very easy for them to collapse into their default mode of jingoism, and to treat the safety and power of AI the way they treated the safety and power of nuclear weapons (see The Doomsday Machine).

It also seems very easy for such a proposal to get adopted without the National Security types who implement it understanding why the precautions are there. And then a plausible thing that happens is that they strip away or cripple (or simply execute poorly) the parts that are necessary to keep us safe from any threat other than a rival having the strong AI first, while throwing the accelerationist parts into overdrive.

These problems are devilishly hard and complicated. If you don’t have good epistemics and work to understand the whole picture, you’ll get it wrong.

For the moment, it is clear that in Washington there has been a successful campaign by certain people to create in many places allergic reactions to anyone even mentioning the actual most important problems we face. For now, it turns out the right moves are sufficiently overdetermined that you can make an overwhelming case for the right moves anyway.

But that is not a long term solution. And I worry that abiding by such restrictions is playing into the hands of those who are working hard to reliably get us all killed.
