The Biden White House has released a memorandum on "Advancing the United States' Leadership in Artificial Intelligence," which includes, among other things, a directive for the national security apparatus to become a world leader in the use of AI. Under direction from the White House, the national security state is expected to take up this leadership position by poaching great minds from academia and the private sector and, most disturbingly, by leveraging already-functioning private AI models for national security objectives.
 

Private AI systems, like those operated by tech companies, are incredibly opaque. People are uncomfortable, and rightly so, with companies that use AI to decide all sorts of things about their lives: how likely they are to commit a crime, their eligibility for a job, and issues involving immigration, insurance, and housing. Right now, as you read this, for-profit companies are leasing their automated decision-making services to all manner of businesses and employers, and most of the people affected will never know that a computer made a choice about them, will never be able to appeal that decision, and will never understand how it was made.
 

But it can get worse: combining private AI with national security secrecy threatens to make an already secretive system even more unaccountable and opaque. The constellation of organizations and agencies that make up the national security apparatus is notoriously secretive. EFF has had to fight in court a number of times just to make public even the most basic frameworks of global dragnet surveillance and the rules that govern it. Combining the two will create a Frankenstein's monster of secrecy, unaccountability, and decision-making power.
 

While the Executive Branch pushes agencies to leverage private AI expertise, our concern is that more and more information about how those AI models work will be cloaked in the nigh-impenetrable veil of government secrecy. Because AI operates by collecting and processing tremendous amounts of data, understanding what information it retains and how it arrives at conclusions will become central to how the national security state thinks about these issues. That means the state will likely argue not only that an AI's training data may need to be classified, but also that companies must, under penalty of law, keep the governing algorithms secret as well.
 

As the memo says, "AI has emerged as an era-defining technology and has demonstrated significant and growing relevance to national security. The United States must lead the world in the responsible application of AI to appropriate national security functions." As the U.S. national security state attempts to leverage powerful commercial AI to give itself an edge, a number of questions remain unanswered about how much that ever-tightening relationship will impact much-needed transparency and accountability for private AI and for-profit automated decision-making systems.
