The invasion of Ukraine in February 2022 has resulted in hundreds of thousands of casualties and provided a sickening laboratory for the development of the technology of war. Since then, major advances have been made in unmanned drones and, more generally, lethal autonomous weapon systems (LAWS), defined by the ability to search for and engage targets without a human operator.

Although the conflict has not yet birthed the first queasy sight of a fully autonomous battlefield, according to analysts such as Kateryna Bondar of the Center for Strategic and International Studies, a conversion to fully autonomous forces is being actively pursued. "We strive for full autonomy," said Mykhailo Fedorov, the deputy prime minister of Ukraine, to the Guardian in a June article. Proof-of-concept demonstrations of basic fully autonomous capabilities have existed for at least several years.

Others have long called for regulations or bans on LAWS. "Human control over the use of force is essential," said United Nations Secretary-General António Guterres at a meeting in May. "We cannot delegate life-or-death decisions to machines." However, substantive, binding regulations have yet to be adopted by any of the nations that lead in the development of LAWS, as surveyed in a September 2025 book by Matthijs Maas.

Large-scale deployment of LAWS therefore looks increasingly likely, even though researchers like Maas caution against seeing autonomous warfare as inevitable. "The military AI landscape at present is at a crossroads," Maas wrote. Regulation remains a possibility after deployment, or in response to the stigmatization of the technology that deployment seems likely to provoke.

Nonetheless, the reality that AI is likely to go to war has driven researchers to expand from a "prevailing preoccupation" with how AI will be used—for example, in the form of LAWS—to whether this use will significantly alter geopolitical norms. This was the intriguing argument made by scholars Toni Erskine and Steven Miller in a January article, as well as in articles in an accompanying issue of the Cambridge Forum on AI: Law and Governance.

Among scholars, this shift from seeing LAWS as tools to seeing them as strategic influences has been neither uniform nor entirely new. Research on military AI and LAWS is spread across many sectors of academic study. Nonetheless, it is possible to sketch how and why such a shift has happened, and to explain some of the findings of the new research.

Surprisingly, some scholars have come to somewhat comforting conclusions. For example, in a July 2025 study from the RAND Corporation, the authors assessed that AI is unlikely to lead to major new wars. "AI's net effect may tend toward strengthening rather than eroding international stability," the authors wrote.

These recent studies rely on arguments that deserve further interrogation. First, however, it is worth lingering on the broader dimensions of the transition underway in research on AI and LAWS.

Continue reading at foommagazine.org ...
