
AI Commission Warns of Escalatory Dangers


March 2021
By Michael T. Klare

For the past two years, the National Security Commission on Artificial Intelligence (NSCAI), established by Congress, has been laboring to develop strategies for the rapid integration of artificial intelligence (AI) into U.S. military operations.

Inside the National Security Agency (NSA) and U.S. Cyber Command Integrated Cyber Center and Joint Operations Center. (Photo credit: NSA)

On Mar. 1, the commission is poised to deliver its final report to Congress and the White House. From the very start, this effort has been deemed an essential drive to ensure U.S. leadership in what is viewed as a competitive struggle with potential adversaries, presumably China and Russia, to weaponize advances in AI.

According to its charter, embedded in the National Defense Authorization Act of 2019, the NSCAI was enjoined to consider the “means and methods for the United States to maintain a technological advantage in artificial intelligence, machine learning, and other associated technologies related to national security and defense.”

To allow the public a final opportunity to weigh in on its findings, the NSCAI released a draft of its final report at the beginning of January and discussed it at a virtual plenary meeting Jan. 25. Three main themes emerge from the draft report and the public comments of the commissioners: (1) AI constitutes a “breakthrough” technology that will transform all aspects of human endeavor, including warfare; (2) the United States risks losing out to China and Russia in the competitive struggle to harness AI for military purposes, putting the nation’s security at risk; and (3) as a consequence, the federal government must play a far more assertive role in mobilizing the nation’s scientific and technical talent to accelerate the utilization of AI by the military.

The report exudes a distinct Cold War character in the degree to which it portrays AI as the determining factor in the outcome of future conflicts. Whereas nuclear-armed ballistic missiles were the central arena of competition in the U.S.-Soviet Cold War era, the NSCAI warns that a potential adversary—in this case, China—could overtake the United States in mastering the application of AI for military purposes.

“In the future, warfare will pit algorithm against algorithm,” the report states. “The sources of battlefield advantage will shift from traditional factors like force size and levels of armaments, to factors like superior data collection and assimilation, connectivity, computing power, algorithms, and system security.”

Although the United States enjoys some advantages in this new mode of warfare, the report argues, it risks losing out to China over the long run. “China is already an AI peer, and it is more technically advanced in some applications,” it asserts. “Within the next decade, China could surpass the United States as the world’s AI superpower.”

To prevent this from happening, the NSCAI report argues that the United States must accelerate its efforts to exploit advances in computer science for military purposes. Because most of the nation’s computing expertise is concentrated in academia and the private sector, much of the report is devoted to proposals for harnessing that talent. But the report also addresses several issues of deep concern to the arms control community, notably autonomous weapons and nuclear escalation.

Claiming that autonomy will play a critical role in future military operations and that Russia and China, unlike the United States, cannot be relied on to follow ethical standards in the use of autonomous weapon systems on the battlefield, the commission rules out U.S. adherence to any binding international prohibition on the deployment of such systems.

In contrast to those in the human rights and arms control community who warn that fully autonomous weapons cannot be trusted to comply with the laws of war and international humanitarian law, the final report affirms that “properly designed and tested AI-enabled and autonomous weapon systems have been and can continue to be used in ways which are consistent” with international humanitarian law. A treaty banning the use of such systems, the NSCAI report contends, would deny the United States and its allies the benefit of employing such systems in future conflicts while having zero impact on its adversaries, as “commitments from states such as Russia or China likely would be empty ones.”

But in one area—the use of AI in battlefield decision-making—the report does express concern about the implications of its rapid weaponization.

“While the [c]ommission believes that properly designed, tested, and utilized AI-enabled and autonomous weapon systems will bring substantial military and even humanitarian benefit,” it states, “the unchecked global use of such systems potentially risks unintended conflict escalation and crisis instability.”

This section of the report echoes statements to the commission made by a group of arms control experts in an informal dialogue with commission representatives organized by the Arms Control Association on Nov. 24, 2020.

On that occasion, the arms control experts argued that excessive reliance on AI-enabled command-and-control systems in the heat of battle could result in precipitous escalatory actions, possibly leading to the early and unintended use of nuclear weapons. That danger, they noted, would be compounded if commanders on both sides relied on such systems for combat decision-making and the resulting velocity of battle exceeded the human capacity to comprehend the action and avert bad outcomes.

To prevent this from happening, the arms control experts insisted on the importance of retaining human control over all decisions involving nuclear weapons and called for the insertion of automated “tripwires” in advanced command-and-control systems to disallow escalatory moves without human approval.

Recognizing that these escalatory dangers are just as likely to arise from the automation of Chinese and Russian command-and-control systems, the arms control experts proposed that the United States and Russia address these risks in their future strategic security dialogues and that similar talks be initiated with China.

All of these recommendations were incorporated into the commission’s final report in one form or another.