Pentagon Seeks to Facilitate Autonomous Weapons Deployment
March 2023
By Michael Klare
The U.S. Defense Department released an updated version of its directive on developing and fielding autonomous weapons systems that seems designed to facilitate the integration of such devices into the military arsenal.
The original version of Directive 3000.09, “Autonomy in Weapon Systems,” was published in 2012. Since then, the Pentagon has made considerable progress in using artificial intelligence (AI) to endow unmanned combat platforms with the capacity to operate autonomously and now seems keen to accelerate their deployment.
The new version of the directive was released on Jan. 25 and appears intended to make it easier to advance such efforts by clarifying the review process that proposed autonomous weapons systems must undergo before winning approval for battlefield use.
“Given the dramatic advances in technology happening all around us, the update to our autonomy in weapon systems directive will help ensure we remain the global leader of not only developing and deploying new systems, but also safety,” said Deputy Secretary of Defense Kathleen Hicks in announcing the new version.
When the original version was released 10 years ago, the development of autonomous weapons was just getting underway, and few domestic or international rules governed their use. Accordingly, that version broke new ground simply by establishing policies for the testing, assessment, and employment of autonomous weapons systems.
Chief among these instructions was the mandate that proposed autonomous weapons “shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” In consonance with this edict, the directive decreed that any proposed system be subjected to a rigorous review process intended to test its compliance with that overarching principle and to ensure that the system’s software was free of any glitches that might hamper its performance or cause it to act in an improper manner.
The meaning of “appropriate levels of human judgment” was not defined in the 2012 version, but the mandate has allowed senior U.S. officials to insist over the years that the United States is not building self-governing lethal devices, or “killer robots,” as they are termed by opponents.
In 2012, those requirements seemed a reasonable basis for regulating the development of proposed autonomous weapons systems. But much has occurred since then, including a revolt by Google workers against the company’s involvement in military-related AI research. (See ACT, July/August 2018.) In addition, there have been efforts by some states-parties to the Convention on Certain Conventional Weapons to impose an international ban on lethal autonomous weapons systems. (See ACT, January/February 2022.)
Such developments have fueled concerns within academia, industry, and the military about the ethical implications of weaponizing AI. Questions have also arisen about the reliability of weapons systems using AI, especially given the propensity of many AI-empowered devices to exhibit racial and gender biases in their operation or to behave in unpredictable, unexplainable, and sometimes perilous ways.
To address these concerns, the Defense Department in February 2020 adopted a set of ethical principles governing AI use, including one requirement that the department take “deliberate steps to minimize unintended bias in AI capabilities” and another mandating that AI-empowered systems possess “the ability to detect and avoid unintended consequences.” (See ACT, May 2020.) With these principles in place, the Pentagon then undertook to revise the directive.
At first reading, the new version appears remarkably similar to the original. The overarching policy remains the same: proposed autonomous weapons systems must allow their operators “to exercise appropriate levels of human judgment over the use of force,” although the term “appropriate levels of human judgment” again goes undefined. As with the original directive, the new text mandates a high-level review of proposed weapons systems and specifies the criteria for surviving that review.
But on closer reading, significant differences emerge. The new version incorporates the ethical principles adopted by the Defense Department in 2020 and decrees that the use of AI capabilities in autonomous weapons systems “will be consistent with” those principles. It also establishes a working group to oversee the review process and ensure that proposed systems comply with the directive’s requirements.
The new text might lead to the conclusion that the Pentagon has stiffened the requirements for deploying autonomous weapons systems, which in some sense is true, given the inclusion of the ethical principles. But another conclusion is equally valid: by clarifying the requirements for receiving high-level approval and better organizing the bureaucratic machinery for such reviews, the directive lays out a road map for succeeding at this process and thus facilitates the development of autonomous weapons systems.
This interpretation is suggested by the statement that full compliance with the directive’s requirements will “provide sufficient confidence” that such devices will work as intended, an expression appearing six times in the new text and nowhere in the original. The message, it would seem, is that weapons designers can proceed with development of autonomous weapons systems and ensure their approval for deployment so long as they methodically check off the directive’s requirements, a process facilitated by a flow chart incorporated into the new version.