ACA Submission on Autonomous Weapons Systems to the UN Secretary-General

The Arms Control Association (ACA) welcomes this opportunity to respond to a letter dated February 7, 2024, from the UN Office for Disarmament Affairs inviting civil society organizations to submit their views on autonomous weapons systems (AWS) and ways to address the challenges and concerns they raise from humanitarian, legal, security, technological, and ethical perspectives.

These submissions, the letter indicated, would contribute to the Secretary-General’s forthcoming report to the General Assembly, pursuant to General Assembly Resolution 78/241 of December 22, 2023, calling on the Secretary-General to seek the views of civil society organizations on these matters.

In Resolution 78/241, the General Assembly expressed its concern, inter alia, about the “impact of autonomous weapon systems on global security and regional and international stability, including the risk of an emerging arms race [and] lowering the threshold for conflict and proliferation.”

The Arms Control Association shares these concerns about the impact of AWS on international peace and stability. For more than fifty years, the ACA has worked to promote effective measures to reduce nuclear risks through national self-restraint, diplomatic engagement, bilateral and multilateral arms control, nonproliferation, disarmament, and other forms of international regulation.

Notwithstanding the ACA’s primary focus on reducing the dangers posed by nuclear weapons and achieving full nuclear disarmament, we believe that the deployment of autonomous weapons systems and automated battlefield command-and-control (C2) systems poses a significant risk to strategic stability and therefore requires strict regulation and oversight.

We highlight in this submission two specific escalatory concerns: the integration of autonomy with nuclear command, control, and communications (NC3) systems, and the use of conventionally armed AWS to target and destabilize nuclear forces. 

Threats to Strategic Stability Between Nuclear-Armed States

In its influential 2016 report on autonomy, the U.S. Defense Science Board distinguished between two categories of intelligent systems: those employing autonomy at rest and those employing autonomy in motion. “In broad terms,” it stated, “systems incorporating autonomy at rest operate virtually, in software, and include planning and expert advisory systems, whereas systems incorporating autonomy in motion have a presence in the physical world and include robotics and autonomous vehicles.”1

Both categories of intelligent systems pose troubling implications for strategic stability and the risk of nuclear war. Major powers are automating their battlefield C2 systems and equipping them with algorithms for calculating enemy moves and intentions, selecting the optimal countermoves, and dispatching attack orders directly to friendly units for implementation—all with minimal human oversight. Research by a number of analysts suggests that in future conflicts among the major powers, such systems will contribute to and increase the risk of mutually reinforcing escalatory moves, potentially igniting accidental or inadvertent nuclear escalation.2

Although none of the nuclear powers are thought to be extending this type of software to autonomously manage their nuclear forces, many states see the potential for, and are likely already developing, AI algorithms to assist discrete components of their nuclear early warning and launch systems, for example in interpreting possible enemy missile launches.3

It is essential that AI software used to support these applications remain physically disconnected from nuclear launch authority to prevent any possibility of an unintended AI-triggered nuclear exchange. Concern about this possibility reinforces the already strong rationale for nuclear-armed states to move away from nuclear postures that call for prompt retaliatory nuclear counterattack.

Meanwhile, autonomy in motion in the form of conventionally armed AWS, in combination with advanced, AI-enhanced autonomous intelligence and reconnaissance systems, could contribute to accidental or unintended nuclear escalation by creating the impression that an attacker is conducting a disarming counterforce strike, aimed at eliminating or degrading the target state’s nuclear retaliatory capabilities. Crisis instability created by the possibility of disarming conventional strikes against nuclear forces is a long-standing concern, but the introduction of autonomous systems to the problem further exacerbates nuclear dangers. 

Of particular concern is the potential of loitering AWS to reveal the location of elusive nuclear retaliatory forces, such as mobile ICBMs or ballistic missile submarines. Deployed in sufficient numbers, AI-enabled AWS swarms could endanger the nuclear forces that states presently believe to be the most survivable.4 The fear that an AI-controlled AWS swarm could uncover the locations of a nuclear-armed state’s submerged submarines or road-mobile ICBMs could prompt that state to place its weapons on a higher state of alert in a crisis and possibly trigger their unintended or accidental use.

Retaining Human Control

The Arms Control Association strongly adheres to the principle that the decision to use nuclear weapons must always remain the responsibility of a human being, and that such decisions must conform with the Laws of War, particularly International Humanitarian Law, which rules out the employment of nuclear weapons, especially in response to nonnuclear threats. The profound legal, ethical, and humanitarian ramifications of any nuclear weapons employment—potentially extinguishing the lives of millions of people and rendering the planet uninhabitable—demand that humans, and never machines, bear the responsibility and moral culpability for their use.

Starting from this premise, and in recognition of the risks of escalation described above, we also believe that any fully autonomous weapons systems or automated battlefield C2 systems operating outside of continuous human supervision when in combat should be prohibited under binding international law.

In addition, we believe that all other lethal weapons systems featuring autonomy should be regulated in order to ensure compliance with international humanitarian law, including by insisting on human responsibility and accountability.

In response to expert warnings and the United States’ own concerns about the integration of AI in NC3 systems, the Biden administration’s 2022 Nuclear Posture Review asserts that the United States “will maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment.”

We also note the statement by China’s Permanent Representative to the UN, Ambassador Zhang Jun, at the UN Security Council Briefing on Nuclear Disarmament and Non-proliferation on March 18, 2024, in which he declared: “Countries should continue to enhance the safety, reliability, and controllability of AI technology and ensure that relevant weapon systems are under human control at all times.”

While helpful, such assurances and exhortations are insufficient to guard against the significant risks of AI integration in battlefield decision-support systems and especially in NC3 systems involved in nuclear weapons employment. Therefore, we believe that binding legal measures, of the sort described above, are needed to ensure human control over the use of nuclear weapons.

Recommended Actions

In accordance with these basic principles, the Arms Control Association offers these additional recommendations to the Secretary-General and the General Assembly:

1. Mindful that the use or threat of nuclear weapons has been deemed “inadmissible” and contrary to international law and the Treaty on the Prohibition of Nuclear Weapons, the UN General Assembly should call on all nuclear-armed states to commit—either through coordinated action or in a binding agreement—to retain human control over any decision to use nuclear weapons and to insert automated, failsafe “tripwires” in advanced command-and-control systems to disallow action leading to or resulting in nuclear weapons employment or escalation without human approval.

Ideally, the nuclear weapons states should themselves take steps toward creating an international norm that recognizes and affirms this principle by issuing unilateral statements that decisions involving nuclear use will always be reserved for human beings. A more ambitious but more effective measure would be a multilateral statement by the P5 that jointly commits to the same norm.5 The Secretary-General of the United Nations can assist in the creation of this norm by encouraging nuclear weapons states to discuss the topic in multilateral and bilateral formats.

To give effect to a norm reserving nuclear use to human control, the nuclear weapons states should integrate into all deployed C2 systems technical tripwires that prevent escalation to nuclear weapons use without human intervention. Critically, this would also mean ensuring that all AI-enabled C2 systems for conventional military operations are carefully and deliberately prohibited from giving instructions to nuclear weapons systems.

2. The UN General Assembly should call upon all states to commit to retaining uninterrupted human control over any AWS potentially involved in strategic counterforce missions and to exclude such weapons systems from AI-enabled decision-support systems that could assign and authorize counterforce missions without human oversight.

Such commitments are urgently needed because unauthorized, accidental strikes on nuclear forces by loitering autonomous strike systems could give rise to a false warning of an incoming strategic attack. Likewise, unauthorized conventional strikes could be accidentally launched either by AWS originally assigned an observation mission or by a central decision-support AI that issues erroneous commands to an AWS strike force.

To prevent this category of accidental escalation, states should ensure that forces assigned to conventional counterforce missions with strategic implications remain under human control at all times and forgo integration with AI systems altogether. This would also preclude the possibility of a battlefield C2 system programmed to support strategic counterforce missions accidentally gaining authority to launch nuclear weapons.

3. The UN General Assembly should convene an expert body to assess the types and roles of AI algorithms that are used in nuclear command and control systems and the dangers these could pose, and to consider limitations on such algorithms. This body should also evaluate whether there are certain roles within NC3 systems that should never be assigned to algorithms.

Automation of simple tasks within the nuclear chain of command is not new, but the types of AI algorithms that might be adapted to military operations are expanding. AI models have increased in capability and complexity as early machine learning methods have been superseded by deep learning techniques. As AI researchers develop these techniques further, the capabilities of tomorrow’s algorithms may expand significantly.

Given the rapid pace of research into new AI models and the lack of existing norms and understandings between nuclear powers about their application, the United Nations could play a key role in convening experts to track the technical evolution of these models. A multilateral technical effort would supplement unilateral research into risks, creating an ecosystem in which advanced research can be shared among concerned states.

Conclusion

At this juncture, the most pressing priority is the endorsement by the United Nations of basic norms regarding human control over nuclear weapons launch decisions to which all nuclear weapons states can agree. Although there may be little serious opposition to the principle that humans must remain in control of nuclear weapons systems, arriving at a formulation acceptable to the nuclear weapons states will still require deft diplomacy as well as the full-throated support of UN member states.

The UN General Assembly is also poised to play a pivotal role in promoting research on the dangers posed by AWS and AI-enabled battlefield C2 systems to nuclear stability and in devising practical measures to reduce these risks. We therefore urge the Secretary-General and the General Assembly to carefully consider our assessment of the risks posed by autonomous weapons systems to strategic stability and our recommendations for reducing those risks.


1. Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, “Report of the Defense Science Board Summer Study on Autonomy,” June 2016, https://apps.dtic.mil/sti/pdfs/AD1017790.pdf, p. 5.

2. See Eric Schmidt, et al., “Final Report of the National Security Commission on Artificial Intelligence,” March 2021, https://cybercemetery.unt.edu/nscai/20211005220330/https://www.nscai.gov/, and Michael T. Klare, “Assessing the Dangers: Emerging Military Technologies and Nuclear (In)Stability,” Arms Control Association Report, February 2023, https://www.armscontrol.org/sites/default/files/files/Reports/ACA_Report_EmergingTech_digital_0.pdf.

3. Alice Saltini, “AI and Nuclear Command, Control and Communications: P5 Perspectives,” Report, European Leadership Network, November 2023, https://www.europeanleadershipnetwork.org/wp-content/uploads/2023/11/AVC-Final-Report_online-version.pdf, pp. 16-17.

4. James S. Johnson, “Artificial Intelligence: A Threat to Strategic Stability,” Strategic Studies Quarterly, Vol. 14, No. 1 (Spring 2020), pp. 20-22.

5. As suggested by, inter alia, Geneva Centre for Security Policy, “P5 Experts’ Roundtable on Nuclear Risk Reduction – Co-Convenors’ Summary,” December 14, 2023, https://www.gcsp.ch/global-insights/p5-experts-roundtable-nuclear-risk-reduction-co-convenors-summary; Michael Horowitz and Paul Scharre, “AI and International Stability: Risks and Confidence-Building Measures,” Report, Center for a New American Security, January 12, 2021, https://www.cnas.org/publications/reports/ai-and-international-stability-risks-and-confidence-building-measures, p. 20.