
A Strategy for Reducing the Escalatory Dangers of Emerging Technologies


December 2020
By Michael T. Klare

Throughout human history, military forces have sought to exploit innovations in science and technology to achieve success on the battlefield, very often fielding new technologies before societies could weigh the risks of doing so and impose controls on their use.

U.S. Air Force Chief of Staff Gen. David Goldfein speaks to the Air Force Association's Air, Space and Cyber Conference in September 2019. As great-power competition in cyberwarfare pushes the technology forward, there are risks that potential escalatory consequences are being ignored. (Photo: Wayne Clark/U.S. Air Force)

During World War I, for example, Germany exploited advances in chemical production to develop asphyxiating gases for military use, provoking widespread public outrage and prompting postwar efforts to ban such munitions. During World War II, the United States exploited advances in nuclear science to create the atomic weapons used against Japan, again generating public outrage and prompting postwar control efforts. Today, rapid advances in a range of scientific and technological fields—artificial intelligence (AI), robotics, cyberspace, remote sensing, and microelectronics—are again being exploited for military use. And, as before, control efforts are lagging far behind the process of weaponization.

As in those historical examples, the current lack of progress in devising control measures reflects the challenge of grappling with new and unfamiliar technologies. Some of these innovations, such as AI and cyberoperations, are said to pose a particularly severe challenge to arms control because they cannot be as easily quantified and monitored as other weapons limited by arms control agreements, such as intercontinental ballistic missiles (ICBMs). However, this deficiency also represents a failure to grasp the unique ways in which the weaponization of cutting-edge technologies can imperil international peace and stability. To avoid a new round of global catastrophes, it is essential to identify the distinctive risks posed by the military use of destabilizing technologies and overcome the obstacles to their effective control.

Before considering such measures, it is necessary to examine the geopolitical and strategic setting in which the weaponization of these technologies is taking place, as well as the novel ways in which this process endangers international stability.

The Pursuit of Technological Superiority

The level of risk associated with the military exploitation of cutting-edge technologies cannot be separated from the geopolitical context in which this process is occurring, given that the principal enablers of such weaponization—China, Russia, and the United States—perceive themselves to be engaged in a competitive struggle for military advantage at a time when war among them is deemed entirely possible. Under these conditions, all three countries are enhancing their capacity for what the Pentagon calls “high end” warfare, or all-out combat among the modern, well-equipped forces of their adversaries—combat that is expected to make use of every advance in military technology.

The U.S. military leadership first described this evolving environment in its National Defense Strategy of February 2018. “We face an ever more lethal and disruptive battlefield, combined across domains, and conducted at increasing speed and reach,” it stated. “The security environment is also affected by rapid technological advancements and the changing character of war. The drive to develop new technologies is relentless…and moving at accelerating speed.”1

If the United States is to retain its technological edge and prevail in future wars, the leadership asserted, it must master these new technologies and incorporate them into its major military systems.

A very similar outlook regarding the strategic environment is embedded in Chinese and Russian military doctrines. In language strikingly similar to that of the U.S. strategy, but in mirror image, China’s July 2019 white paper on national defense asserts that the United States “has provoked and intensified competition among major countries, significantly increased its defense expenditure, pushed for additional capacity in nuclear, outer space, cyber, and missile defense, and undermined global strategic stability.” If Chinese forces are to prevail in this environment, it states, “greater efforts have to be invested in military modernization to meet national security demands.”2 Russian doctrine makes similar claims and places equal emphasis on the utilization of emerging technologies to ensure success on the battlefield.3

The modernization and enhancement of front-line conventional forces are common themes in the military doctrines of all three countries, but so also is the modernization of strategic nuclear forces. All three are engaged in costly upgrades to their nuclear delivery systems, in some cases involving the replacement of older ICBMs, bombers, and missile-carrying nuclear submarines with newer, more capable versions. More worrisome still, all three are developing nuclear warheads for use in nonstrategic scenarios, for example, to defeat an overpowering conventional assault by an adversary. This is an explicit goal of the Nuclear Posture Review adopted by the Trump administration in February 20184 and is believed to figure in Russian military doctrine. China is less transparent about its nuclear weapons policies, but is known to have developed nuclear warheads for its medium- and intermediate-range ballistic missiles designed for use against U.S. and allied forces in the Asia-Pacific region.

The Eroding Nuclear Firebreak

In light of these developments, many analysts believe that the barriers to nuclear weapons use have been substantially eroded in recent years. Most of these obstacles were erected during the Cold War era, when leaders of the United States and the Soviet Union came to realize that any nuclear conflict between them would result in their mutual annihilation, impelling them to devise a variety of measures intended to prevent a conventional war from escalating across the “firebreak” separating non-nuclear from nuclear combat. These measures included the “hotline” agreement of 1963; successive limitations on the size of each other’s nuclear arsenals, beginning with the Strategic Arms Limitation Talks agreement in 1972; and the Intermediate-Range Nuclear Forces Treaty of 1987. In the language of the time, these measures were designed to preserve “strategic stability” by eliminating the risk of accidental, inadvertent, or unintended escalation across the nuclear firebreak.

In today’s strategic environment, however, analysts fear that strategic stability is being undermined by changes in the nuclear doctrines of the major powers and by the introduction of increasingly capable non-nuclear weapons. These developments include, on the one hand, the adoption of policies envisioning the use of “tactical” or “nonstrategic” nuclear arms in response to overwhelming non-nuclear attack by an adversary, and, on the other, the deployment of sophisticated cyber and conventional weapons thought capable of locating and destroying an adversary’s nuclear combat capabilities, especially its nuclear command, control, communications, and intelligence (C3I) systems. Also contributing to this environment of instability, analysts warn, is the dissolution of the arms control regime established by the two superpowers during the Cold War era and the emergence of India and Pakistan as major nuclear weapons powers.5

None of these countries would deliberately choose to initiate a nuclear exchange, recognizing that the costs in homeland devastation would be catastrophic. Yet, they have adopted military doctrines that emphasize non-nuclear attacks on their adversary’s critical military assets—radars, missile batteries, command centers, and so on—at the very onset of a conflict. In most cases, these assets are primarily intended for conventional operations, but some also house nuclear C3I facilities or perform dual-use functions, both conventional and nuclear—a situation described by James M. Acton as “entanglement.” If these dual-use or co-located facilities come under attack, the target state might conclude this was the prelude to a nuclear strike and decide to launch its own nuclear munitions before they could be destroyed by its adversary’s incoming weapons. “Entanglement,” says Acton, “could lead to escalation because both sides in a U.S.-Chinese or U.S.-Russian conflict could have strong incentives to attack the adversary’s dual-use C3I capabilities to undermine its non-nuclear operations.”6

With all these countries fielding ever more capable conventional weapons and embracing nuclear policies that authorize the use of nuclear weapons in response to severe non-nuclear threats, the risk of such scenarios is bound to increase under any circumstances. Worse still, these dangers are being further amplified by the utilization of emerging technologies for military use. Such technologies pose an added threat to the durability of the nuclear firebreak by multiplying the types of non-nuclear attacks that can be launched on critical enemy assets and by increasing the vulnerability of nuclear C3I systems to non-nuclear attack.

The Risk of Nuclear Escalation

The pathways by which militarized emerging technologies could increase the risk of nuclear escalation can be grouped into four areas.

First, increasingly capable air and naval autonomous weapons systems equipped with advanced sensors and AI processors could be deployed in self-directed “swarms” to find and destroy key enemy assets, such as surface ships and submarines, air defense radars, anti-air and anti-ship missiles, and major C3I facilities. To an adversary, such attacks could be interpreted as the prelude to a nuclear first strike, especially if they result in the destruction of nuclear C3I systems co-located with non-nuclear C3I facilities, prompting it to launch its own nuclear weapons for fear of losing them to enemy weapons.7

A Russian MiG-31 aircraft carries a Kinzhal hypersonic missile over Moscow's Victory Day parade in 2018. High-speed weapons like this, capable of carrying conventional or nuclear warheads, risk escalating conflicts as decision-makers have little time to assess an ambiguous threat. (Photo: Kremlin.ru)

Second, multiple strikes by hypersonic missiles could be used early in a conflict to destroy key enemy assets like those described above, again causing the target state to fear that a nuclear strike is imminent and prompting it to launch its own nuclear arms. This danger is multiplied by the fact that the flight time of hypersonic missiles is extremely brief and that many of the weapons now being developed by the major powers are designed to carry either a nuclear or a conventional warhead, leaving a target country in doubt as to an attacker’s ultimate intentions, especially if key C3I facilities are degraded, preventing senior leaders from knowing the nature of the attack and inclining them to assume the worst.8

Third, just before or at the very onset of a conflict, a belligerent could launch a cyberattack on its adversary’s early-warning and C3I systems, hoping thereby to degrade that country’s ability to resist a full-scale assault by conventional forces. Because many of these systems are also used to warn of a nuclear attack and to communicate with nuclear as well as conventional forces, the target country’s leaders might conclude they are facing an imminent nuclear attack and order the immediate launch of their own nuclear weapons.9

Fourth, as the speed and complexity of warfare increase, the major powers are coming to rely ever more heavily on AI-empowered machines to sort through sensor data on enemy movements, calculate enemy intentions, and select optimal responses. This increases the danger that humans will cede key combat decision-making tasks to machines that lack the capacity to gauge social and political context in their calculations and are vulnerable to hacking, spoofing, and other failures, possibly leading them to propose extreme military responses to ambiguous signals and thereby cause inadvertent escalation. With machines controlling the action on both sides, this danger can only grow worse.10

These are some of the major pathways to escalation that are being created by the weaponization of emerging technologies, but other pathways of a similar nature have been identified in the academic literature and are likely to arise as these technologies are pressed into military service.11

How to Control Destabilizing Technologies

Until now, efforts to control the military use of emerging technologies have largely focused on three aspects of the problem: eliminating the danger that autonomous weapons systems will prove incapable of distinguishing between combatants and noncombatants in contested urban areas, leading to unnecessary harm to the latter; ensuring that cyberspace is not used for attacks on critical military and civilian infrastructure; and guaranteeing the reliability, safety, and unbiased nature of AI-empowered military systems.

These endeavors, each valuable in its own way, have resulted in some important if modest gains. Efforts to curb the deployment of autonomous weapons systems, also called “killer robots,” have yet to result in the adoption of a legally binding international ban on such munitions under the auspices of the Convention on Certain Conventional Weapons (CCW); however, some two dozen states are now calling for negotiations leading to such a ban outside of the CCW framework.12 UN discussions on rules governing cyberspace have produced agreement on certain bedrock principles of noninterference in a state’s critical cyberinfrastructure, but no binding obligations.13 Finally, concerns over the military use of AI have spurred the U.S. Department of Defense to adopt a set of principles for the ethical and responsible use of AI-empowered systems,14 but other countries have yet to follow suit, and it is unclear how the Pentagon principles will be implemented.

None of these measures, valuable as they are, addresses the additive role of emerging technologies in increasing the risk of nuclear escalation. If this, the most critical aspect of the military-technological revolution, is to be brought under effective international control, a more targeted set of measures will be required. These must focus specifically on those applications of emerging technologies that increase the risk that a conventional conflict results in the accidental or unintended use of nuclear weapons by one side or another.

A focused strategy must span a variety of technologies and involve many components, so it cannot be encompassed in a single agreement. Rather, what is needed is a framework strategy aimed at restricting the military use of those technologies deemed most threatening to strategic stability. Recognizing that implementing all the components of such a strategy will prove difficult in the current political environment, the framework must envision a succession of steps aimed at imposing increasingly specific and meaningful restrictions on destabilizing technologies. Drawing on the toolbox of measures developed by arms control practitioners over decades of experience and experimentation, as well as proposals advanced by other experts in the field,15 such a strategy should be composed of the following elements, in an approximate order of implementation.

Awareness-Building. This would include efforts to highlight the additive risks to nuclear stability posed by the weaponization of emerging technologies. Some important research has already been conducted on these dangers, but more work is needed to identify the escalatory risks inherent in the weaponization of emerging technologies and to make the results widely known.

Additional effort is needed to bring these findings to the attention of policymakers. An important start to such endeavors was made by the German Foreign Ministry in November 2020 with its virtual conference titled “Capturing Technology, Rethinking Arms Control.” At the conclusion of this event, the foreign ministers of the Czech Republic, Finland, Germany, the Netherlands, and Sweden issued a joint proclamation expressing their concern over the “mounting risks for international peace and stability created by the potential misuse of new technologies.”16 More such events, involving a wider spectrum of nations, would help raise awareness of these dangers. In the United States, for example, Congress should be encouraged to hold hearings on the destabilizing impacts of certain emerging technologies.

Track 2 and Track 1.5 Diplomacy. Government officials from China, Russia, and the United States are barely speaking to each other about strategic nuclear matters, let alone about the dangers posed by the weaponization of emerging technologies. In the absence of such official discourse, it is imperative that scientists, technicians, and arms control experts from these countries meet in neutral settings to assess the additive risks to nuclear stability posed by the weaponization of these technologies and to devise practical measures for their regulation and control. Building on the experience of the Pugwash organization in assembling arms control experts from many nations, such meetings could, for example, evaluate measures for controlling or limiting the deployment of hypersonic missiles or the use of cyberspace for attacks on enemy C3I systems.

Ideally, such Track 2 (nongovernmental) consultations can be followed by Track 1.5 meetings, in which government advisers and former government officials also participate, lending them greater authority and helping to ensure that any proposals developed at such gatherings will be given consideration at higher levels and form the basis for future formal arrangements.

Strategic Stability Talks. Before governments can even begin to consider formal arrangements to curb the deployment of destabilizing technologies, senior officials must become more familiar with the nature of these technologies and the significant risks they pose; even more essential, officials on all sides must come to understand how their adversaries view these risks. The best way to do this, many experts agree, is to convene a series of “strategic stability talks,” composed of government officials, military officers, and technical experts, who together can build on the work begun under Track 2 and 1.5 diplomacy by further assessing the dangers posed by the weaponization of destabilizing technologies and devising measures to restrict or control the technologies in question.

Some preliminary efforts of this sort have occurred under the auspices of the strategic security dialogue conducted by U.S. and Russian officials in recent years, albeit without achieving any concrete results.17 With a new, more arms control-friendly administration about to take office in Washington, one can hope that these talks will resume in a more serious and productive atmosphere, resulting in a thorough discussion of the mutual risks posed by the weaponization of emerging technologies and leading over time to concrete proposals for their regulation and control. Proposals have also been made to expand these bilateral talks to include Chinese participants or to organize a separate strategic security dialogue between the United States and China. Hopefully, this too can now be undertaken.

Unilateral Measures. Given the current state of international affairs, it could prove difficult for the United States and Russia, the United States and China, or all three to agree on formal measures for the control of especially destabilizing technologies. Yet, it may be possible for these states to adopt unilateral measures in the hope that they will induce parallel steps by their adversaries and eventually lead to binding bilateral and multilateral agreements. Experts in the field have suggested several areas where this would be desirable and practical. In the cyberspace realm, for example, Acton has called on governments to adopt a “risk-averse” policy under which they insert barriers against the inadvertent or precipitous initiation of attacks on an enemy’s nuclear C3I.18 A similar approach could be extended to AI-empowered command decision-support systems to limit the risk of mutual interference and inadvertent escalation.

Greater effort could also be made by the Pentagon and the military organizations of other states to adopt, refine, and enforce guidelines for the safe and ethical utilization of AI for military purposes. The Defense Department took an important first step in this direction with its February 2020 announcement of a set of principles governing the military use of AI, but additional measures are needed to ensure that these principles are fully implemented. The militaries of other countries should also adopt principles of this sort and ensure full compliance with them. As part of these endeavors, military contractors must be made aware of their obligation to comply with such principles and military officers will have to be trained to use AI-empowered weapons in a safe and ethical manner.

Bilateral and Multilateral Arrangements. Once the leaders of the major powers come to appreciate the escalatory risks posed by the weaponization of emerging technologies, it may be possible for them to reach accord on bilateral and multilateral arrangements intended to minimize these risks. Such accords could begin with nonbinding agreements of various sorts and, as trust grows, could be followed by binding treaties and arrangements. To help build trust, moreover, the major powers could engage in confidence-building measures of various sorts, such as exchanges of information on ethical standards and protocols for delegating decision-making authority to machines.19

As an example of a useful first step, the leaders of the major nuclear powers could jointly pledge to eschew cyberattacks against each other’s nuclear C3I systems. “While such an agreement would not be verifiable in the traditional sense,” says Acton, it would be “enforceable” in that each state would possess the ability to detect and retaliate against such an intrusion.20 Some analysts have also proposed that states agree to abide by a code of conduct governing the military use of AI, incorporating many of the principles contained in the Defense Department’s roster of principles. In particular, such measures should require that humans retain ultimate control over all instruments of war, including autonomous weapons systems and computer-assisted combat decision-support devices.21

If the major powers are prepared to discuss binding restrictions on the military use of destabilizing technologies, certain measures should take priority. The first would be an agreement or agreements prohibiting attacks on the nuclear C3I systems of another state by cyberspace means or via missile strikes, especially hypersonic strikes. Another top priority would be measures aimed at preventing swarm attacks by autonomous weapons on another state’s missile submarines, mobile ICBMs, and other second-strike retaliatory systems. Strict limitations should be imposed on the use of automated decision-support systems with the capacity to inform or initiate major battlefield decisions, including a requirement that humans exercise ultimate control over such devices. In negotiations for these agreements, progress made in earlier stages of this progression, from Track 2 and 1.5 diplomacy to strategic stability talks and nonbinding measures, will allow policymakers to devise practical agreements to achieve these ends.

Without the adoption of measures such as these, cutting-edge technologies will be converted into military systems at an ever-increasing tempo, and the dangers to world security will grow apace. These perils are inseparable from the larger context of mutual antagonisms and arms racing among the major powers: the weaponization of emerging technologies is being rushed because these states seek every possible advantage in any war that might arise among them, and only a relaxation in these great-power tensions will make it possible to address the full spectrum of nuclear dangers. A more thorough understanding of the distinctive threats to strategic stability posed by certain destabilizing technologies and the imposition of restraints on their military use would go a long way toward reducing the risks of Armageddon.


ENDNOTES

1. “Summary of the 2018 National Defense Strategy of the United States,” U.S. Department of Defense, n.d., https://dod.defense.gov/Portals/1/Documents/pubs/2018-National-Defense-Strategy-Summary.pdf (emphasis in original).

2. State Council Information Office of the People’s Republic of China, “China’s National Defense in the New Era,” July 2019, http://english.www.gov.cn/atts/stream/files/5d3943eec6d0a15c923d2036.

3. See Roger McDermott, “Russia’s Military Scientists and Future Warfare,” Eurasia Daily Monitor, June 5, 2019, https://jamestown.org/program/russias-military-scientists-and-future-warfare/.

4. U.S. Department of Defense, “Nuclear Posture Review 2018,” February 2018, https://media.defense.gov/2018/Feb/02/2001872886/-1/-1/1/2018-NUCLEAR-POSTURE-REVIEW-FINAL-REPORT.PDF.

5. See Steven E. Miller, “A Nuclear World Transformed: The Rise of Multilateral Disorder,” Dædalus, vol. 149, no. 2 (Spring 2020), pp. 17–36.

6. See James M. Acton, “Escalation Through Entanglement,” International Security, vol. 43, no. 1 (Summer 2018), pp. 56–99.

7. Michael T. Klare, “Autonomous Weapons Systems and the Laws of War,” Arms Control Today, March 2019, pp. 6–12.

8. Michael T. Klare, “An ‘Arms Race in Speed’: Hypersonic Weapons and the Changing Calculus of Battle,” Arms Control Today, June 2019, pp. 6–13.

9. Michael T. Klare, “Cyber Battles, Nuclear Outcomes? Dangerous New Pathways to Escalation,” Arms Control Today, November 2019, pp. 6–13.

10. Michael T. Klare, “‘Skynet’ Revisited: The Dangerous Allure of Nuclear Command Automation,” Arms Control Today, April 2020, pp. 10–15.

11. See, for example, Christopher F. Chyba, “New Technologies & Strategic Stability,” Dædalus, vol. 149, no. 2 (Spring 2020), pp. 150–70.

12. See Human Rights Watch (HRW), “Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control,” August 10, 2020, https://www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-autonomous-weapons-and.

13. See UN General Assembly (UNGA) Report A/70/174, Report of the Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security, July 22, 2015, which was adopted by the full UNGA in Resolution 70/237 of December 23, 2015, Developments in the Field of Information and Telecommunications in the Context of International Security.

14. U.S. Department of Defense, “DOD Adopts Ethical Principles for Artificial Intelligence,” press release, February 24, 2020, https://www.defense.gov/Newsroom/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/.

15. For a review of the arms control “toolbox” and other proposals for controlling destabilizing technologies, see Jon Brook Wolfsthal, “Why Arms Control?” Dædalus, vol. 149, no. 2 (Spring 2020), pp. 101–15. See also Giacomo Persi Paoli, Kerstin Vignard, David Danks, and Paul Meyer, Modernizing Arms Control: Exploring Responses to the Use of AI in Military Decision-Making (Geneva, Switzerland: UN Institute for Disarmament Research, 2020).

16. “Minister’s Declaration at the Occasion of the Conference ‘Capturing Technology, Rethinking Arms Control,’” November 6, 2020, https://rethinkingarmscontrol.de/wp-content/uploads/2020/11/Ministerial-Declaration-RAC2020.pdf.

17. Kingston Reif and Shannon Bugos, “No Progress Toward Extending New START,” Arms Control Today, July/August 2020, pp. 31–32.

18. James M. Acton, “Cyber Warfare & Inadvertent Escalation,” Dædalus, vol. 149, no. 2 (Spring 2020), pp. 143–44.

19. See Michael C. Horowitz, Lauren Kahn, and Casey Mahoney, “The Future of Military Applications of Artificial Intelligence: A Role for Confidence-Building Measures?” Orbis, Fall 2020, pp. 527–43.

20. Acton, “Cyber Warfare & Inadvertent Escalation,” p. 145.

21. See, for example, Vincent Boulanin, Kolja Brockmann, and Luke Richards, Responsible Artificial Intelligence Research and Innovation for International Peace and Security (Stockholm: Stockholm International Peace Research Institute, 2020).


Michael T. Klare is a professor emeritus of peace and world security studies at Hampshire College and senior visiting fellow at the Arms Control Association. This article follows his four-part “Arms Control Tomorrow” series published in Arms Control Today in 2019 and 2020.