December 2023
Nuclear Deterrence: Unsafe at Machine Speed
AI and the Bomb: Nuclear Strategy and Risk in the Digital Age
By James Johnson
Oxford University Press
May 2023
James Johnson’s book is the most important work on preventing nuclear war to be published in recent years. The author confronts head-on the complexity of the dangers that artificial intelligence (AI) and other emerging technologies pose for nuclear deterrence. He combines a commanding view of deterrence theory with the imagination to point toward where technology is already obscuring deterrence practice, and he concludes darkly that, “[i]n the context of AI and autonomy, particularly information complexity, misinformation, and manipulation, rationality-based deterrence logic appears an increasingly untenable proposition.”
AI and the Bomb opens with a gripping account of a “flash war” between China and the United States, taking place over less than two hours in June 2025, in which nuclear weapons are used, millions of people die, and afterward, no one on either side can explain exactly what happened. This story underscores that, even if not given control of nuclear weapons, AI and emerging technologies connected to adjacent or seemingly unrelated systems may combine in unforeseen ways to render nuclear escalation incomprehensible to the humans in (or on) the loop.
Johnson’s book is the first comprehensive effort to understand the implications of the AI revolution for the Cold War notion of “strategic stability” at the core of nuclear deterrence. He finds new challenges for deterrence theory and practice in emerging technologies, centering inadvertent escalation as a “new model for nuclear risk.” He formulates a novel “AI-security dilemma” that is more volatile and unpredictable than in the past. He also adds a new dimension of “catalytic nuclear war,” by which states without nuclear weapons or nonstate actors might use AI to cause nuclear war among nuclear-armed states.
Artificial Intelligence, Emerging Technology, and Deterrence Theory
The author embraces and extends the emerging conventional wisdom that AI should not be plugged into nuclear command-and-control systems, observing that “the delegation of the decision-making process (to inform and make decisions) to machines is not a binary choice but rather a continuum between the two extremes—human decision-making and judgment and machine autonomy at each stage of the kill chain.” Beyond using AI to facilitate nuclear launch decisions, Johnson shows how AI could affect the nuclear balance by changing nuclear weapons system accuracy, resilience and survivability, and intelligence, surveillance, and reconnaissance for targeting. AI also may give conventional weapons systems dramatic new capabilities to attack nuclear weapons systems through an increased ability to penetrate air defenses; an increased ability to “detect, track, target, and intercept” nuclear missiles; and advanced cybercapabilities, potentially including manipulation of “the information ecosystem in which strategic decisions involving nuclear weapons take place.”
Importantly, Johnson uses AI as shorthand for AI and a suite of other emerging technologies that enable it, including “cyberspace, space technology, nuclear technologies, GPS technology, and 3D printing.” This choice mirrors the practice of other thought leaders, including Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher in The Age of AI and Mustafa Suleyman in The Coming Wave.
The book is a grim journey for scholars of nuclear deterrence theory, forcing them to confront concepts such as “machine-speed AI warfare,” “non-human agents,” nuclear arsenals with a “larger attack surface” in a world in which ubiquitous sensors feed data oceans, and “disinformation cascades” that could lead to an “unravelling of deterrence in practice.” These ominous signs begin to flesh out the broad concerns about nuclear strategy that Kissinger, Schmidt, and Huttenlocher raise, including that “[t]he management of nuclear weapons, the endeavor of half a century, remains incomplete and fragmentary” and that the “unsolved riddles of nuclear strategy must be given new attention.”
Johnson centers Barry Posen’s concept of “[i]nadvertent escalation” as “a new model for nuclear risk.” He finds that “AI-enhanced systems operating at higher speeds, levels of sophistication, and compressed decision-making timeframes will likely…reduce the scope for de-escalating situations and contribute to future mishaps.” He observes that AI undermines the utility of Herman Kahn’s familiar “escalation ladder” metaphor: “AI is creating new ways (or ‘rungs’) and potential shortcuts up (and down) the ladder, which might create new mechanisms for a state to perceive (or misperceive) others to be on a different rung, thus making some ‘rungs’ more (or less) fluid or malleable.” Instead of a discrete escalation ladder, Johnson helps the reader envision any number of misperceptions, miscommunications, accidents, and errors interacting with one another across distances, failure modes, and time scales beyond effective human cognition.
‘The AI Security Dilemma’
The book arrives at a moment of urgent, real-world demand for updated nuclear deterrence theory. Last year, Admiral Charles Richard, then the commander of U.S. Strategic Command, told the annual Space and Missile Defense Symposium in Huntsville, Alabama, that his command was “furiously” rewriting deterrence theory to solve a “three body problem” resulting from China’s emergence as a near-peer nuclear arms competitor to the United States and Russia.1
Johnson carefully examines the specific challenges that AI poses for nuclear deterrence theory. He identifies three ways that AI and other emerging technologies have become a singular aggravator of the security dilemma, the enduring challenge at the heart of international relations by which one state’s development of ostensibly defensive capabilities is perceived as threatening by others.
First, the AI security dilemma features the possibility of extraordinarily fast technological breakthroughs, incentivizing states competing with peers in AI technology to move first rather than risk being second. For example, the U.S. National Security Commission on Artificial Intelligence found that “defending against AI-capable adversaries [notably China] operating at machine speeds without employing AI is an invitation to disaster.”
Second, the AI security dilemma risks placing latent offensive capabilities in civilian hands, such as the massive data flows generated by communication and navigation satellites. Whereas the traditional security dilemma is driven primarily by the misinterpretation of defensive military capabilities, the AI security dilemma also can be driven by the misinterpretation of ostensibly peaceful commercial capabilities.
Third, the AI security dilemma is driven by commercial and market forces not under the positive control of states. Whereas the traditional security dilemma causes states to fear each other’s actions, the AI security dilemma drives states increasingly to fear the actions of private firms. Taken together, these three novel characteristics of potentially explosive technological breakthroughs, ambiguous commercial capabilities, and the absence of positive control over commercial capabilities lead Johnson to conclude that AI is “a dilemma aggravator primus inter pares.”
AI extends the problem of nuclear deterrence stability beyond the nuclear-armed states to all states or other actors with offensive AI capabilities. During the Cold War, nuclear proliferation threatened a possible future world with too many nuclear-armed states for confidence in stable nuclear deterrence. Fortunately, proliferation has remained limited enough that nuclear-armed states can be grouped, however awkwardly, into various dyads whose mutual threats render nuclear deterrence practice more or less comprehensible, stable, and aligned with the theory’s necessary assumptions. Johnson worries that offensive AI capabilities may introduce additional variables into the nuclear escalation equation. Even without the further spread of nuclear weapons, states or other actors could use AI to leverage the deterrent arsenals of nuclear-armed states through “catalytic war.” As the author writes, “The catalyzing chain of reaction and counter-retaliation dynamics set in motion by nonstate or third-party actor’s deliberate action is fast becoming a more plausible scenario in the digital era.”
Beyond Rational Nuclear Deterrence?
The book demonstrates repeatedly how revolutionary change in the technological terrain in which nuclear deterrence takes place demands urgent theoretical and practical adaptation. Old assumptions and human rationality may rapidly lose effectiveness as tools for preventing nuclear war.
Johnson offers some initial ideas of how to manage the stark challenges that AI poses for nuclear deterrence. Arms control will remain important, if challenging, in new ways; he suggests that banning AI enhancements to nuclear deterrence capabilities might be an important first step.
Another early step that could align with Johnson’s insight might be to work toward the internationalization of processes modeled on the U.S. nuclear “failsafe review” mandated by Congress in the 2022 National Defense Authorization Act and now underway at the Department of Defense. The failsafe review “aims to identify nuclear risk-reduction measures that the [United States] could implement to strengthen safeguards against the unauthorized, inadvertent, or mistaken use of a nuclear weapon, including through false warning of an attack.” Since early 2020, Sam Nunn and Ernest J. Moniz, co-chairs of the Nuclear Threat Initiative (where Moniz also serves as CEO), have championed the effort to encourage the U.S. government to undertake such a review aimed at strengthening nuclear failsafe and to challenge other nuclear powers to conduct their own internal reviews.2
Johnson recommends applying AI as part of the solution to support nuclear risk reduction, including through “normative, behavioral and confidence building measures to increase mutual trust.” Dangers created or accelerated by AI might also be mitigated or better managed through adjustments to legacy nuclear deterrence force structures and practices, which took shape in an era when patterns of daily life and the massive “data exhaust” of people and systems constituted less of a vulnerability.
The author also recommends bilateral and multilateral dialogue on strategic stability with an expanded range of stakeholders, urging that “partnerships should be forged between commercial AI developers and researchers to explore risk reduction measures in the nuclear enterprise.” AI-enabled capabilities make more states and even nonstate actors immediately relevant to strategic stability. Multinational corporations and leading innovators increasingly own capabilities and data that may be implicated in nuclear deterrence.
Elon Musk’s change to Starlink operations in apparent response to a nuclear threat from Russian President Vladimir Putin earlier this year is a clear signal that the potential exposure of nuclear deterrence to the commercial sector should no longer be ignored. Observing that “it is inevitable that AI is going to be used for things that touch nuclear weapons,” Jill Hruby, the administrator of the U.S. National Nuclear Security Administration, recently imagined a path forward, a future in which “you’re almost going to need AI systems battling each other to do the verification.”3 If the world wants to prevent a future in which algorithms fight nuclear war, leaders must act and invest now in algorithms to prevent nuclear war.
Ultimately, Johnson expects that “AI technology in the nuclear domain will likely be a double-edged sword: strengthening the [nuclear command-and-control] systems while expanding the pathways and tools available to adversaries to conduct cyberattacks and electronic warfare operations against these systems.” He encourages policymakers to act “before the pace of technological change outpaces (or surpasses) strategic affairs.”
Johnson concludes his book with a quote from machine learning pioneer Alan Turing: “We can only see a short distance ahead, but we can see plenty there that needs to be done.” AI and the Bomb is a must-read for those seeking to understand the first signals of the revolutionary change that AI is bringing to the challenge of preventing nuclear war. It sends a clear warning that the world does not yet know how to manage the effects of AI on nuclear deterrence and that, without significant and urgent effort, it is likely to fall short.
ENDNOTES
1. Theresa Hitchens, “The Nuclear 3 Body Problem: STRATCOM ‘Furiously’ Rewriting Deterrence Theory in Tripolar World,” Breaking Defense, August 11, 2022, https://breakingdefense.com/2022/08/the-nuclear-3-body-problem-stratcom-furiously-rewriting-deterrence-theory-in-tri-polar-world/.
2. Nuclear Threat Initiative, “The Failsafe Review,” January 25, 2023, https://www.nti.org/analysis/articles/the-failsafe-review/.
3. Jill Hruby, Remarks to the Nuclear Threat Initiative Board of Directors, Washington, October 24, 2023.