Pentagon Struggles to Exploit Advances in AI
December 2023
By Michael T. Klare
The U.S. Defense Department has announced several initiatives designed to accelerate the military’s adoption of private-sector advances in artificial intelligence (AI) while still adhering to its commitments to the responsible and ethical use of these technologies.
Senior Pentagon officials are keen to exploit recent progress in AI in order to gain a combat advantage over China and Russia, considered the most capable potential U.S. adversaries.
But they recognize that the large language models powering ChatGPT and other generative AI programs have been found to produce false or misleading outputs, termed “hallucinations” by computer experts, that make them unsuitable for battlefield use. Overcoming this technical challenge and allowing for the rapid utilization of the new technologies have become major Pentagon priorities. The Defense Department took one step toward that goal on Nov. 2 with the release of an updated “Data, Analytics, and Artificial Intelligence Adoption Strategy,” which will govern the military’s use of AI and related technology in the years ahead.
Pentagon officials said the strategy, which updates earlier versions from 2018 and 2020, is needed to take advantage of the enormous advances in AI achieved by private firms over the past few years while complying with the department’s stated principles on the safe, ethical use of AI.
“We’ve worked tirelessly for over a decade to be a global leader in the fast and responsible development and use of AI technologies in the military sphere,” Deputy Defense Secretary Kathleen Hicks told a Nov. 2 briefing on the new strategy. Nevertheless, she said, “safety is critical because unsafe systems are ineffective systems.”
Although the new strategy claims to balance the two overarching objectives of speed and safety in utilizing the new technologies, the overwhelming emphasis is on speed. “The latest advancements in data, analytics, and artificial intelligence technologies enable leaders to make better decisions faster, from the boardroom to the battlefield,” the strategy states. “Therefore, accelerating the adoption of these technologies presents an unprecedented opportunity to equip leaders at all levels of the Department with the data they need.”
The emphasis on speed is undergirded by what appears to be an arms-racing mindset. “[China] and other strategic competitors…have widely communicated their intentions to field AI for military advantage,” the strategy asserts. “Accelerating adoption of data, analytics, and AI technologies will enable enduring decision advantage, allowing [Defense Department] leaders to…deploy continuous advancements in technological capabilities to creatively address complex national security challenges in this decisive decade.”
To ensure that the U.S. military will continue to lead China and other competitors in applying AI to warfare, the updated strategy calls for the decentralization of AI product acquisition and utilization by defense agencies and the military services. Rather than being made by a central office in the Pentagon, decisions regarding the procurement of AI software can now be made by designated officials at the command or agency level, as long as these officials abide by safety and ethical guidelines now being developed by a new Pentagon group called Task Force Lima.
Such decentralization will accelerate the military’s utilization of commercial advances in AI by allowing for local initiative and reducing the risk of bureaucratic inertia at the top, explained the Pentagon’s chief digital and AI officer, Craig Martell, at the Nov. 2 press briefing.
“Our view now,” he said, is to “let any component use whichever [AI program] pipeline they need, as long as they’re abiding by the patterns of behavior that we need them to abide by.”
But some senior Pentagon officials acknowledge that decentralization on this scale will diminish their ability to ensure that products acquired for military use meet the department’s standards for safety and ethics.
“Candidly, most commercially available systems enabled by large language models aren’t yet technically mature enough to comply with our ethical AI principles, which is required for responsible operational use,” Hicks said. But she insisted that they could be made compliant over time through rigorous testing, examination, and oversight.
Overall responsibility for ensuring compliance with the department’s safety and ethical standards has been assigned to Task Force Lima, a team of some 400 specialists working under Martell’s supervision.
The task force was established to “develop, evaluate, recommend, and monitor the implementation of generative AI technologies across [the Defense Department] to ensure the department is able to design, deploy, and use generative AI technologies responsibly and securely,” Hicks said on Aug. 2 when announcing its launch.
As she and other senior officials explained, the task force’s primary initial mission will be to formulate the guidelines within which the various military commands can employ commercial AI tools.
Navy Capt. Manuel Xavier Lugo, the task force commander, said the project will examine various generative AI models “in order for us to find the actual areas of [potential] employment of the technology so that we can go ahead and then start writing specific frameworks and guardrails for those particular areas of employment.”