Biden Sets AI Rules for National Security

December 2024
By Xiaodon Liang

As he prepares to leave office, U.S. President Joe Biden has issued a policy memorandum on the use of artificial intelligence (AI) for national security purposes and reiterated his administration’s stance that a human should always remain “in the loop” for informing and executing decisions on nuclear weapons use.

U.S. President Joe Biden, seen at an Oval Office briefing on artificial intelligence (AI) in July, recently issued an AI policy memorandum and reaffirmed that humans should always be “in the loop” for decisions on nuclear weapons use. (Official White House photo by Adam Schultz)

The national security memorandum, published Oct. 24, not only addresses the use of AI by executive branch agencies involved in national security but also expands on the outgoing administration’s policy of promoting U.S. research into leading-edge AI models.

“We have to be faster in deploying AI in our national security enterprise than America’s rivals are in theirs. They are in a persistent quest to leapfrog our military and intelligence capabilities,” National Security Advisor Jake Sullivan said in a speech introducing the memorandum at the National Defense University.

The memorandum sets forth immigration, energy, resource-sharing, and rules-setting policies and practices designed to promote U.S. leadership in what the Biden administration calls “safe, secure, and trustworthy AI.”

The memorandum, prepared by the National Security Council and signed by Biden, calls for the publication by the council’s deputies committee of a subsidiary document titled “Framework to Advance AI Governance and Risk Management in National Security.” A first version of this framework was released concurrently with the memorandum.

The framework creates three regulated categories of AI use cases: prohibited, high impact, and those affecting federal personnel. Agencies will be required to adopt minimum risk management practices for high-impact and federal personnel-impacting cases or to seek an annual waiver when complying would “increase risks to privacy, civil liberties, or safety, or would create an unacceptable impediment to critical agency operations or exceptionally grave damage to national security.”

“The policy imposes few substantive safeguards on a wide range of AI-driven activities, by and large allowing agencies to decide for themselves how to mitigate the risks posed by national security systems,” the American Civil Liberties Union warned in an Oct. 24 press release. Patrick Toomey, deputy director of the advocacy organization’s National Security Project, criticized the Biden administration’s approach for lacking “transparency, independent oversight, and built-in mechanisms for individuals to obtain accountability.”

Waivers will not be available for prohibited cases of AI use. The list of prohibitions includes violations of certain civil liberties, as well as a general ban on using AI to “[r]emove a human ‘in the loop’ for actions critical to informing and executing decisions by the President to initiate or terminate nuclear weapons employment.”

The framework requires that oversight and “rigorous testing and assurance” accompany the use of AI in determining collateral damage and estimating casualties before kinetic actions. Intelligence analysis and reports based “solely” on AI outputs also are permitted as long as the reader is provided with sufficient warning.

The Commerce Department will play a significant role through its AI Safety Institute in assessing the national security risks of leading-edge AI models. The institute will serve as the “primary port of call for U.S. AI developers,” a senior administration official told journalists in an Oct. 24 briefing on the memorandum.

The memorandum also addresses the nonproliferation implications of AI by instructing the National Nuclear Security Administration to develop within 120 days, in partnership with the AI Safety Institute and the National Security Agency, “the capability to perform rapid systematic testing of AI models’ capacity to generate or exacerbate nuclear and radiological risks.”

By creating and empowering an AI risk assessment structure within the government, the White House hopes to accelerate AI research by making clear the rules of the road. “Ensuring security and trustworthiness will actually enable us to move faster, not slow us down. Put simply, uncertainty breeds caution,” Sullivan said in his speech.

“We know that China is building its own technological ecosystem with digital infrastructure that won’t protect sensitive data, that can enable mass surveillance and censorship, that can spread misinformation, and that can make countries vulnerable to coercion,” Sullivan said. Nonetheless, the United States should be “willing to engage in dialogue about this technology with [Beijing] and with others to better understand risks and counter misperceptions,” he said.

Biden and Chinese President Xi Jinping renewed their commitment to the human-in-the-loop principle during a Nov. 16 meeting on the sidelines of the Asia-Pacific Economic Cooperation summit in Lima, according to Bloomberg.

During the U.S. presidential campaign, the Republican Party adopted a policy platform calling for the revocation of Executive Order 14110, the October 2023 order that laid out the Biden administration’s AI policies and preceded the recent national security memorandum. Instead, the platform promised to “support AI Development rooted in Free Speech and Human Flourishing.”

Some parts of the Biden framework may survive a partisan transition in the White House. The AI Safety Institute has bipartisan support, and bills granting it a basis in law have passed out of House and Senate committees.