Biden Issues Executive Order on AI Safety

December 2023
By Michael T. Klare

Responding to growing public anxiety over the potential dangers posed by the expanding use of artificial intelligence (AI), President Joe Biden issued an executive order on Oct. 30 intended to ensure the “safe, secure, and trustworthy” application of the powerful technology.

With Vice President Kamala Harris looking on, U.S. President Joe Biden signs an executive order on advancing the safe, secure, and trustworthy development and use of artificial intelligence at the White House on October 30. (Photo by Brendan Smialowski/AFP via Getty Images)

The order followed the public release of ChatGPT and other generative AI programs that can create text, images, and computer code comparable to that produced by humans. On occasion, these programs have suffused those materials with false and fabricated content, provoking widespread unease about their safety and reliability.

Other AI-enabled products used to identify possible criminal suspects have also been shown to produce inaccurate results, raising concerns about racial and gender biases introduced when the systems were being “trained” by computer technicians.

To allay such anxieties, the executive order mandates a wide variety of measures intended to bolster governmental oversight of the computer technology industry and to better protect workers, consumers, and minority groups against the misuse of AI. Most of these measures apply to domestic industries and institutions, but some have a significant bearing on national security and arms control.

One of the order’s most consequential measures requires major tech firms such as Google, Microsoft, and OpenAI to notify the federal government when developing any “foundation model” (a complex AI program such as the one powering ChatGPT) “that poses a serious risk to national security, national economic security, or national public health.” The firms must also share the results of all “red team” tests they conduct, exercises designed to probe newly developed AI products for hidden flaws or weaknesses.

Although the Oct. 30 order does not empower the government to block the commercialization of programs found to be deeply flawed in these tests, the disclosure of such flaws might deter major institutional clients, including the U.S. Defense Department, from procuring those products, thereby prompting industry to place greater emphasis on safety and reliability.

Along similar lines, the order calls on the National Institute of Standards and Technology to establish rigorous standards for red-team testing of major AI programs before their release to the public. Compliance is not obligatory, but such standards are likely to be widely adopted within the industry. The departments of Energy and Homeland Security will also apply these standards in addressing potential AI system contributions to “chemical, biological, radiological, nuclear, and cybersecurity risks.”

More closely related to national security and arms control is a measure intended to prevent the use of AI in engineering dangerous biological materials, a significant concern for those who fear that AI could be used to produce new, more potent biological weapons. Under the order, strong new standards will be established for biological synthesis screening, and any agency that conducts life science research will have to abide by them as a condition of future federal funding.

Several other key provisions bear on national security in one way or another, but in recognition of their complexity, the order defers full consideration of AI’s impact in these areas to a separate national security memorandum to be developed by the White House National Security Council staff in the coming months. Once completed, this document will direct how the U.S. military and intelligence communities “use AI safely, ethically, and effectively in their missions.”