The U.S. Department of Commerce’s National Institute of Standards and Technology (“NIST”) recently announced the publication of three AI guidelines, as well as the release of a software package aimed at helping organizations measure the impact of adversarial attacks on AI system performance. These actions all respond to President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued on October 30, 2023.
NIST & President Biden’s AI Executive Order
President Biden’s AI Executive Order was published with an accompanying Fact Sheet, which included action items for the various departments and agencies within the executive branch. The Fact Sheet spotlighted, among other things, the need to create new standards for AI safety and security, and it directed NIST specifically to:
- Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy;
- Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.
Since then, NIST has announced ongoing efforts to work with private and public stakeholders to fulfill these obligations. As stated by Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio: “We are committed to developing meaningful evaluation guidelines, testing environments, and information resources to help organizations develop, deploy, and use AI technologies that are safe and secure, and that enhance AI trustworthiness.”
For example, NIST kicked off these efforts by hosting a workshop in November 2023, inviting private and public stakeholders to begin identifying working groups for the various deliverables required under the AI Executive Order. These meetings laid the groundwork for NIST’s recently published guidelines.
NIST’s New AI Guidelines
Building on the AI Risk Management Framework (“AI RMF”) published by NIST in January 2023, NIST collaborated with private and public stakeholders, including through an open call for comments, to publish final versions of the following AI-related guidelines:
- AI RMF Generative AI Profile (NIST AI 600-1): A companion resource to the AI RMF, this publication provides guidance on issues specific to generative AI;
- Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST Special Publication (SP) 800-218A): A companion resource to NIST’s Secure Software Development Framework (SSDF), this guideline is focused on developing well-secured AI systems to (1) reduce the number of vulnerabilities in released software, (2) mitigate the potential impact of the exploitation of undetected or unaddressed vulnerabilities, and (3) address root causes of vulnerabilities to prevent future recurrences;
- A Plan for Global Engagement on AI Standards (NIST AI 100-5): This plan sets out NIST’s approach to driving the development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing.
In addition to the above final publications, NIST released an initial public draft of Managing Misuse Risk for Dual-Use Foundation Models, which identifies best practices for developers of foundation models to manage the risk that their models may be deliberately misused to cause harm. NIST has also issued a call for public comments on this draft, open through September 9, 2024, which will help inform the final version of the document.
NIST’s AI Software Release
In January 2024, NIST published details about a type of cyberattack unique to AI systems: adversarial machine learning. Threat actors can “corrupt” or “poison” the data used to train AI systems, thereby causing those systems to malfunction.
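To make the poisoning risk concrete, below is a minimal, hypothetical sketch (not drawn from NIST’s publication) of a label-flipping attack: flipping a fraction of training labels measurably degrades a simple classifier. The synthetic dataset, logistic-regression model, and 30% flip rate are all illustrative assumptions.

```python
# Hypothetical illustration of data poisoning via label flipping.
# Dataset, model, and flip rate are illustrative assumptions, not NIST's.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(train_labels):
    """Train on (possibly poisoned) labels; score on the clean test set."""
    model = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return model.score(X_test, y_test)

print(f"clean accuracy:    {train_and_score(y_train):.3f}")

# "Poison" the training set by flipping 30% of the binary labels.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
print(f"poisoned accuracy: {train_and_score(poisoned):.3f}")
```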
NIST aims to assist organizations through the release of its own open-source software tool, Dioptra, which tests the effects of adversarial attacks on AI systems. Users can select from various adversarial tactics that a threat actor might employ to degrade a model’s performance, then track the resulting performance reduction to learn how often, and under what circumstances, the AI system would fail.
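Dioptra itself is a full experiment-tracking platform (https://github.com/usnistgov/dioptra); the sketch below is only a hypothetical stand-in for the underlying idea: sweep an evasion attack’s strength and record how test accuracy falls off. The gradient-sign perturbation and epsilon values are illustrative assumptions, not Dioptra’s API.

```python
# Hypothetical sketch of the kind of measurement Dioptra automates:
# apply an evasion attack at increasing strengths and track accuracy.
# This is NOT Dioptra's API; the attack and epsilons are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A crude evasion attack on a linear model (an FGSM-like gradient-sign
# step): push each test point toward the wrong side of the boundary.
w = model.coef_[0]
direction = np.sign(w) * np.where(y_test[:, None] == 1, -1.0, 1.0)
for eps in [0.0, 0.1, 0.25, 0.5, 1.0]:
    acc = model.score(X_test + eps * direction, y_test)
    print(f"epsilon={eps:<5} accuracy={acc:.3f}")
```

Tracking accuracy across attack strengths in this way shows at what point, and how quickly, a model’s performance degrades under attack.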
“For all its potentially transformational benefits, generative AI also brings risks that are significantly different from those we see with traditional software. These guidance documents and testing platform will inform software creators about these unique risks and help them develop ways to mitigate those risks while supporting innovation.”
-- Laurie E. Locascio, Under Secretary of Commerce for Standards and Technology and NIST Director
Upcoming NIST AI Deliverables
The diagram below illustrates NIST’s ongoing effort to build out guidelines and standards for the safe, secure, and trustworthy development and use of AI in the coming months, with additional key benchmarks set through January 2025.
This client alert is part of an ongoing series focused on legal developments in artificial intelligence, including the EU AI Act. King & Spalding will continue to closely monitor developments related to this and other AI-related legislation and regulation.