On March 15, 2024, FDA published a white paper titled “Artificial Intelligence & Medical Products: How CBER, CDER, CDRH, and OCP are Working Together” (the AI White Paper) on the use of artificial intelligence (AI) across the product life cycle of “medical products,” defined to include biological products, drugs, devices, and combination products. [1: FDA, Artificial Intelligence & Medical Products: How CBER, CDER, CDRH, and OCP are Working Together (Mar. 2024), https://www.fda.gov/media/177030/download [hereinafter “AI White Paper”].] The AI White Paper was issued jointly by the Center for Biologics Evaluation and Research (CBER), the Center for Drug Evaluation and Research (CDER), the Center for Devices and Radiological Health (CDRH), and the Office of Combination Products (OCP).
The AI White Paper appears to be FDA’s preliminary response to Executive Order 14110 on AI, issued in October 2023. [2: See Exec. Order No. 14110, 88 Fed. Reg. 75,191 (Nov. 1, 2023), https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence.] As summarized in a prior Client Alert, this Executive Order directed the Department of Health and Human Services to develop a strategic plan with policies and frameworks on the responsible deployment and use of AI and AI-enabled technologies in the health and human services sector, including “drug and device safety” and “public health.” [3: Id. at 75,214.] The AI White Paper adopts the definition of AI set forth in the Executive Order, which is quoted in full in note 4 below. [4: See AI White Paper at 1 n.1 (“AI is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action. AI includes machine learning, which is a set of techniques that can be used to train AI algorithms to improve performance of a task based on data.”); Exec. Order 14110 § 3(b), (t).]
The AI White Paper notes that FDA intends to employ a risk-based regulatory framework in evaluating AI in medical products, which can be tailored to the relevant category of product. The paper outlines four “areas of focus” for CBER, CDER, CDRH, and OCP in regulating AI:
- Fostering collaboration to safeguard public health;
- Advancing the development of regulatory approaches that support innovation;
- Promoting the development of harmonized standards, guidelines, best practices, and tools; and
- Supporting research related to the evaluation and monitoring of AI performance. [5: AI White Paper at 2.]
Focus Area 1: Fostering Collaboration
FDA explains in the AI White Paper that it intends to engage in “collaborative partnerships” with a variety of stakeholders (e.g., AI developers, patient groups, international regulators) to formulate a “patient-centered” regulatory approach for AI. [6: Id. at 2-3.] FDA will solicit input on specific topics such as cybersecurity and quality assurance. Although the paper does not state how FDA will solicit that input, we anticipate that the Agency will use its typical tools, such as public workshops and requests for public comment on draft guidance documents, discussion papers, and proposed rules (e.g., new special controls for software devices).
FDA will develop educational initiatives on the responsible use of AI for regulatory bodies, healthcare professionals, patients, researchers, and industry. The AI White Paper states that FDA will also continue to work with “global collaborators” on standards, guidelines, and other best practices for the evaluation of AI. Such global collaborators could include international regulators, but may also include independent standards organizations, such as the International Organization for Standardization (ISO).
Focus Area 2: Advancing Innovation
The AI White Paper notes that FDA’s medical product Centers intend to develop policies that will increase regulatory predictability and clarity, which will in turn advance AI innovation. [7: Id. at 3.] Specifically, FDA will monitor trends in AI development to allow for “timely adaptations” in its evaluation of premarket regulatory submissions. Additionally, FDA plans to support efforts to develop new methodologies for evaluating AI algorithms, including the identification of bias in algorithm development and training. We note that FDA has already taken steps in this direction through its participation in the Coalition for Health AI (CHAI), which focuses much of its effort on the establishment of AI assurance labs. Troy Tazbaz (Director of CDRH’s Digital Health Center of Excellence) is Co-Chair of CHAI’s Government Advisory Board.
The AI White Paper states that FDA plans to build on existing initiatives for evaluation and regulation of AI in medical products, including in manufacturing. These existing initiatives include FDA’s 2023 discussion paper on “Using Artificial Intelligence & Machine Learning in the Development of Drug & Biological Products,” CDER’s “Framework for Regulatory Advanced Manufacturing Evaluation” (FRAME), and the “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan.” [8: See FDA, Discussion Paper and Request for Feedback, Using Artificial Intelligence and Machine Learning in the Development of Drug and Biological Products (May 2023), https://www.fda.gov/media/167973/download; FDA, Framework for Regulatory Advanced Manufacturing Evaluation (FRAME), https://www.fda.gov/about-fda/center-drug-evaluation-and-research-cder/cders-framework-regulatory-advanced-manufacturing-evaluation-frame-initiative; FDA, Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan (Jan. 2021), https://www.fda.gov/media/145022/download.]
FDA also plans to issue guidance on the use of AI both in medical product development and in medical products themselves. CDRH and CDER have committed to issuing the following final and draft guidance documents regarding AI in 2024:
- Final guidance on predetermined change control plans in marketing submissions for AI-enabled device software functions; [9: Section 3308 of the Food and Drug Omnibus Reform Act of 2022 (FDORA), enacted on December 29, 2022, added section 515C to the Federal Food, Drug, and Cosmetic Act (FDCA) regarding predetermined change control plans (PCCPs) for medical devices. A PCCP is a plan included by a sponsor in a device premarket submission that, once reviewed and accepted by FDA, allows for device modifications within the scope of the PCCP without a new device clearance or approval. See also FDA, Draft Guidance for Industry and FDA Staff: Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions (Apr. 2023), https://www.fda.gov/media/166704/download. King & Spalding summarized this guidance document in an earlier Client Alert.] [10: FDA, CDRH Proposed Guidances for Fiscal Year 2024 (FY2024), https://www.fda.gov/medical-devices/guidance-documents-medical-devices-and-radiation-emitting-products/cdrh-proposed-guidances-fiscal-year-2024fy2024 (last visited Mar. 20, 2024).]
- Draft guidance on life cycle management considerations and premarket submission recommendations for AI-enabled device software functions; [11: Id.] and
- Draft guidance on considerations for the use of AI to support regulatory decision-making for drugs and biological products. [12: FDA, CDER Guidance Agenda, New, Revised Draft and Immediately in Effect Guidances Planned for Publication in Calendar Year 2024 (Jan. 2024), https://www.fda.gov/media/134778/download.]
Focus Area 3: Harmonized Standards and Guidelines
FDA plans to build upon the existing “Guiding Principles” on Good Machine Learning Practice (GMLP) for medical device development, which were jointly developed by FDA, Health Canada, and the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) in October 2021. [13: See FDA, Health Canada, MHRA, Good Machine Learning Practice for Medical Device Development: Guiding Principles (Oct. 2021), https://www.fda.gov/media/153486/download.] FDA intends to refine and develop considerations for evaluating AI in the medical product life cycle, such as cybersecurity concerns; promote best practices for long-term safety and performance monitoring for AI-enabled medical products; create best practices for ensuring that data used to train and test AI models are fit for use; and develop a framework for quality assurance of AI-enabled tools.
Focus Area 4: Supporting AI Research
FDA’s medical product Centers also plan to support research projects related to AI’s impact on medical product safety and effectiveness. [14: AI White Paper at 4.] Such projects would include evaluation of bias in AI models, identification of health inequities associated with AI in medical product development, and ongoing monitoring of AI tools to ensure that they adhere to applicable standards and maintain performance and reliability.
Conclusion – More Details to Come
The AI White Paper outlines FDA’s high-level principles and commitments in evaluating AI in medical products, in both the premarket and postmarket contexts. The stated goal of the paper is to “reaffirm [FDA’s] commitment to promoting the responsible and ethical development, deployment, use, and maintenance of medical products that incorporate or are developed with AI,” but the AI White Paper does not include any specific regulatory or policy details on how the Agency intends to address the four focus areas. [15: Id.]
We expect that additional details on each of these four focus areas will primarily come in the form of new draft and final guidance documents, such as those listed under Focus Area 2 above. These guidance documents will likely include important information on CDRH’s, CDER’s, and CBER’s current expectations for the content of premarket submissions for AI-enabled medical products or products developed using AI. Sponsors of such products will need to consider the content of these guidance documents, once released, in designing verification and validation studies to support regulatory clearance or approval.