
Client Alert

October 4, 2024

Department of Justice Continues to Focus on AI This Week

On September 23, 2024, the U.S. Department of Justice (Department) announced that it had updated its guidance to prosecutors on how to evaluate the effectiveness of a corporation’s compliance program. The guidance, known as the Evaluation of Corporate Compliance Programs (ECCP), provides prosecutors with factors to consider and questions to ask when determining the adequacy and effectiveness of a corporation’s compliance program at the time of the offense and at the time of a charging decision or resolution with the Department.

According to accompanying remarks given by the head of the Department’s Criminal Division, Principal Deputy Assistant Attorney General (PDAAG) Nicole M. Argentieri, the additions to the ECCP fell into three main areas: (1) emerging technologies, including artificial intelligence; (2) whistleblowers; and (3) access to data, including third-party vendor data. The specific focus on companies’ use of artificial intelligence (AI) was particularly noteworthy.

This week, nine days after announcing the ECCP updates, PDAAG Argentieri spoke again about AI and how the “promises and perils” of AI are “top of mind” for the Criminal Division and for the Department more broadly. Citing the need for robust detection of AI vulnerabilities, discriminatory impacts, and bias, PDAAG Argentieri announced that the Criminal Division would update its 2017 Vulnerability Disclosure Framework to facilitate reporting consistent with the Computer Fraud and Abuse Act and intellectual property laws. She also urged companies to implement vulnerability disclosure programs to detect these issues within their AI systems.

This was just the most recent indication that the Department is focused on the challenges and risks posed by AI. In February 2024, U.S. Attorney General Merrick Garland announced the designation of the Department’s first Chief AI Officer. That same month, U.S. Deputy Attorney General (DAG) Lisa Monaco gave remarks in which she announced the creation of “Justice AI,” a convening of stakeholders from civil society, academia, science, and industry to better understand and prepare for the risks of AI. As part of Justice AI, the Department’s Criminal Division convened corporate compliance executives to help inform the Department on how to update the ECCP to address the risks and uses of AI by companies and their compliance departments.

Last month’s additions to the ECCP instruct prosecutors to ask a series of questions about AI (including generative AI) and emerging technologies to determine whether a corporation’s compliance program is well-designed, a critical factor in deciding how to resolve criminal investigations of corporate conduct. They also direct prosecutors to consider whether a company has conducted a risk assessment regarding its use of new technologies, including AI, and whether it has taken appropriate steps to mitigate the associated risks. Questions for prosecutors and companies include:

  • Does the company have a process for identifying and managing emerging internal and external risks that could potentially impact the company’s ability to comply with the law, including risks related to the use of new technologies?
  • How does the company assess the potential impact of new technologies, such as artificial intelligence (AI), on its ability to comply with criminal laws?
  • Is management of risks related to use of AI and other new technologies integrated into broader enterprise risk management (ERM) strategies?
  • What is the company’s approach to governance regarding the use of new technologies such as AI in its commercial business and in its compliance program?
  • How is the company curbing any potential negative or unintended consequences resulting from the use of technologies, both in its commercial business and in its compliance program?
  • How is the company mitigating the potential for deliberate or reckless misuse of technologies, including by company insiders?
  • To the extent the company uses AI and similar technologies in its business or as part of its compliance program, are controls in place to monitor and ensure its trustworthiness, reliability, and use in compliance with applicable law and the company’s code of conduct?
  • Do controls exist to ensure that the technology is used only for its intended purposes?
  • What baseline of human decision-making is used to assess AI?
  • How is accountability over use of AI monitored and enforced?
  • How does the company train its employees on the use of emerging technologies such as AI?
  • Is there a process for updating policies and procedures to address emerging risks, including those associated with the use of new technologies?
  • What efforts has the company made to monitor and implement policies and procedures that reflect and deal with the spectrum of risks it faces, including changes to the legal and regulatory landscape and the use of new technologies?

The misuse of AI, such as the creation of false approvals and documents, can have serious legal, financial, and reputational consequences for companies. Department leadership has repeatedly warned that where misconduct is made significantly more dangerous by the misuse of AI, prosecutors will seek stiffer sentences. DAG Monaco has also said that if the Department determines that existing sentencing enhancements do not adequately address the harms caused by the misuse of AI, the Department “will seek reforms to those enhancements to close that gap.”

The Department’s enforcement efforts relating to AI have already resulted in criminal actions. For example, last month, the U.S. Attorney’s Office for the Southern District of New York secured a guilty plea from the former Chief Executive Officer and Chairman of the Board of Directors of a publicly traded digital advertising technology company for committing securities fraud by making material misrepresentations about the efficacy of the company’s proprietary AI fraud-detection tool. According to the Department’s charging document, the scheme included the creation of fake documents to mislead the independent certified public accountants engaged to audit the company’s financial statements. Sentencing in that case is set for December 2024.

Civil and state enforcement authorities are also focused on AI. For example, last month, the Texas Attorney General announced an Assurance of Voluntary Compliance settlement with an AI healthcare technology company. Also in September, the California Attorney General sent a letter to social media and AI companies urging better identification and reporting of the use of AI to create deceptive content related to elections. In March 2024, the Securities and Exchange Commission announced charges against two investment advisers for “AI washing,” that is, for making false and misleading statements about their purported use of AI in their investment processes.

Taken together, these recent state and federal actions underscore the need for care when adopting and implementing AI, including performing due diligence on AI system providers. King & Spalding’s team of experts can help companies assess risks and review and revise their internal compliance policies with a keen eye toward how they and their employees use, or may use, AI in their commercial businesses and compliance programs.