Client Alert

July 12, 2024

EU's Official Journal Publishes EU AI Act

The European Union Artificial Intelligence Act (Regulation (EU) 2024/1689) was published in the Official Journal of the European Union on July 12, 2024 and will enter into force on August 1, 2024, the 20th day following its publication in the Official Journal.

Application of the EU AI Act: A Phased Approach

Once it is in force, the EU AI Act will become applicable in phases over a three-year period, as outlined below.

  • August 1, 2024: The EU AI Act enters into force.
  • February 2, 2025: Prohibitions on AI systems posing unacceptable risk apply.
  • August 2, 2025: Obligations for providers of General Purpose AI models apply.
  • August 2, 2026: Most remaining provisions apply, including obligations for Annex III High Risk AI systems.
  • August 2, 2027: Obligations for Annex I High Risk AI systems apply.

AI Systems Classified by the EU AI Act

The EU AI Act is a legislative framework built on defined tiers of risk associated with AI systems, with varying degrees of obligations and requirements corresponding to those risks. Definitions from the EU AI Act related to the tiered application schedule are provided below.

Prohibited AI Systems (Unacceptable Risk)

AI systems that pose an unacceptable risk of harm to individuals are prohibited under the EU AI Act. Chapter II of the EU AI Act defines prohibited AI systems as:

  • AI systems that deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques.
    • An example of a subliminal technique referenced in the EU AI Act involves virtual reality systems, given the heightened degree of control they afford over the stimuli presented to users.
  • AI systems that exploit any of the vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation.
  • AI systems for social scoring that may lead to discriminatory outcomes, exclusion of certain groups, or violate the right to dignity and non-discrimination.
  • AI systems that predict the risk of a person committing a criminal offence, based solely on the profiling of the individual or on assessing their personality traits and characteristics.
  • AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
  • AI systems that infer the emotions of individuals in the workplace or in educational institutions.
  • Biometric categorization systems that categorize individuals based on their biometric data to infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.
    • An exception applies to certain uses of this type of prohibited AI system by law enforcement.

General Purpose AI Systems

General Purpose AI models are defined by the EU AI Act as AI models that display significant generality and are capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications.

This definition excludes AI models that are used for research, development, or prototyping activities before they are placed on the market.

High Risk AI Systems, Annex III

Annex III High Risk AI systems are those falling within the below categories:

  • Biometrics, insofar as their use is permitted under relevant EU or national law.
  • Critical infrastructure, including critical digital infrastructure, road traffic, and the supply of water, gas, heating, or electricity.
  • Educational and vocational training, including systems used to determine admissions, evaluate learning outcomes, and detect prohibited behavior during tests.
  • Employment, workers’ management, and access to self-employment, including those systems used for recruitment or to make decisions related to promotions or terminations.
  • Access to essential private services and essential public services, including systems used to evaluate a person’s eligibility for public assistance, assess creditworthiness or credit scores, conduct risk assessments for life and health insurance, and dispatch emergency services.
  • Law enforcement uses of AI systems to assess the risk of an individual becoming the victim of a crime or committing a criminal offence, to assist in the administration of polygraph tests, to evaluate the reliability of evidence, and to support criminal profiling.
  • Migration, asylum, and border control management.
  • Administration of justice and democratic processes.

Given the potential risks associated with this category, the EU AI Act establishes various requirements for developers of High Risk AI systems:

  • A risk management system must be implemented for each High Risk AI system and maintained over the system’s entire lifecycle.
  • Systems that rely on training AI models with data must establish data governance and management practices for the training, validation, and testing data.
  • Technical documentation about the High Risk AI system must be prepared before the system is placed on the market and kept up to date.
  • These systems must be sufficiently transparent to their deployers to allow for their appropriate use(s).
  • High Risk AI systems should be developed with the expectation that human oversight will be part of their operation in order to minimize or prevent risk, and that appropriate levels of accuracy, robustness, and cybersecurity will be achieved and maintained throughout the product’s entire lifecycle.

Additional obligations exist for providers of High Risk AI systems (including the implementation of a quality management system), as well as for importers, distributors, and deployers of these systems.

High Risk AI Systems, Annex I

The final category of High Risk AI systems, identified in Annex I, is given an additional 12-month period (36 months in total) before the obligations of the EU AI Act take effect. Annex I provides a list of existing EU legislation with which the EU AI Act must be harmonized, as required by the EU’s New Legislative Framework.

Examples of the type of products covered by legislation in Annex I include:

  • Machinery
  • Toy safety
  • Recreational craft and personal watercraft
  • Medical devices

This client alert is part of an ongoing series of client alerts focused on the EU AI Act. King & Spalding will continue to vigilantly monitor developments related to this and other AI-related legislation.