News & Insights

Client Alert

July 22, 2024

EU Artificial Intelligence Regulation Published - What You Need to Know


On Friday, July 12, 2024, the EU published the new EU Artificial Intelligence Regulation, the first comprehensive AI regulation worldwide, setting requirements concerning the safety and free movement of goods and services using AI technologies. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence, amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (EU AI Regulation), has been published in the Official Journal of the EU. The regulation introduces a comprehensive framework governing the development, deployment, and use of AI technologies across various industries.

Aiming to improve the functioning of the internal market by laying down a uniform legal framework for the development, the placing on the market, the putting into service, and the use of AI systems, the EU AI Regulation is intended to promote the uptake of human-centered and trustworthy AI while ensuring a high level of protection of health, safety, and the fundamental rights enshrined in the Charter of Fundamental Rights of the European Union.

The regulation establishes a risk-based classification system, stringent transparency and safety requirements, and robust data governance measures.

Risk-Based Classification

Classifying AI according to its risk, the regulatory framework defines four levels of risk for AI systems:

  • Unacceptable risk (prohibited AI practices)
  • High risk (high-risk AI systems)
  • Limited risk (AI systems with specific transparency obligations)
  • Minimal risk or no risk (all other AI systems, which are not subject to additional requirements under the EU AI Regulation)

Prohibited AI Practices

According to Art. 5 of the EU AI Regulation, certain AI-based practices are prohibited in the EU. The list of prohibited AI practices includes applications that violate European values, for example by infringing fundamental rights, and that would therefore pose an unacceptable risk to the individuals concerned.

High-Risk AI Systems

Most of the provisions of the EU AI Regulation deal with AI systems that pose a high risk to the health and safety or fundamental rights of natural persons (so-called high-risk AI systems, Art. 6 et seq.). These can be divided into two categories.

The first category includes AI systems that are intended to be used as safety components of products, or are themselves products, that are subject to third-party conformity assessment under the EU legal acts listed in Annex I of the EU AI Regulation.

Impact on Medical Devices:

This includes, for example, AI systems used as safety components in medical devices (Regulation (EU) 2017/745) and in in vitro diagnostic medical devices (Regulation (EU) 2017/746), cf. Art. 6(1) and Annex I, Nos. 11 and 12 of the EU AI Regulation.

Regarding medical devices and in vitro diagnostic medical devices, recitals 46, 47, 64, 84, and 147 of the EU AI Regulation emphasize that the hazards of AI systems covered by its requirements concern aspects different from those addressed by the existing Union harmonization legislation, so that the requirements of the EU AI Regulation complement the existing body of Union harmonization legislation. Recital 64 states, for example, that medical devices with an AI component may present risks that are not covered by the essential health and safety requirements set out in the relevant Union harmonization legislation, as that sectoral law does not deal with risks specific to AI systems. This calls for a simultaneous and complementary application of the various legislative acts.

Impact on systems affecting fundamental rights:

The second category comprises stand-alone AI systems that have an impact on fundamental rights. The list of such AI systems is contained in Annex III of the EU AI Regulation (e.g., AI systems intended to be used as safety components in the management and operation of certain critical infrastructure, or AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, except for AI systems used for the detection of financial fraud).

Exceptions:

However, the AI systems listed in Annex III are not classified as high-risk systems if a specific exception applies (e.g., if the AI system is intended to perform a narrowly defined procedural task or to improve the outcome of a previously completed human activity). This assessment must be documented, and the AI system must be registered in the relevant EU database.

Providers of high-risk AI systems must:

  • Establish a risk management system throughout the high-risk AI system’s lifecycle.
  • Conduct data governance, ensuring that training, validation, and testing datasets are relevant, sufficiently representative and, to the best extent possible, free of errors and complete according to the intended purpose.
  • Draw up technical documentation to demonstrate compliance and provide authorities with the information to assess that compliance.
  • Design their high-risk AI system for record-keeping, enabling it to automatically record events relevant for identifying national-level risks and substantial modifications throughout the system’s lifecycle.
  • Provide instructions for use to downstream deployers to enable the latter’s compliance.
  • Design their high-risk AI system to allow deployers to implement human oversight.
  • Design their high-risk AI system to achieve appropriate levels of accuracy, robustness, and cybersecurity.
  • Establish a quality management system to ensure compliance.

General-Purpose AI Models

The EU AI Regulation defines a ‘general-purpose AI model’ as an AI model, including where trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used for research, development, or prototyping activities before they are placed on the market. A typical example of a general-purpose AI model is a large generative AI model, given that it allows for flexible generation of content, such as in the form of text, audio, images, or video, that can readily accommodate a wide range of distinct tasks.

All providers of such general-purpose AI models must:

  • Draw up technical documentation, including training and testing process and evaluation results.
  • Draw up information and documentation to supply to downstream providers that intend to integrate the general-purpose AI model into their own AI system.
  • Establish a policy to comply with Union copyright law, in particular the Copyright Directive ((EU) 2019/790).
  • Publish a sufficiently detailed summary of the content used for training the general-purpose AI model.

General-Purpose AI Models with Systemic Risk

Additional obligations apply to providers of general-purpose AI models with systemic risk, cf. Art. 55 in conjunction with Arts. 53 and 54 of the EU AI Regulation. The EU AI Regulation defines the term ‘systemic risk’ as a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole, that can be propagated at scale across the value chain.

In addition to the obligations set out above, providers of such models must:

  • Perform model evaluation, including conducting and documenting adversarial testing to identify and mitigate systemic risk.
  • Assess and mitigate possible systemic risks, including their sources.
  • Track, document and report serious incidents and possible corrective measures to the AI Office and relevant national competent authorities without undue delay.
  • Ensure an adequate level of cybersecurity protection.

Until European harmonized standards are published, providers of general-purpose AI models may demonstrate compliance with their obligations by voluntarily adhering to a code of practice; compliance with such harmonized standards, once published, will lead to a presumption of conformity. Providers that do not adhere to a code of practice must demonstrate alternative adequate means of compliance for assessment by the Commission.

Transparency Obligations for Systems Intended for Interaction with Persons

According to Art. 50, the EU AI Regulation introduces transparency obligations for certain systems that interact with natural persons. This concerns four types of systems in particular:

  • Providers must design systems intended to interact with natural persons, such as AI-based chatbots, in such a way that the persons concerned are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.
  • Providers of AI systems, including general-purpose AI systems, that generate synthetic audio, image, video, or text content must ensure that the outputs of the AI system are marked in a machine-readable format and are detectable as artificially generated or manipulated. This does not apply if, for example, the AI system only performs an assistive function for standard editing or does not substantially alter the input data provided by the deployer or the semantics thereof.
  • Deployers of emotion recognition systems or biometric categorization systems must inform the persons concerned about the operation of the system.
  • Deployers of AI systems that create so-called deepfakes must disclose that the content has been artificially generated or manipulated.

If a system listed here also meets the criteria for a high-risk AI system, it must comply with the requirements for high-risk AI systems in addition to the transparency obligations set out in this section.

Enforcement

According to the EU AI Regulation, each EU Member State must designate a national supervisory authority to monitor the application and implementation of the EU AI Regulation, cf. Art. 70 of the EU AI Regulation.

All national supervisory authorities will be represented on the European Artificial Intelligence Board, which will act as a coordination platform and advisory body for the European Commission.

In addition, an Artificial Intelligence Office (AI Office) has been established within the European Commission, tasked with supervising and enforcing the rules on general-purpose AI models.

Regarding medical devices with AI components, Art. 74(3) provides that, for high-risk AI systems to which the Union harmonization legislation listed in Section A of Annex I applies, the market surveillance authority for the purposes of the EU AI Regulation shall be the authority responsible for market surveillance designated under those legal acts.

Timelines and Next Steps

Having entered into force on August 1, 2024, the regulation is binding in its entirety and directly applicable in all Member States.

In principle, the EU AI Regulation will apply from August 2, 2026. However, it should be noted that, according to Art. 113, some parts will apply before, and others after, this date:

  • Due to the unacceptable risk associated with certain uses of AI, the prohibitions and the general provisions will already apply from February 2, 2025.
  • Moreover, because the infrastructure related to governance and the conformity assessment system should be operational before August 2, 2026, the provisions on notified bodies and the governance structure will apply from August 2, 2025.
  • Obligations for providers of general-purpose AI models will already apply from August 2, 2025.
  • The provisions on penalties will apply from August 2, 2025.
  • Art. 6(1) and the corresponding obligations will apply from August 2, 2027.

Given this tight schedule, we strongly recommend that providers of high-risk AI systems in particular begin implementing the relevant obligations of the EU AI Regulation during the transitional period.

For further guidance and assistance, please do not hesitate to reach out to us.