
Client Alert

January 15, 2025

EU & UK AI Roundup – January 2025


The second half of 2024 saw myriad Artificial Intelligence (AI)-related legal and regulatory developments in the EU and UK, most critically the EU AI Act coming into force on 1 August 2024. This has been followed by open consultations and regulatory guidance from the EU AI Office, the European Data Protection Board (EDPB), European supervisory authorities, the UK Information Commissioner’s Office (UK ICO) and the UK government.

This January 2025 edition of the EU & UK AI Roundup is the first in a series of upcoming Client Alerts that focuses on key legal and regulatory milestones shaping the AI legal and regulatory framework in the UK and Europe. This edition covers the most important updates for early 2025, comprising: 

  1. EU AI Office Publishes 2nd Draft of the GPAI Code of Practice
  2. EDPB Releases Opinion on AI Models & Personal Data
  3. UK Government Launches New Consultation on Copyright Law and AI
  4. UK ICO Publishes Consultation Response to its Generative AI Series
  5. UK Government Launches Consultation on AI Management Essentials (AIME) tool
  6. UK Government Unveils AI Opportunities Action Plan

To finish, this alert provides a glimpse of what is on the horizon for EU and UK AI regulation in the coming months – see ‘Looking Ahead in 2025’.

EU AI Office Publishes 2nd Draft of the GPAI Code of Practice

On 19 December 2024, members of the EU AI Office Code of Practice Plenary published the 2nd draft of the General Purpose AI (GPAI) Code of Practice. The GPAI Code of Practice relates to Article 56 of the EU AI Act, which creates a mechanism for providers of general-purpose AI models to demonstrate compliance with their EU AI Act obligations along the full life cycle of those models.

The 2nd Draft builds on the 1st Draft GPAI Code of Practice, published on 14 November 2024, which laid the foundation for the GPAI Code of Practice. It incorporates feedback from the four Working Groups supporting the Plenary and from providers of general-purpose AI models, while also taking international considerations into account. The Chairs and Co-Chairs note that this 2nd draft is “work-in-progress” given the short review and publication timeline between the 1st and 2nd drafts, opting to focus this draft on clarifying existing text, adding detail, and incorporating concepts of proportionality.

Stakeholders and members of the four Plenary Working Groups (Transparency and Copyright-Related Rules, Risk Assessment for Systemic Risk, Technical Risk Mitigation for Systemic Risk, and Governance Risk Mitigation for Systemic Risk) are invited to submit feedback by 15 January 2025 for the anticipated 3rd Draft, which is scheduled to be published the week of 17 February 2025. A final version of the Code of Practice is expected to be presented at a Closing Plenary in April 2025.

EDPB Releases Opinion on AI Models & Personal Data

On 17 December 2024, the EDPB issued Opinion 28/2024 on the processing of personal data in the context of AI models (Opinion).

The Opinion considers key data protection questions on the processing of personal data in the context of the development and deployment phases of AI models. In particular, the Opinion examines: (i) when and how an AI model can be considered anonymous; (ii) how controllers can demonstrate the appropriateness of legitimate interest as a legal basis in the development and deployment phases of an AI model; and (iii) the impact of unlawful processing during the development stage on the lawfulness of subsequent deployment of the AI model. We discuss the first two points in further detail below. 

Anonymity of AI models

Although an AI model may not necessarily be used to produce outputs containing personal data, personal data from the training set may still be embedded in the model and can potentially be extracted from it, directly or indirectly. For that reason, the EDPB considers that AI models trained with personal data cannot, in all cases, be considered anonymous for GDPR purposes.

The Opinion states that supervisory authorities should assess claims by developers and deployers that an AI model is anonymous on a case-by-case basis. In a nutshell, for an AI model to be considered anonymous, controllers, who bear the burden of proof, must be able to demonstrate that: (i) personal data relating to the training data cannot be extracted from the AI model; and (ii) any output produced when querying the model does not relate to the data subjects whose personal data was used to train it. Further, the EDPB provides additional factors for assessing the residual risk of identification, including the AI model’s design, analysis of the model, and testing of its resistance to attacks.

Legitimate interest as a legal basis for processing

The EDPB reiterates the conditions under which processing in the context of the development and deployment of AI models may be justified on the basis of a legitimate interest. It elaborates on key factors to be considered as part of the three-part legitimate interest assessment (LIA), namely: (i) the need to limit the scope of the training data used, in line with the data minimisation principle; (ii) the potential for AI ‘model shift’ or misuse of output data, particularly where personal data has been web scraped; and (iii) assessment of the AI model against the reasonable expectations of the individuals concerned.

Regarding the ‘balancing test’ element of the LIA, the Opinion states that where the interests of the data subjects are likely to override the legitimate interest of the controller, the controller should introduce mitigating measures to limit the impact of the processing on individuals. These measures are to be applied on a case-by-case basis, and a non-exhaustive list is set out for processing during the development and deployment phases, including technical safeguards (e.g., pseudonymisation) and enhanced transparency. Notably, measures are also listed in the context of web scraping, such as:

  • Excluding content from publications that could entail risks for individuals if their data is released publicly.
  • Excluding collection from websites which object to web scraping and the reuse of their content for the purpose of building AI training databases.
  • Excluding collection of certain data categories or from certain sources, such as websites that are particularly intrusive due to the sensitivity of their subject matter.

The EDPB also shares examples of processing that may constitute a legitimate interest, such as: (i) creating a conversational agent to assist users; (ii) developing an AI system to detect fraudulent content or behaviour; and (iii) improving threat detection in an information system.

The Opinion emphasises to businesses acting as controllers the importance of good documentation practices when relying on legitimate interest as the legal basis for processing: controllers must conduct and maintain legitimate interest assessments in line with the accountability principle, and regularly review and update them to cater for any new use cases.

UK Government Launches New Consultation on Copyright Law and AI

Turning now to the UK: in an attempt to settle, at least for now, the heated debate over copyright law and AI, on 17 December 2024 the UK government launched its much-awaited public consultation seeking views on its latest plans for the UK copyright framework (Copyright Consultation). Readers may recall the scrutiny received by the former UK government’s 2022 plans, which proposed far-reaching amendments to the Copyright, Designs and Patents Act 1988 (CDPA). Under those plans, the UK government sought to introduce a broad text and data mining (TDM) exception (including for commercial use), wider than the one currently provided for under the CDPA, without allowing rightsholders to “opt out” and reserve their rights. As perhaps expected, the creative sector pushed back, and in response the UK government scrapped the initial plans and introduced a code of practice instead.

As part of a further effort to adapt the law to keep pace with the rapid development of AI models, the UK government is requesting feedback from all relevant stakeholders on the best way to address the copyright law conundrum. The (now Labour) UK government is proposing updates to UK copyright law including a more nuanced TDM exception than previously presented, which is based on the following main features:

  • An exception that may apply to all purposes (including commercial use).
  • Enabling rightsholders to seek compensation from AI developers at the point of access.
  • Enabling rightsholders to reserve their rights by “opting out” of TDM use. In such circumstances an AI developer would require a licence to lawfully use such content for their AI model. While this broadly aligns with the EU, the UK government expressed concerns over the EU “opt-out” model, citing inconsistent practical application and lack of technical standardisation.

The UK government also stresses the importance of transparency, including the need for AI developers to disclose the sources of training material used to develop their models as well as the content generated by those models. Such transparency measures would underpin any new TDM exception. As well as seeking feedback on these requirements, the UK government has asked for comments on the approach to transparency under the EU AI Act.

Beyond the novel intellectual property questions that continue to emerge from AI development and adoption, the Copyright Consultation also touches on the importance of considering the data privacy aspects of such innovation. Read on for the latest on this topic from the UK ICO.

To conclude, the UK government acknowledges that as well as a robust legal framework, appropriate technologies will be necessary for an environment that can foster AI innovation while protecting and supporting rightsholders. The Copyright Consultation will remain open until 25 February 2025 and may lead to a pivotal moment for how AI and copyright law interact in the UK. 

UK ICO Publishes Consultation Response to its Generative AI Series

On 12 December 2024, the UK ICO published its outcomes report on data protection issues in generative AI (Report), following its consultation that closed in September 2024 (ICO Consultation) and received over 200 responses. Informed by those responses, the UK ICO has refined its position on: (i) the lawful basis for collecting web-scraped data; and (ii) the incorporation of individual rights into AI models.

To recap, the UK ICO Consultation explored five key areas:

  • The lawful basis for web scraping to train generative AI models.
  • Purpose limitation in the generative AI lifecycle.
  • Accuracy of training data and model outputs.
  • Engineering individual rights into generative AI models.
  • Allocating controllership across the generative AI supply chain.

When it comes to web scraping (considered a high-risk activity), the UK ICO retained its overall position that legitimate interest is the sole lawful basis available to developers collecting personal data to train generative AI models. The Report further clarifies and elaborates on why other lawful bases are generally unsuitable. To rely on legitimate interest, developers must satisfy the three-part test: identifying a legitimate interest for the specific data processing, determining whether the processing is necessary for that purpose, and weighing the legitimate interest against individuals’ rights and freedoms. Special category data was not considered in the ICO Consultation or the Report, but the UK ICO is currently investigating its use by AI developers.

Under UK GDPR, individuals have a right to access data held about them. If AI developers argue that they are not able to respond to access requests because they cannot identify the individuals (in the training data or anywhere else), they must explain this to the individual. The individual can provide additional information to help with identification if they wish to do so.

Turning back to the Report and the issue of individuals’ rights and AI models, the UK ICO explains that controllers must prove they cannot identify a person before relying on the above concession. They should assess this on a case-by-case basis and provide easy ways for individuals to supply additional information that might help the organisation identify their data.

With regard to purpose limitation, accuracy of training data and allocating controllership, the UK ICO’s position remains consistent with its original calls for evidence on each of these topics respectively. 

Looking to next steps, the UK ICO anticipates that it will update and consult on its guidance as needed following any changes to data protection law brought about by the Data (Use and Access) Bill. If passed into law, the legislation will, among other things, make clarificatory tweaks to the UK’s data protection regime and grant new regulatory powers to the UK ICO.

UK Government Launches Consultation on AI Management Essentials (AIME) tool

On 6 November 2024, the UK government’s Department for Science, Innovation & Technology (DSIT) published a public consultation on the AIME tool, a self-assessment tool designed to help businesses establish robust governance and management practices for the development and use of AI systems. The tool is primarily intended for start-ups and SMEs wishing to evaluate their AI governance and to enable responsible development and use of AI products. The AIME tool can also be used by larger organisations to assess AI management systems for individual business divisions, operational departments or subsidiaries.

While not mandatory, the AIME tool is intended to show the UK government’s commitment towards helping organisations embed baseline good practices within their AI management systems. It is designed to provide clarity on what is needed to demonstrate responsible AI management systems and will help organisations identify the strengths and weaknesses of their internal processes.

The AIME tool draws inspiration from the EU AI Act as well as established industry standards such as ISO/IEC 42001 and the NIST AI Risk Management Framework. The consultation remains open until 29 January 2025 for interested parties wishing to provide feedback.

UK Government Unveils AI Opportunities Action Plan

On 13 January 2025, UK Prime Minister Keir Starmer announced the launch of the AI Opportunities Action Plan, which is designed to propel the UK towards becoming a global leader in AI (Action Plan). The Action Plan, developed by government advisor Matt Clifford and published by DSIT, sets out 50 recommendations on how the UK government can foster AI innovation, advance research and development, improve AI skills and literacy, drive global collaboration and promote ethical and responsible AI development and deployment (48 of which were accepted by the UK government). The plan comprises three pillars, highlighted below:

  • Lay the foundations to enable AI: The UK government plans to develop ‘AI Growth Zones’ to accelerate the development of AI data centres and plans to increase sovereign compute capacity by at least 20 times by 2030. It will also provide details on the creation of a National Data Library to help AI innovators better leverage public sector data assets. The UK government also plans to consult with industry on a recommendation to create a copyright-cleared British media asset training data set. This could be established by working with bodies that hold valuable cultural data, such as the National Archives or the British Library.
  • Change lives by embracing AI: This section focuses on the adoption of AI to improve public services. The UK government is encouraged by the idea of a “scan, pilot, scale” approach to public sector AI implementation. This would involve building an understanding of the latest AI capabilities, rapidly developing prototypes and evaluating results, and identifying successful pilots for rolling these out efficiently for public benefit.
  • Secure our future with homegrown AI: The UK government will create a new function to implement the Action Plan and partner with the private sector to, among other things, support top AI talent to relocate to the UK to work with UK-based partnered companies and help to build relationships between public-private AI companies and the UK’s national security community.

Further, the Action Plan cites the need for a TDM exception aligned with the EU approach, though, as discussed above, the UK government’s ongoing Copyright Consultation suggests the EU exception has shortcomings from a technical perspective – likely because it pre-dates the rise of generative AI, not to mention the latest innovations in agentic AI. The UK government is already seeking to address this by consulting on various options, and its response to this recommendation is that it will ensure the UK has “a competitive copyright regime that supports both our AI sector and the creative industries”.

Following this announcement, the UK government is expected to continue to develop its policy response to the Action Plan later this year. 

Looking Ahead in 2025

It is clear that 2025 will mark the continued acceleration of AI investment, development and adoption as well as a crucial period for governments and regulators in how they contend with these new technologies.

In the EU, various supporting laws, guidance and standards are expected to be published to assist companies with their compliance efforts and certain rules set out in the EU AI Act will start to take effect from 2 February 2025. From this date, AI systems which the EU AI Act regards as posing an ‘unacceptable risk’ will be prohibited, and the requirements relating to AI literacy will apply. To assist with compliance, the EU AI Office is shortly expected to provide guidelines with examples of practical use cases (following its targeted consultation that closed on 11 December 2024) to clarify the legal concepts of an AI system and of prohibited AI practices.

In addition to the Data (Use and Access) Bill, the UK government is expected to introduce its own, highly anticipated, ‘AI Bill’. While details are thin on the ground, the UK government said it would “harness the power of artificial intelligence as we look to strengthen safety frameworks” and would “establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”.

As well as monitoring how the respective EU and UK regulatory landscapes continue to take shape, it is worth keeping a close eye on upcoming milestone events. Following the first AI Safety Summit, initiated by and held in the UK, where the Bletchley Declaration was published in November 2023, and the subsequent AI Seoul Summit held in South Korea in May 2024, the next chapter in this ongoing series of international forums, the AI Action Summit, will take place in Paris on 10-11 February 2025. It will bring together key stakeholders in the AI space, including governments, companies developing AI products, technology experts, academia, NGOs and civil society, to advance global discussions on the key themes of public interest in AI, the future of work, innovation and culture, trust in AI, and global AI governance.

Click here to subscribe to the King & Spalding – Data, Privacy and Security Newsletter for more updates.
