EU publishes draft Code for general-purpose AI models

Published on 10 December 2024

The question

What measures are proposed by the EU AI Office to regulate general-purpose AI (GPAI) models?

The key takeaway

A draft code of practice for general-purpose AI (the Code) has been published, with the final version due by 2 May 2025. Providers of GPAI models will need to ensure that their practices comply with the Code and, in turn, with their obligations under the EU AI Act (the AI Act).

The background

The AI Act, which came into force on 1 August 2024, sets out a risk-based framework that places requirements on AI technology depending on the risk posed to society. In addition to this general regime, “providers” of GPAI models have separate and more onerous obligations under the AI Act. A “provider” is any party that develops a GPAI model, or has one developed, and places it on the market or puts it into service under its own name or trademark. A “GPAI model” is one that has been trained on large amounts of data and can be used to perform a wide range of general tasks. Consequently, large model developers such as OpenAI (developer of the GPT models used for ChatGPT), Google (developer of the Gemini models) and Meta (developer of Llama) will most likely fall within the definition of a GPAI provider. In addition, GPAI models that present ‘systemic risk’ (presumed under the AI Act where the computational power used to train the model exceeds a technical threshold) are subject to additional requirements. For our previous discussion on the AI Act, see our Snapshots Summer 2024 article.

The Code was required to be drawn up under the AI Act to facilitate the implementation of these obligations. To do this, the AI Office put together four specialist working groups, led by Chairs and Vice-Chairs with expertise and experience in computer science, AI governance and law. In line with the AI Act’s encouragement of participation by relevant stakeholders (ie civil society organisations, industry and academia), a multi-stakeholder consultation opened in August 2024 and received almost 430 submissions.

The development

On 14 November 2024, the first draft of the Code was published. The working groups had six key principles in mind when drafting:

  1. alignment with EU principles and values
  2. alignment with the AI Act and international approaches
  3. proportionality to risks
  4. future proofing
  5. proportionality to the size of the GPAI model provider, and
  6. support and growth of the AI safety ecosystem.

Guidance for providers of GPAI models 

Transparency: transparency is the key requirement for GPAI models under the Code. Providers must keep up-to-date technical documentation for both the AI Office and downstream providers. This documentation should include details of the GPAI model and its provider, intended and restricted or prohibited tasks, the types of AI system into which the model can be integrated, the acceptable use policy, the design specification, and the training process (including the data used).

Copyright: measures to be taken include implementing a copyright policy, carrying out reasonable copyright due diligence before contracting with third parties, implementing reasonable downstream copyright measures to mitigate risk, and engaging lawfully in text and data mining. To satisfy the transparency requirement, providers must also provide information on the measures they have adopted to comply with EU copyright law.

GPAI models that pose “systemic risks”

The Code provides further guidance on what will be considered a systemic risk. The types of risk identified include:

  • cyber risks
  • chemical, biological, radiological and nuclear risks
  • loss of control
  • unpredicted developments resulting from the use of automated models for AI development
  • large-scale persuasion and manipulation, including disinformation/misinformation risks to democratic values, and
  • large-scale discrimination against individuals, communities or societies.

This is a non-exhaustive list and further risks may be identified if, for example, they could cause large-scale negative effects on public health, safety, or public and economic security.

Whether or not a GPAI model falls into this category will depend on its attributes, such as whether it has dangerous model capabilities (ie weapon acquisition, self-replication, persuasion, manipulation and deception) or dangerous model propensities (ie misalignment with human intent/values, bias, and lack of reliability or security). Further, specific inputs, configurations and contextual elements may increase risk, such as the potential to remove guardrails, the level of human oversight, and the number of business users and end-users. For these GPAI models, the Code proposes a Safety and Security Framework (SSF) detailing the risk analysis and management steps taken by providers, which should be “proportional to the severity of expected systemic risks”:

  • risk assessment: identification of systemic risks stemming from the model through continuous and thorough analysis of the risks identified, mapping pathways to those risks, developing triggers for risk indicators, and collecting evidence on the specific risks. Risk assessment must be carried out continuously throughout the full lifecycle of the development and deployment of the GPAI model (ie before and during training, during deployment and post deployment)
  • technical risk mitigation: systemic risks must be kept below an “intolerable level” by putting in place safety mitigation measures (ie behavioural modifications to a model, safeguards for deployment in a system, and other safety tools) and security mitigation measures, as well as identifying the limitations of these mitigations. Safety and Security Reports (SSRs) must be created for each model at appropriate steps in the lifecycle, detailing the risk and mitigation assessments, which can form the basis of development and deployment decisions
  • governance risk mitigation: providers must ensure ownership of systemic risks at each level of the organisation, including at executive and board levels, regularly assess their adherence to the SSF, and engage independent experts to carry out systemic risk and mitigation assessments. Providers should have in place processes for reporting serious incidents to the AI Office as well as whistleblowing protections. SSFs and SSRs should be published to increase public transparency.

Why is this important?

The final version of the Code is expected to be published in spring 2025. Businesses that comply with the Code will be presumed to comply with the GPAI-related provisions of the AI Act, making the Code a very helpful practical standard for businesses to follow. This will be important given the potentially significant fines under the AI Act (up to €35m or 7% of a company’s annual turnover), but also in aligning with the EU’s objective of increasing transparency in the development and use of GPAI models. This should in turn increase public confidence in technology companies that demonstrate lawful and safe development of AI models.

Any practical tips?

The current version of the Code is very much a draft and contains open questions to stakeholders. Businesses should review the draft Code and consider, first, to what extent they fall within the definition of a GPAI provider and, second, whether their model presents any “systemic risks”. These determinations will then drive the assessment of how the measures outlined may be implemented in practice and to what extent current practices must be updated to align with the principles set out in the Code, particularly the overarching theme of “transparency”. Relevant teams should monitor the progress of the Code, noting in particular that, as stated by the AI Office, the current assumption is that there will only be a small number of GPAI models with systemic risks. The AI Office proposes that, if this assumption proves incorrect, future drafts will introduce a tiered system of measures focusing on the models posing the largest systemic risks.

Winter 2024
