EU AI Act is signed!

Published on 01 August 2024

The question

What are the core elements of the EU’s Artificial Intelligence Act and how does it impact the regulation of AI systems?

The key takeaway

On 13 June 2024, the President of the Council of the EU and President of the European Parliament signed the Artificial Intelligence Act (the AI Act) regulating the use of AI systems in the EU. 

The background

AI continues to be the hot topic of the day. Multiple jurisdictions have been considering how to regulate AI in a way which maximises its benefits, such as increased efficiency, while minimising the risks it poses to society in terms of safety, security and ethics. Different approaches have been considered. For example, the UK AI White Paper sets out a principles-based approach, whereas the EU has adopted a risk-based approach. Until now, however, none had committed pen to paper; the signing of the EU's AI Act marks one of the first attempts at comprehensive AI regulation in the world.

The development

On 13 June 2024, the President of the Council of the EU and President of the European Parliament officially signed the AI Act. The legislation seeks to promote EU investment in AI whilst ensuring AI systems are safe, trustworthy and transparent. 

Who does it apply to?

The AI Act applies to those involved in the production and proliferation of certain AI systems, including providers, distributors, manufacturers and users. A particular distinction is drawn between deployers and providers, with obligations differing depending on the role. The AI Act captures businesses whose AI systems are supplied, or whose output is used, within the EU – whether or not the business itself is based in the EU. Notably, the AI Act includes exemptions for the use of AI for military and research purposes.

What does it require?

The AI Act takes a risk-based approach, categorising AI technology according to the risk it poses to society: the higher the risk of potential societal harm from the technology, the stricter the legislation. 

High risk AI systems

High risk AI systems must comply with strict requirements designed to ensure safety and transparency regarding their use. These include:

  • establishing risk management systems
  • establishing data governance and management practices
  • drawing up technical documentation for the system before it is put into circulation
  • record keeping, ie ensuring that AI decision-making is automatically logged so that decisions can be traced back
  • providing certain information to users of the AI system to ensure transparency, including information regarding the system's capabilities, intended purpose and the data on which it is trained
  • ensuring there is human oversight of the AI system
  • designing the model to ensure it produces accurate outputs.

Public entities deploying high risk AI systems in the context of public services will need to carry out a fundamental rights impact assessment and register their use of these systems in the EU database. 

Where emotion recognition systems are being used, individuals must be informed that they are being subjected to these systems.

Certain AI practices are banned outright under the AI Act as posing an unacceptable risk – for example, AI systems which manipulate human behaviour, or which score individuals based on certain characteristics such as their race or sexual orientation.

General purpose AI models (or "foundation models")

All general purpose AI models (eg the models underpinning ChatGPT) must adhere to various transparency requirements. This includes drawing up technical documentation to show how the AI model operates and providing information to the public about the datasets the model is trained on. Providers must also put in place policies to ensure that EU copyright rules are followed. Additional rules apply where the general purpose AI model is considered to pose systemic risks. 

Who enforces these rules?

Several bodies have been established to monitor compliance and enforce this legislation: 

  • an AI Office in the EU Commission which will seek to enforce the AI Act and develop the EU’s AI expertise and capabilities
  • an AI Board consisting of representatives from Member States to advise on the consistent implementation of the AI Act and facilitate the sharing of technical and regulatory expertise regarding AI among Member States
  • an advisory forum to provide technical expertise to the AI Board and the EU Commission
  • a scientific panel of independent experts to support the AI Office with the enforcement of the legislation as regards general purpose AI models and systems.

When do I need to comply?

The AI Act comes into force across all 27 EU Member States on 1 August 2024. However, most of its provisions will apply from 2 August 2026, with certain provisions applying earlier – eg the prohibition on banned AI practices takes effect from 2 February 2025.

What are the consequences of non-compliance?

Similar to the GDPR, penalties for non-compliance comprise the higher of a fixed amount (up to €35,000,000) or a percentage of the offending company's worldwide annual turnover for the preceding financial year (up to 7%), depending on the provision which has been breached.

Why is this important? 

As the first significant piece of legislation of its kind in the world, the EU AI Act sets the tone for AI regulation going forward. It will be interesting to see how easy or difficult it will be to put this legislation into practice, especially in relation to the administrative obligations placed on providers under the AI Act (eg drawing up technical documentation of the models and summarising the data they are trained on). For more discussion around the AI Act and AI issues in general, see RPC's AI Guide here. 

Any practical tips?

Businesses providing AI systems in the EU should assess the intended purpose of their AI systems, and in turn the level of risk those systems pose to society, in order to determine their obligations under the legislation. 

Following this assessment, businesses should start preparing to comply with the various administrative requirements under the AI Act, including drawing up technical documentation and identifying and summarising the data sets that their AI models are trained on. 

Summer 2024
