Part 5 – AI Regulation Globally

Published on 04 October 2024

This is Part 5 of 'Regulation of AI'.

On 30 October 2023 the G7 published its international guiding principles on AI, in addition to a voluntary code of conduct for AI developers. The G7 principles are a non-exhaustive list of guiding principles aimed at promoting safe, secure and trustworthy AI and are intended to build on the OECD's AI Principles, adopted back in May 2019.

On 1 and 2 November 2023 the UK Government hosted the AI Safety Summit. The Summit brought together representatives from governments, AI companies, research experts and civil society groups from across the globe, with the stated aims of considering the risks of AI and discussing how they can be mitigated through internationally co-ordinated action.

One output from the UK's AI Safety Summit was the 'Bletchley Declaration', made by the countries attending the summit, which in addition to the UK included the USA, China, Brazil, India, France, Germany, Japan, Italy and Canada. A central theme of the declaration was the importance of international collaboration on identifying AI safety risks and creating risk-based policies to ensure safety in light of such risks. Another output was an agreement between senior government representatives from leading AI nations and major AI developers and organisations (including Meta, Google DeepMind and OpenAI) on a plan for safety testing of frontier AI models. The plan involves testing models both pre- and post-deployment, with a role for governments in testing, particularly for critical national security, safety and societal harms. For example, the UK's AI Safety Institute would be able to evaluate the safety of AI systems such as ChatGPT before they are released to businesses and consumers.

Also in November 2023, the UK's National Cyber Security Centre released its guidelines for developers on Secure AI System Development, developed with the US's Cybersecurity and Infrastructure Security Agency. The guidelines are endorsed not only by the US but also by 17 other countries. They help developers treat cyber security as an essential pre-condition of AI system safety, integral to the development process from the outset and throughout, an approach known as 'secure by design'.

On AI standards, early in 2023 a new standard for AI risk management – ISO/IEC 23894 – was published by the International Organisation for Standardisation (ISO) and the International Electrotechnical Commission (IEC). ISO/IEC 23894 offers strategic guidance to organisations across all sectors for managing risks connected to the development and use of AI. It also provides guidance on how organisations can integrate risk management into their AI-driven activities and business functions. As an International Standard, ISO/IEC 23894:2023 provides a common framework that can be adopted by organisations globally. It followed ISO/IEC TR 24028:2020, which analyses the factors that can impact the trustworthiness of systems providing or using AI, and discusses possible approaches to mitigating AI system vulnerabilities and ways to improve their trustworthiness. In its response to the white paper, the UK government specifically mentions the importance of engaging with global standards development organisations such as the ISO and IEC.

At the end of 2023, global collaboration on AI was further bolstered by IBM and Meta's launch of the AI Alliance, a collaboration with more than 50 other organisations across academia, civil society, public bodies like NASA and major corporate operators like Oracle to "advance open, safe and responsible AI". It aims to develop and deploy benchmarks and evaluation standards, tools, and other resources that enable the responsible development and use of AI systems at global scale, including the creation of a catalogue of vetted safety, security and trust tools. 

Following through on commitments made at the UK's AI Safety Summit the previous November, in April 2024 the UK and US signed a Memorandum of Understanding under which they will work together to develop tests for the most advanced AI models. The UK and US AI Safety Institutes have laid out plans to build a common approach to AI safety testing and to share their capabilities to ensure AI safety risks can be tackled effectively. They intend to perform at least one joint testing exercise on a publicly accessible model. This kind of international collaboration between AI safety institutes is being replicated elsewhere around the world.

At the global AI summit in Seoul in May 2024, the safety institutes of 10 countries, together with the EU, signed an agreement to work together on advancing research and improving efficiency. Leaders of Australia, Canada, France, Germany, Italy, Japan, Singapore, South Korea, the UK, the US and the EU also signed the Seoul Declaration, committing to cooperate more closely both between themselves and via organisations such as the UN, G7, G20 and OECD, while sixteen AI firms made voluntary safety commitments. It has been suggested in commentary that the AI Action Summit in Paris in February 2025 will establish five working groups, including one bringing the world's AI safety institutes together for the first time, with a focus on establishing standards.

On 5 September 2024, the US, EU and UK signed the Council of Europe's convention on AI – the first legally binding international treaty creating a common framework for AI systems. Signatory countries must ratify the AI Convention for it to take effect in their jurisdictions.

The new agreement has three over-arching safeguards:

  • protecting human rights, including ensuring people’s data is used appropriately, their privacy is respected and AI does not discriminate against them
  • protecting democracy by ensuring countries take steps to prevent public institutions and processes being undermined
  • protecting the rule of law, by putting the onus on signatory countries to regulate AI-specific risks, protect their citizens from potential harms and ensure AI is used safely

In November 2024, the UK will host a conference in San Francisco for discussions with AI developers on how they can put into practice the commitments made at the AI Seoul Summit. The event will feature a number of workshops and discussions focused on AI safety ahead of the Paris AI Action Summit.
