Regulation of AI – introduction
AI providers and users will be operating in an AI market regulated on a territory-by-territory basis, while navigating a growing and complicated web of overlapping global standards and alliances.
Part 1 – AI regulation in the UK
Existing regulators will take the lead in guiding the development and use of AI, applying context-specific approaches based on the five principles outlined in the AI white paper published in March 2023.
Part 2 – AI regulation in the EU
The EU AI Act entered into force on 1 August 2024. It aims to achieve proportionality by setting the level of regulation according to the potential risk the AI poses to health, safety, fundamental rights or the environment.
Part 3 – AI regulation in the US
The US President's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence sets out eight "guiding principles and priorities", details how they should be put into effect, and imposes reporting requirements.
Part 4 – AI regulation in Asia
A light-touch, voluntary approach to AI regulation is evident across much of Asia.
Part 5 – AI regulation globally
A central theme is the importance of international collaboration on identifying AI safety risks and creating risk-based policies, guidelines and standards to ensure safety in light of such risks.
Part 6 – practical considerations
AI-focused actors and providers have been preparing for their forthcoming AI obligations and governance requirements for some time, but it is now prudent for most organisations to turn their attention to the following practicalities.