The November 2023 AI safety summit and the UK's direction of travel
The government has confirmed that the UK AI safety summit will be held at Bletchley Park on 1 and 2 November 2023.
At the summit, companies at the forefront of AI research and AI system development, together with AI experts, will consider the risks of AI and how they can be mitigated. The UK is expecting an international presence and for internationally coordinated action to follow.
The AI White Paper at a glance
The UK set out its proposals on AI regulation in its AI White Paper, published in March 2023. The UK's approach is aimed at regulating the use of AI rather than the technology itself, focussing on the context in which AI is deployed rather than on specific technologies.
The government proposed a lightly regulated, principles-based UK framework with no formal legislation. Under this framework the government puts itself in a monitoring role: using test beds and sandbox initiatives, conducting (and convening industry to conduct) horizon scanning, and promoting interoperability with international regulatory frameworks. Acknowledging AI's adaptivity and lack of explainability, the government has decided not to provide a legal definition of AI at this point.
In addition, the White Paper clarified that the framework is to be supplemented by assurance techniques, voluntary guidance and technical standards developed in collaboration with bodies such as the UK AI Standards Hub. No dedicated AI regulator will be appointed; the government instead favours a system in which existing sectoral regulators, such as the Information Commissioner's Office, the Health and Safety Executive and the Competition and Markets Authority, will be required to create context-specific rules and guidance based on the AI principles, tailored to the ways AI is used in their sectors.
The government's proposals, as set out in the AI White Paper, are covered in more detail in our previous article.
Since the UK's AI White Paper was published in March 2023
Generally, the government has moved from promoting the 'light touch' approach outlined in the AI White Paper to a position that focusses more on promoting "safety features and guardrails". It has declined to comment publicly on whether it will introduce AI legislation in the current parliament (which ends in mid-2024), and it appears to maintain its line that early AI regulation does not necessarily need legislation.
Alongside the March 2023 Spring Budget, the government published Sir Patrick Vallance's Pro-Innovation Regulation of Technologies Review (PIRT), setting out his recommendations, together with the government's response, which supports innovation in generative AI. The PIRT acknowledged that government engagement with stakeholders had shown that the relationship between intellectual property law and generative AI is unclear, and that there was a lack of regulatory clarity as to the direction of UK reforms, particularly for AI firms deploying text and data mining techniques to generate new content. In its response, and in support of the PIRT recommendations, the government proposed that the Intellectual Property Office (IPO) would produce a code of practice "by the summer" (nothing has arrived yet) to provide guidance supporting AI firms in accessing copyright-protected works as an input to their models, while ensuring there are protections (e.g. labelling) on generated output. We have published a detailed article on this.
In late March, the government launched a consultation (which closed on 21 June 2023) on the issues raised in the AI White Paper, including: the statutory duty requiring regulators to have due regard to the cross-sectoral principles; the allocation of legal responsibility for AI throughout the value chain; suggested approaches to the regulation of foundation models; and the creation of an AI regulatory sandbox. The government's response to the 406 responses received is expected "after the summer".
In response to the AI White Paper, in May 2023 the CMA launched an initial review into AI foundation models; a report setting out its findings is expected as early as September 2023. In its review, the CMA is looking at the evolving market for AI foundation models; the opportunities and risks for competition and consumer protection; and the principles that might best guide the development of these markets. As the UK is seeking a principles-based, non-statutory AI framework, the CMA's review findings and next steps are likely to be influential in shaping the approach of other UK regulators.
In June, the AI Council was dissolved and the government's Foundation Model Taskforce was established with £100m of funding to lead on AI safety and to develop international guardrails, such as shared safety and security standards and infrastructure.
In July, the House of Lords Communications and Digital Committee launched an inquiry to examine large language models and what needs to happen over the next 1–3 years to ensure the UK can respond to their opportunities and risks. The inquiry, which closes for evidence in September 2023, is likely to seek input from Ofcom and the Information Commissioner's Office on how they plan to deal with AI. It will also examine how the UK's approach compares with that of other jurisdictions, notably the EU, US and China.
Also in July, the Ada Lovelace Institute issued a policy briefing examining the UK's current plans for AI regulation and setting out 18 recommendations for the government and the Foundation Model Taskforce to help strengthen the proposed regulatory framework. These included: legislating to introduce rights and protections to further regulate biometric technologies and automated decision-making; establishing an AI ombudsman; introducing greater powers to request information from companies developing, deploying or using AI systems; increasing funding to regulators; and examining and strengthening the law on AI liability so as to redistribute legal and financial liability for AI risk across AI value chains.
Issues facing businesses and consumers
AI has existed in various forms since the 1950s, but it is only in the last few months that leading scientists and technical experts, among others, have issued warnings about the dangers of AI technology.
Setting aside the more headline-grabbing reports of existential threat from (in particular generative) AI, from a legal perspective there are a number of issues that we see as likely to arise when AI is used in the day-to-day activities of businesses and consumers:
- Breaches of confidentiality or data protection laws – these may occur if businesses provide confidential information or personal data to the AI system supplier, either at the stage of training the AI model or at the prompt stage when asking questions of a generative AI model
- Risk of professional negligence – if AI hallucinations are relied on when providing advice or making key business decisions
- Breach of equality laws – if a company implements decisions made by an AI model without checking for bias
- Contract disputes – arising out of an AI system's failure to perform, or breaches of software licences for exceeding the volume of permitted data. The lack of explainability of AI systems is likely to make these types of disputes more complex to run and their prospects of success less predictable
- An increase in product liability claims – products incorporating AI systems and used in high-risk areas may contain defects with catastrophic consequences for consumers
- Unintentional collusion – returning to the issue of explainability, a business may not realise that its AI-powered pricing algorithm is engaging in collusion, which may give rise to competition law issues
- Intellectual property (IP) issues – there is currently no clear answer on the authorship and ownership of IP contained in the output of generative AI models, nor on the question of whether it is lawful to use IP-protected works to train AI models (see above and our earlier article)
What answers can we expect or at least hope for in the autumn?
Businesses are grappling with multiple impending issues connected to the use of AI and are looking for answers – fast. Formal regulation and guidance have been slow to emerge from the government and regulators, who themselves have had little time to prepare to deal with this complex and fast-moving area.
The autumn is promising to be a busy time for possible answers. The IPO's code of practice, the CMA's initial review into AI foundation models, the government's response to the issues raised in the AI White Paper and the report from the House of Lords Communications and Digital Committee inquiry into large language models are all due in the coming weeks and months.
These are likely to provide a rich backdrop of information, ideas and recommendations for the government to feed into the dialogue at the global AI safety summit, which should itself begin to provide UK businesses and consumers with some answers to the many practical issues they are facing.