RPC reacts to UK Government White Paper on AI
White Paper enables Government to retain agility as AI develops
Partner Jon Bartley, who specialises in data protection, said:
"The ICO has already spent the last 2-3 years focussing on new technologies including AI, which has included recruiting technology experts into the ICO, developing guidance on explainability and fairness in the use of AI, and investigating AI-driven discrimination in areas such as recruitment.
"This development bolsters the ICO's stated aim in its "ICO 25" strategy of working with other regulators to develop guidance and tools that businesses in different sectors can use, and builds on the work of the Digital Regulation Cooperation Forum.
"Personal data must be processed fairly and in a transparent manner, and relying on a lawful basis, such as "legitimate interests". Compliance with these principles can be challenging for some deployments of AI, such as generative AI (which can use images of real people to produce synthetic images or video – "deepfakes") or AI tools which determine recruitment applications or insurance premiums in ways which can be unfair or discriminatory. There are also specific rules in GDPR which restrict purely automated decision-making.
"The high level, principles-based approach, and the non-statutory architecture, enables the government to retain agility in regulation as new applications for AI are developed and greater understanding around harms is gained. The context-based approach, particularly the idea of joint guidance issued by regulators on the application of the principles to those they regulate, should also result in more practical guidance and reduced overlap or fragmentation.
" The approach also fits with the UK's pro-innovation and flexible approach to data protection post-Brexit, as seen in the Data Protection and Digital Information Bill, recently re-introduced to Parliament.
"It doesn't necessarily fit with the more prescriptive approach taken by other countries however, including the approach of the EU."
AI emphasises the need to secure data on which we increasingly rely
Partner Richard Breavington, head of cyber and tech insurance, said:
"The rapid emergence of AI reflects the ever increasing reliance on digital infrastructure across business sectors. Whilst the use of large amounts of data for AI learning creates opportunities, it also emphasises the importance of properly managing and securing the data on which we increasingly rely. Part of this will involve the right parties having plans in place, including appropriate insurance, for managing the inevitable residual risk to data.
"Part of the need to protect data, particularly personal data, in an AI context is to be as clear as possible as to accountability for the data used in AI processes. For example, in the event that data is misused or compromised, it is necessary to be clear about who was responsible for keeping that data secure. Particularly where the web of regulation seems to be increasing, it will be important to have clear regulatory standards and obligations around data management and security. It will also be important that any liabilities which could result can be appropriately insured.
"The White Paper provides that the UK's proposed regulation will be more flexible than that proposed in the EU and so provide cross-sectoral opportunities. This resonates with the apparent overall Government approach to reining back unnecessary obligations in using data – including in the recently published Data Protection and Digital Information Bill. But departing from the EU regulatory position can create complexities for international businesses operating across the UK and the rest of the EU."
AI White Paper provides a framework…but is very light touch on expectations
Technology Partner Helen Armstrong, said:
"The five key principles identified in the White Paper (safety, transparency, fairness, accountability and contestability) are all important considerations that the public will no doubt expect companies to have carefully considered before deploying AI, but it remains to be seen how – and indeed to what extent – regulators will ensure companies put these non-statutory principles into practice."
"While context-driven regulation may be more 'adaptable', the White Paper recognises that there is a real risk of gaps – both in terms of regulatory coverage and regulator capabilities.
"It is proposed that a central function (initially government) will play a role in monitoring and evaluating the framework, as well as educating regulators and encouraging strong cross-regulator collaboration. Companies that fall within the remit of multiple regulators will no doubt welcome this given the uncertainty created by the potential for divergences in guidance between regulators."
"The White Paper establishes a framework for regulators to have regard to when managing the risks that AI pose, but is very light touch on what will be expected of companies that develop and adopt AI.
"It remains to be seen how regulators will reflect the framework in practical guidance for regulated entities. For example, a risk assessment – akin to that required under US and Canadian law – could potentially be used to ensure that companies have considered both the risks and mitigating factors involved in the use of any given AI tool."
Balancing the needs of IP rightsholders and AI developers is no mean feat
Partner Ciara Cullen, an IP and technology litigation specialist, said:
"The current legislative framework for intellectual property is still very much playing catch-up with new technologies such as artificial intelligence (AI).
"Given the absence of a clear and robust legislative framework, there is considerable uncertainty for both IP rightsholders and AI developers and the scope for IP infringement is rife – balancing the competing interests of artists and creators (who want to be remunerated for use of protected works and derivative works) and AI developers (who some may say represent the future of the creative industries) is no mean feat.
"Where is the line between inspiration and copying? Undoubtedly both sides of the fence would benefit from new legislation to specifically deal with AI and to ensure that IP rightsholders will have their works adequately protected, but to allow for the development and commercialisation of AI systems in the UK, especially in relation to generative AI.
"Much like with other technologies, any legislation will have to evolve and change as AI gets better, so the Government will need to keep the regime under review and take a proactive stance rather than a reactive one."
Human oversight will still be critical to avoiding bias in AI
Employment Partner Kelly Thomson, said:
"AI technology can be very useful including as a potential way to disrupt the impacts of unconscious biases in human decision making. But there is clear evidence that bias from an underlying data set can become entrenched in an AI algorithm. The danger is creation of an apparently neutral decision maker which, in reality, has bias built into its DNA.
"The new AI framework doesn't change the legal obligations on AI users not to discriminate against individuals. Nor does it speak to the use of AI to proactively improve equity. Instead, it simply encourages sector regulators to ensure that regulated entities are complying with laws on discrimination by taking steps to avoid bias arising within AI.
"Human understanding and continuous oversight of AI technology and processes will be critical to doing this. But, at the same time, this emphasises the importance of proactively minimising the potential for the biases of those humans to find their way into the technology being created, reviewed and updated. "
Employment Partner Patrick Brodie, added:
"There's a moment to reflect when the Government's White Paper on AI lands on the same day that leading scientists and tech entrepreneurs suggest a pause to the development of new AI capabilities.
"Failing this Governments should intervene. Ultimately, the goal is to establish systems that are understandable, equitable and can be trusted."
AI can remove the need for humans to be exposed to risks…but it can also create risks
Legal Director Mamata Dutta, who specialises in Health and Safety, said:
"There is a clear benefit for AI in improving health and safety across industries. For example, dangerous tasks can be automated to remove the need for humans to be exposed to risks and AI can be used in a supervisory capacity to highlight discrepancies and potential problems in work practices.
"However, AI can itself create risks which will need to be monitored and addressed accordingly as has been proven with issues around automated vehicles, with indications that they were unable to detect wheelchair users and the uncertainty created by travelling amongst human drivers creating potential problems.
"The AI White paper emphasises the UK Government's approach not to bring in specific legislation too early so not to set unrealistic parameters when advances are being made so swiftly. As set out in Sir Patrick Vallance's Regulation for Innovation Report, the need for a flexible regulatory approach is considered likely to strike a better balance between providing clarity, building trust and enabling experimentation by innovators in the AI market.
"The requirements under both the current health and safety and product safety legislation are fairly broad. Health and safety legislation requires individuals and companies to ensure the health and safety of themselves and others affected by their undertakings, while product safety laws require manufacturers to ensure that products place goods on the market that are safe.
"However, until the regulators are asked to address issues in practice, it is difficult to assess how large any potential gaps may be and how they could be addressed.
"There can already be discrepancies between different individual regulators as to how current legislation is applied in different circumstances, and there is a risk that some may take a broader approach in applying the regulatory framework than others which could lead to different outcomes for similar perceived breaches.
"However, restrictive legislation that is brought in too early and which may soon be out of date can also create a risk in itself. It is therefore important for individuals and businesses to continue monitoring the impact of AI as well as the risks they create generally to ensure that safety remains of paramount importance."