ICO Statement on Generative AI Model Training
The question
What position does the Information Commissioner’s Office (ICO) continue to take on Generative AI Model training?
The key takeaway
The ICO has issued a statement outlining the steps businesses need to take to remain compliant with data protection regulation when training generative AI models. These include: providing clear and comprehensible information to data subjects about the training; giving data subjects a real choice as to whether their personal data will be used to train generative AI models; and ensuring that there is a robust and comprehensive Data Protection Impact Assessment (DPIA) justifying the approach taken.
The background
On 13 September 2024, the ICO published a statement setting out its position on businesses collecting user data to train generative AI models. The guidance also covers DPIAs, which controllers are required to complete before conducting data processing that could pose a high risk to the rights and freedoms of data subjects under the UK GDPR.
The ICO is becoming increasingly active in regulating how generative AI models are trained. On 20 September 2024, the ICO commented on LinkedIn's suspension of its training of generative AI models on UK users' data pending 'further engagement' with the company. The lack of a clear opt-out for users who did not want their user-generated data used to train LinkedIn's generative AI models was a key concern for the ICO. Engagement with the ICO is expected to continue before LinkedIn resumes its model training in the UK, potentially in an altered form.
This sits against the backdrop of an increasingly collaborative regulatory approach among data protection authorities globally. For example, the Irish Data Protection Commission (DPC) has begun a cross-border enquiry with its peer regulators in continental Europe into the DPIA produced for a large tech company's generative AI model training programme.
The development
The ICO's statement highlighted the following key considerations for companies seeking to use user data to train AI models:
- making it simpler for users to object to data processing and ensuring that any opt-outs are clearly provided
- increasing transparency about the usage of individuals’ data in the model training process by using plain language to provide meaningful information about the training, and
- conducting a thorough and robust DPIA that fully identifies the risks posed to data subjects and the mitigations adopted to reduce those risks to an acceptable level, and, where the risks cannot be reduced to an acceptable level, consulting with the ICO or relevant supervisory authority.
The ICO further emphasised the importance of businesses independently continuing to meet high regulatory standards. It highlighted the ongoing and evolving nature of the regulatory process, set against a broader worldwide trend towards collaborative investigation of generative AI model training.
Why is this important?
Businesses must take note of the ICO's comments and the broader regulatory backdrop. Supervisory authorities have been clear that organisations must comply with data processing laws and regulations before processing the personal data necessary for training; a failure to do so could result in the project being halted pending regulatory review.
The ICO's engagement with LinkedIn is a pertinent example of the need for proactive compliance with data processing laws; failure can mean a pause to AI model training, incurring cost and delay from an operational perspective. The external risk of regulatory enforcement action (including fines and investigations) also remains, alongside the risk of reputational harm among an increasingly privacy-conscious public.
Businesses should keep up to date with guidance from the ICO and other regulatory bodies to demonstrate continuing compliance with applicable laws. Companies should remain transparent about their data processing and give data subjects clear, privacy-friendly opt-outs.
Compliance should be proactive and clearly evidenced, with co-operation with relevant regulatory bodies such as the ICO where appropriate. Proactive steps which businesses can take include: (i) conducting a DPIA and engaging with the ICO or relevant supervisory authority at an early stage in order to validate the approach and implement any necessary safeguards; (ii) reviewing online user journeys to ensure that clear and easy-to-use opt-outs are available for users; and (iii) updating privacy notices to ensure that meaningful information about the training process is provided to users.
Winter 2024