ICO updates its guidance on AI and data protection
The question
What are the key data protection principles which the Information Commissioner’s Office (ICO) expects organisations to follow when integrating AI into their product and service offerings?
The key takeaway
Given the ICO’s commitment to safeguarding vulnerable persons, and recent industry concerns about the use of generative AI technology (eg ChatGPT, AlphaCode, Google Bard), the ICO believes these updates should give the UK technology industry clarity on how data protection can be appropriately embedded into product and service offerings that use AI. To that end, the updated guidance provides a methodology for assessing AI applications, with a focus on processing personal data in a fair, lawful, and transparent manner.
The background
On 15 March 2023, the ICO published several updates to its “AI and data protection” guidance. These updates aim to deliver on the ICO’s commitment (under ICO25) to help organisations adopt new technology while safeguarding people, especially the vulnerable. The updates also demonstrate the ICO’s support for the UK Government’s “pro-innovation” approach to AI, as outlined in the Government’s White Paper published on 29 March 2023 (see also our reaction to the UK Government’s White Paper on AI).
The development
Below is a breakdown of the ICO’s key updates and the UK GDPR principles to which they relate:
- Accountability – As with many of the ICO’s recent guidance updates, new content clarifies what organisations using AI should consider when performing a data protection impact assessment (DPIA). As before, a DPIA should be conducted where an organisation’s use of AI involves:
- systematic and extensive evaluation of individuals based on automated processing, including profiling, on which decisions that produce legal or similarly significant effects will be based
- large-scale processing of special category data
- systematic monitoring of publicly accessible areas (eg internet forums) on a large scale, and
- processing operations which are likely to result in a high risk to the rights and freedoms of data subjects (eg data matching, invisible tracking, or behaviour tracking).
Where the above conditions are met, the ICO now expects that an organisation’s DPIA will assess whether it is “more or less risky” for the organisation to use an AI system. This means that the DPIA should demonstrate that the organisation has considered: (i) using alternatives to the AI system (if any) which present less risk to individuals, individuals’ rights, the organisation, or wider society, and which achieve the same result; and (ii) why the organisation chose not to use any less risky alternatives which were identified. The ICO states that these considerations are particularly relevant where an organisation uses public task or legitimate interests as its lawful basis for processing personal data.
Additionally, when considering the impact of using a particular AI system to process personal data, the ICO has stressed that an organisation’s DPIA should consider the following harms (illustrated in the sketch after this list):
- Allocative harms – Harms caused by decisions to allocate goods and opportunities, eg favouring male candidates in a recruitment process.
- Representational harms – Harms caused by using an AI system which reinforces the subordination of groups based on identity factors, eg an image recognition system which assigns labels reflecting racist stereotypes to pictures of individuals from a minority group.
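Purely as an illustration of how these accountability steps might be operationalised, the sketch below expresses the ICO’s DPIA triggers and the “more or less risky” comparison as a Python checklist. All class, field and function names are hypothetical shorthand of our own; the ICO’s guidance prescribes no code.

```python
from dataclasses import dataclass, field

@dataclass
class AIProcessingProfile:
    """Hypothetical summary of how an AI system processes personal data."""
    automated_evaluation_with_legal_effects: bool = False  # incl. profiling
    large_scale_special_category_data: bool = False
    large_scale_public_monitoring: bool = False    # eg internet forums
    likely_high_risk_processing: bool = False      # eg invisible or behaviour tracking

def dpia_required(profile: AIProcessingProfile) -> bool:
    """A DPIA is needed where any of the ICO's listed trigger conditions is met."""
    return any([
        profile.automated_evaluation_with_legal_effects,
        profile.large_scale_special_category_data,
        profile.large_scale_public_monitoring,
        profile.likely_high_risk_processing,
    ])

@dataclass
class RiskComparison:
    """Records the 'more or less risky' analysis the DPIA should evidence."""
    less_risky_alternatives: list = field(default_factory=list)  # options achieving the same result
    reasons_rejected: dict = field(default_factory=dict)         # why each alternative was not chosen
    allocative_harms: list = field(default_factory=list)         # eg favouring male candidates
    representational_harms: list = field(default_factory=list)   # eg stereotyped image labels
```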
- Transparency – The ICO has added a new standalone chapter to its “Explaining Decisions Made with AI” guidance. This chapter focuses on the importance of organisations being transparent with individuals when processing their personal data using AI systems. The key practical point is that, where an organisation collects data directly from individuals to train an AI model, or to apply an AI model to those individuals, it must provide privacy information to them before their data is used for that purpose. Where such data is collected from other sources, the organisation must provide privacy information to the individuals within a reasonable timeframe (no later than one month), or earlier if it contacts the individuals or discloses their data to a third party (a timing sketch follows below).
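By way of a minimal sketch only, the timing rule above can be read as a simple decision function. The Python below is our own hypothetical rendering, approximating “no later than one month” as 31 days; none of these names come from the ICO’s guidance.

```python
from datetime import date, timedelta
from typing import Optional

def privacy_notice_deadline(collected_directly: bool,
                            collection_date: date,
                            first_contact: Optional[date] = None,
                            third_party_disclosure: Optional[date] = None) -> date:
    """Latest date for providing privacy information (illustrative only).

    Direct collection: privacy information must be provided before the data
    is used, ie at the point of collection. Indirect collection: within one
    month (approximated here as 31 days), or earlier if the organisation
    contacts the individual or discloses the data to a third party first.
    """
    if collected_directly:
        return collection_date
    deadline = collection_date + timedelta(days=31)
    for event in (first_contact, third_party_disclosure):
        if event is not None and event < deadline:
            deadline = event
    return deadline
```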
- Lawfulness – Here, the ICO has added two new sections to its chapter on “What do we need to do to ensure lawfulness, fairness, and transparency in AI systems?”. In these new sections, the ICO focuses on:
- Using AI to make inferences – organisations may use AI to guess or predict details about individuals or groups, or use correlations between datasets to categorise, profile, or make predictions about such individuals or groups. The ICO states that such “inferences” can constitute personal data, or special category data, in and of themselves. To constitute personal data, it must be possible to relate the inferences to an identified or identifiable individual. To determine whether an inference constitutes special category data (triggering Article 9 UK GDPR), organisations should assess whether the use of AI allows them to: (i) infer relevant information about an individual; or (ii) treat someone differently on the basis of the inference (see the sketch at the end of this section).
- Relationship between inferences and affinity groups – where inferences permit organisations to: (i) make predictions about individuals; (ii) create affinity groups from those predictions; and then (iii) link the predictions to specific individuals, the ICO stresses that data protection law will apply. Specifically, it will apply to: (i) the development stage of a product or service offering, ie using personal data to train an AI model; and (ii) the deployment stage, ie applying an AI model to individuals outside the training dataset. Additionally, organisations must consider whether such processing may cause damage to the individuals whose data is being processed, whether data protection by design has been appropriately implemented in the offering, and the impact the offering will have on society once it is deployed.
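To make the two-step inference test described above concrete (first personal data, then special category data), here is a minimal hypothetical sketch of it as a classification function. The parameter names are our own shorthand for the questions the ICO asks, not terms drawn from the guidance.

```python
def classify_inference(relates_to_identifiable_individual: bool,
                       infers_special_category_information: bool,
                       treats_individual_differently: bool) -> str:
    """Classify an AI-generated inference under the UK GDPR (illustrative).

    Step 1: an inference is personal data only if it can be related to an
    identified or identifiable individual. Step 2: it is special category
    data (engaging Article 9 UK GDPR) if it infers relevant information
    about the individual, or is used to treat them differently.
    """
    if not relates_to_identifiable_individual:
        return "not personal data"
    if infers_special_category_information or treats_individual_differently:
        return "special category data (Article 9 UK GDPR engaged)"
    return "personal data"
```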
- Fairness – The new content introduced by the ICO to its chapter on “Fairness in AI” states that organisations should only process personal data (including for an AI offering) in a manner which individuals would reasonably expect, and not use data in a way that would cause unjustified adverse effects on individuals. The guidance stresses that where organisations utilise AI, they should ensure that both the processing itself, and the decisions made based on that processing, are sufficiently statistically accurate such that they do not discriminate against individuals. In addition to highlighting that the fundamental principles of the UK GDPR must be considered throughout the design and development of an AI offering, the guidance refers to the importance of data protection by design and default considerations, and of performing a comprehensive DPIA. Further, a new annex, “Fairness in the AI lifecycle”, details the fairness considerations which the ICO expects AI engineers and key decision-makers to keep in mind throughout the development and use of their AI products and services.
Why is this important?
These updates provide AI engineers and key decision-makers with important reference materials when considering, designing, developing and deploying product or service offerings which use, or will use, AI technology. By following and implementing the fundamental principles of the UK GDPR, as well as the specific recommendations detailed by the ICO, organisations can help mitigate the risk of future enforcement action.
Any practical tips?
While these updates provide additional clarity, they should be viewed as a supplement to, not a substitute for, the ICO’s original “AI and data protection” guidance, and the ICO’s recommendations in its “Explaining Decisions Made with AI” guidance.
For a practical, step-by-step guide on reducing the risk of enforcement action being taken against their products and services, organisations can use the ICO’s “AI and data protection risk toolkit”. Viewed together with the ICO’s AI guidance, the toolkit provides a template against which organisations can compare their internal AI design and development processes, helping them ensure they meet the ICO’s expectations, from a data protection and privacy perspective, on the integration and use of AI in their products or services.
Summer 2023