New guidance from the ICO clarifies requirements for fairness in AI

12 April 2023

The Information Commissioner’s Office (“ICO”) has updated its guidance in relation to Artificial Intelligence (“AI”) after requests from UK industries to provide clearer guidelines on data protection compliant AI in the wake of the continued adoption of new technologies across various sectors.

When publishing the updated guidance, the ICO confirmed that enabling good practice in AI has been one of its regulatory priorities for some time. The guidance was drafted with that in mind and will continue to be updated as AI evolves.

Lauren McFarlane
Associate

AI is defined in the guidance as an umbrella term for a range of algorithm-based technologies that solve complex tasks by carrying out functions that previously required human thinking. Decisions made using AI are either fully automated or made with a ‘human in the loop’. The guidance clarifies that, as with any other form of decision-making, those impacted by an AI-supported decision should be able to hold someone accountable for it.

In order to support a framework for such accountability, the guidance builds on existing data protection principles including transparency, lawfulness and fairness. It also introduces a number of new definitions including algorithmic fairness, inductive bias, and post-processing bias mitigation.

The key updates are summarised below.

Transparency

The guidance clarifies that data processors need to be transparent about how they process personal data in AI systems. This means that, as with any other use of personal data, data processors must be clear, open and honest with people about how the personal data is to be used. To do so, data processors must include the following in their privacy information:

  • their purposes for processing personal data;
  • their retention periods for personal data; and
  • with whom they will share the personal data.

Privacy information must be provided to individuals before their information is collected or used to train an AI model, and before an AI model is applied to them.

The guidance suggests that explaining AI-assisted decisions has benefits to organisations and can help them comply with the law, build trust with customers, and improve internal governance. Explaining the process to individuals can help them understand the decision-making process and allow them to challenge and seek recourse where necessary.

There is no legal requirement to explain AI-assisted decisions, but with an eye on transparency, data processors should be mindful of the wording of any privacy notices so that people are clear about when and why their personal data is being processed and how that relates to the AI tool being used.

Lawfulness

The guidance clarifies that, since the development of AI systems involves processing personal data in different ways for different purposes, data processors must, in order to comply with the lawfulness principle, break down and separate each distinct processing operation and identify the appropriate lawful basis for each one.

One example cited is the development of a facial recognition system trained to recognise faces. Such a system could be used for multiple purposes, such as preventing crime, authentication, and tagging friends in a social network. Each of these applications may require a different lawful basis.

The guidance also suggests that when determining the purpose and lawful basis for processing data, data processors should separate the research and development phase (including conceptualisation, design, training and model selection) of AI systems from the deployment phase. This is because there are distinct and separate purposes, with different circumstances and risks. There may, therefore, be different lawful bases for AI development and deployment.
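The separation described above can be pictured as a simple processing register. The sketch below is purely illustrative: the operations, purposes and basis assignments are hypothetical examples, not taken from the ICO guidance, but it shows the idea of recording a distinct lawful basis per processing operation and keeping development separate from deployment.

```python
# Hypothetical processing register: one entry per distinct processing
# operation, each with its own purpose and lawful basis, and with the
# development phase kept separate from deployment.

LAWFUL_BASES = {"consent", "contract", "legal obligation",
                "vital interests", "public task", "legitimate interests"}

processing_register = [
    {"phase": "development", "operation": "train facial recognition model",
     "purpose": "R&D: model training", "lawful_basis": "legitimate interests"},
    {"phase": "deployment", "operation": "authenticate account holders",
     "purpose": "identity verification", "lawful_basis": "contract"},
    {"phase": "deployment", "operation": "suggest photo tags",
     "purpose": "social networking feature", "lawful_basis": "consent"},
]

def validate(register):
    """Check that every operation names a recognised UK GDPR lawful basis."""
    for entry in register:
        if entry["lawful_basis"] not in LAWFUL_BASES:
            raise ValueError(f"Unknown lawful basis for: {entry['operation']}")
    return True

validate(processing_register)
```

Note how the same facial recognition system appears under three entries: each application (crime prevention, authentication, tagging) would be assessed on its own, as the guidance's example suggests.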

The guidance considers in some detail the various lawful bases outlined in the UK GDPR as they relate to AI. As an example, it provides that consent may be an appropriate lawful basis in cases where the data processor has a direct relationship with the individual whose data is being processed. However, consent must be freely given, specific, informed and unambiguous, and must involve a clear affirmative act on the part of the individual.

Consent may also be an appropriate lawful basis for the use of an individual’s data during the deployment of an AI system (for example, for purposes such as personalising the service or making a prediction). However, the guidance points out that for consent to be valid, individuals must be able to withdraw it as easily as they gave it. A feature for withdrawal of consent must be built into the AI system.
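The "as easy to withdraw as to give" requirement has a concrete design consequence: withdrawal should be a single step, symmetric with granting. The following is a minimal sketch of one hypothetical way to structure such a consent record store; the class and method names are illustrative, not from the ICO guidance.

```python
# Minimal consent store sketch: withdraw() is the same one-step call as
# grant(), so withdrawing consent is exactly as easy as giving it.
from datetime import datetime, timezone

class ConsentStore:
    def __init__(self):
        # (user_id, purpose) -> (consented: bool, timestamp of last change)
        self._records = {}

    def grant(self, user_id, purpose):
        self._records[(user_id, purpose)] = (True, datetime.now(timezone.utc))

    def withdraw(self, user_id, purpose):
        # Deliberately mirrors grant(): no extra confirmation steps.
        self._records[(user_id, purpose)] = (False, datetime.now(timezone.utc))

    def has_consent(self, user_id, purpose):
        record = self._records.get((user_id, purpose))
        return record is not None and record[0]

store = ConsentStore()
store.grant("user-1", "personalisation")
store.withdraw("user-1", "personalisation")
assert not store.has_consent("user-1", "personalisation")
```

The AI system would then check `has_consent` before each consent-based processing operation, so a withdrawal takes effect immediately.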

Fairness

The updated guidance contains a particular focus on fairness, which is a key principle of data protection and particularly applicable to AI systems that infer data about people.

The guidance explains that any processing of personal data using AI that leads to unjust discrimination between people will violate the fairness principle. In particular, the guidance notes that because AI systems learn from data which may be unbalanced and/or reflect discrimination, they may produce outputs which have discriminatory effects on people based on their gender, race, age, health, religion, sexual orientation or other characteristics.

As a result of this potential for unfairness, the guidance introduces a new term to the UK data protection lexicon: ‘algorithmic fairness’, an emerging field of study in which computer scientists develop mathematical techniques to measure whether AI models treat individuals from different groups in discriminatory ways. The guidance recommends that companies using AI to process data use algorithmic fairness metrics to identify and mitigate the risk of unfair outcomes. This should be part of a holistic approach, which also includes thinking about:

  • the power and information imbalance between the data processor and the individuals whose data is being processed;
  • the nature and scale of any potential harm to individuals resulting from the processing of their data; and
  • the underlying structures and dynamics of the environment in which the AI will be deployed.
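To make "algorithmic fairness metrics" concrete, the sketch below computes one widely used example, the demographic parity difference: the gap in positive-outcome rates between groups. The metric choice, group labels and decision data are hypothetical illustrations and are not prescribed by the ICO guidance.

```python
# Illustrative sketch of one common algorithmic-fairness metric:
# "demographic parity difference" over hypothetical binary decisions.

def selection_rate(predictions):
    """Fraction of individuals who received a positive outcome (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_by_group):
    """Largest gap in positive-outcome rates between any two groups.

    A value near 0 suggests the model selects individuals from each
    group at similar rates; a large value flags a disparity that
    warrants investigation.
    """
    rates = [selection_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical binary decisions (1 = approved) for two groups.
outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 approved -> rate 0.625
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 approved -> rate 0.250
}

gap = demographic_parity_difference(outcomes)
print(f"Demographic parity difference: {gap:.3f}")  # 0.625 - 0.250 = 0.375
```

A metric like this is only a starting point: as the bullet points above indicate, the guidance expects it to sit within a broader assessment of power imbalances, potential harm and deployment context.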

The guidance also introduces a number of new concepts, including AI prediction as a service, automation bias, inductive bias, and post-processing bias mitigation. The glossary containing these new concepts and their definitions can be found here: Glossary | ICO.

The updated guidance in its entirety can be found here: Guidance on AI and data protection | ICO.

Lauren McFarlane, Associate: lmf@bto.co.uk / 0131 222 2939

