About this guidance

At a glance

This guidance covers what we think is best practice for data protection-compliant AI, as well as how we interpret data protection law as it applies to AI systems that process personal data. The guidance is not a statutory code. It contains advice on how to interpret relevant data protection law as it applies to AI, and recommendations on good practice for organisational and technical measures to mitigate the risks to individuals that AI may cause or exacerbate.

In detail

Why have you produced this guidance?

We see new uses of artificial intelligence (AI) every day, from healthcare and recruitment to commerce and beyond.

We understand the benefits that AI can bring to organisations and individuals, but there are risks too. We have set out some of these risks, such as AI-driven discrimination, in ICO25, our strategic plan for the next two years. Enabling good practice in AI has been one of our regulatory priorities for some time, and we developed this guidance on AI and data protection to help organisations comply with their data protection obligations.

The guidance:

  • gives us a clear methodology to audit AI applications and ensure they process personal data fairly, lawfully and transparently;
  • ensures that the necessary measures are in place to assess and manage risks to rights and freedoms that arise from AI; and
  • supports the work of our investigation and assurance teams when assessing the compliance of organisations using AI.

As well as using the guidance to support our own audit and enforcement activity, we wanted to share the thinking behind it. The framework therefore has three distinct outputs:

  1. Auditing tools and procedures which our investigation and assurance teams will use when assessing the compliance of organisations using AI. The specific auditing and investigation activities they undertake vary, but can include off-site checks, on-site tests and interviews, and in some cases the recovery and analysis of evidence, including AI systems themselves.
  2. This detailed guidance on AI and data protection for organisations, which outlines our thinking.
  3. A toolkit designed to provide further practical support to organisations auditing the compliance of their own AI systems.

This guidance covers what we think is best practice for data protection-compliant AI, as well as how we interpret data protection law as it applies to AI systems that process personal data.

This guidance is not a statutory code. It contains advice on how to interpret relevant data protection law as it applies to AI, and recommendations on good practice for organisational and technical measures to mitigate the risks to individuals that AI may cause or exacerbate. There is no penalty if you fail to adopt good practice recommendations, as long as you find another way to comply with the law.

This guidance is restricted to data protection law. There are other legal frameworks and obligations relevant to organisations developing and deploying AI that will need to be considered, including the Equality Act 2010 as well as sector-specific laws and regulations. You will need to consider these obligations in addition to this guidance.

What do you mean by ‘AI’?

Data protection law does not use the term ‘AI’, so none of your legal obligations depend on exactly how it is defined. However, because the term carries a variety of meanings, it is useful to understand broadly what we mean by AI in the context of this guidance.

We use the umbrella term ‘AI’ because it has become a standard industry term for a range of technologies. One prominent area of AI is ‘machine learning’ (ML), which is the use of computational techniques to create (often complex) statistical models using (typically) large quantities of data. Those models can be used to make classifications or predictions about new data points.
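
As a concrete illustration of that description, the minimal sketch below fits a statistical model to a handful of data points and uses it to classify a new one. It is not part of this guidance: the library choice (scikit-learn), the features, the values and the labels are all invented assumptions.

```python
# A minimal, illustrative sketch: "creating a statistical model using
# data" and using it to classify a new data point. The features
# (hypothetically, age and annual income) and labels are invented.
from sklearn.linear_model import LogisticRegression

# Training data: each row is a data point, each column a feature.
X_train = [[25, 40_000], [47, 82_000], [35, 55_000], [52, 91_000]]
y_train = [0, 1, 0, 1]  # invented binary outcome labels

model = LogisticRegression()
model.fit(X_train, y_train)  # fit the statistical model to the data

# The fitted model can now classify a new data point.
print(model.predict([[30, 48_000]]))
```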

While not all AI involves ML, most of the recent interest in AI is driven by ML in some way, whether in image recognition, speech-to-text, or classifying credit risk. This guidance therefore focuses on the data protection challenges that ML-based AI may present, while acknowledging that other kinds of AI may give rise to other data protection challenges.

You may already process personal data in the context of creating statistical models, and using those models to make predictions about people. Much of this guidance will still be relevant to you even if you do not class these activities as ML or AI. Where there are important differences between types of AI, for example between simple regression models and deep neural networks, we will refer to these explicitly.
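
To illustrate that distinction, the hedged sketch below (again not part of this guidance, using synthetic data and an assumed scikit-learn setup) fits a simple regression model and a small neural network to the same data. The simple model's handful of coefficients can be read directly, while the network's many weights are much harder to inspect, which is one reason the data protection challenges can differ between model types.

```python
# A synthetic-data sketch contrasting a simple regression model with a
# small neural network. All data is randomly generated for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))            # one input feature
y = np.sin(X).ravel() + rng.normal(0, 0.1, 200)  # a non-linear target

# Simple regression model: one coefficient per feature, easy to inspect.
linear = LinearRegression().fit(X, y)

# Small neural network: can capture the non-linear pattern, but its
# internal weights are far harder to interpret and explain.
nnet = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                    random_state=0).fit(X, y)

new_point = [[5.0]]
print("linear prediction:    ", linear.predict(new_point))
print("neural net prediction:", nnet.predict(new_point))
```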

How does this guidance relate to other ICO work on AI?

This guidance is designed to complement existing ICO resources, including our Big Data report and the ‘Explaining decisions made with AI’ guidance discussed below.

The Big Data report provided a strong foundation for understanding the data protection implications of these technologies. As noted in the Commissioner’s foreword to the 2017 edition, this is a complicated and fast-developing area. New considerations have arisen since, both in terms of the risks AI poses to individuals, and the organisational and technical measures that can be taken to address those risks. Through our engagement with stakeholders, we gained additional insights into how organisations are using AI on the ground, which go beyond those presented in the 2017 report.

Another significant challenge raised by AI is explainability. As part of the government’s AI Sector Deal, we worked with the Alan Turing Institute (The Turing) to produce guidance on how organisations can best explain their use of AI to individuals. The resulting ‘Explaining decisions made with AI’ guidance was published in May 2020.

While the Explaining decisions made with AI guidance already covers the challenge of AI explainability for individuals in substantial detail, this guidance includes some additional considerations about AI explainability within the organisation, eg for internal oversight and compliance. The two pieces of guidance are complementary, and we recommend reading them together. 

What is a risk-based approach to AI?

Taking a risk-based approach means:

  • assessing the risks to the rights and freedoms of individuals that may arise when you use AI; and
  • implementing appropriate and proportionate technical and organisational measures to mitigate these risks.

These are general requirements in data protection law. They do not mean you can ignore the law if the risks are low, and they may mean you have to stop a planned AI project if you cannot sufficiently mitigate those risks.

To help you integrate this guidance into your existing risk management process, we have organised it into several major risk areas. For each risk area, we describe:

  • the risks involved;
  • how AI may increase their likelihood and/or impact; and
  • some possible measures which you could use to identify, evaluate, minimise, monitor and control those risks.

The technical and organisational measures included are those we consider good practice in a wide variety of contexts. However, since many of the risk controls that you may need to adopt are context-specific, we cannot include an exhaustive or definitive list.

This guidance covers both the risks that AI poses specifically to data protection, and the implications of those risks for governance and accountability. Regardless of whether you are using AI, you should have accountability measures in place.

However, adopting AI applications may require you to re-assess your existing governance and risk management practices. AI applications can exacerbate existing risks, introduce new ones, or generally make risks more difficult to assess or manage. Decision-makers in your organisation should therefore reconsider your organisation’s risk appetite in light of any existing or proposed AI applications.

Each of the sections of this guidance deep-dives into one of the AI challenge areas and explores the associated risks, processes, and controls.

Is this guidance a set of AI principles?

This guidance does not provide generic ethical or design principles for the use of AI. While there may be overlaps between ‘AI ethics’ and data protection (with some proposed ethics principles already reflected in data protection law), this guidance is focused on data protection compliance.

Although data protection law does not dictate how AI developers should do their jobs, if you use AI to process personal data, you must comply with the principles of data protection by design and by default.

Certain design choices are more likely to result in AI systems which infringe data protection law in one way or another. This guidance will help developers and engineers understand those choices better, so you can design high-performing systems whilst still protecting the rights and freedoms of individuals.

It is worth noting that our work focuses exclusively on the data protection challenges introduced or heightened by AI. Therefore, more general data protection considerations are not addressed in this guidance, except insofar as they relate to, and are challenged by, AI. Neither does it cover AI-related challenges which are outside the remit of data protection.

What legislation applies?

This guidance deals with the challenges that AI raises for data protection. The most relevant piece of UK legislation is the Data Protection Act 2018 (DPA 2018).

The DPA 2018, together with the UK General Data Protection Regulation (UK GDPR), sets out the UK’s data protection regime. Please note that from January 2021, you should read references to the GDPR as references to the equivalent articles in the UK GDPR. The DPA 2018 comprises the following data protection regimes:

  • Part 2 – covers general processing and supplements and tailors the UK GDPR;
  • Part 3 – sets out a separate regime for law enforcement authorities; and
  • Part 4 – sets out a separate regime for the three intelligence services.

Most of this guidance will apply regardless of which part of the DPA applies to your processing. However, where there are relevant differences between the requirements of the regimes, these are explained in the text.

You should also review our guidance on how the end of the transition period impacts data protection law.

The impacts of AI on areas of ICO competence other than data protection, notably Freedom of Information, are not considered here.

How is this guidance structured?

This guidance is divided into several parts covering different data protection principles and rights.

The general structure is based on the foundational principles of data protection:

  • lawfulness, fairness, and transparency;
  • purpose limitation;
  • data minimisation;
  • accuracy;
  • storage limitation;
  • security; and
  • accountability.

It also provides more in-depth analysis of measures to comply with people’s individual rights.

To provide more support to AI developers and AI risk managers, we have also created an AI and data protection risk toolkit, together with technical guidance in Annex A on how data protection approaches fairness in AI, structured around the AI lifecycle. A glossary at the end of this guidance provides background information for non-technical professionals who want to understand the more technical aspects of the guidance.

As the technology evolves and legislation changes, we are likely to update this guidance.

Who is this guidance for?

This guidance covers best practice for data protection-compliant AI. It is written for two broad audiences.

First, those with a compliance focus, including:

  • data protection officers (DPOs);
  • general counsel;
  • risk managers;
  • senior management; and
  • the ICO’s own auditors – in other words, we will use this guidance as a basis to inform our audit functions under the data protection legislation.

Second, technology specialists, including:

  • machine learning developers and data scientists;
  • software developers/engineers; and
  • cybersecurity and IT risk managers.

The guidance is split into four sections that cover areas of data protection legislation that you need to consider.

While this guidance is written to be accessible to both audiences, some parts are aimed primarily at those in either compliance or technology roles and are signposted accordingly at the start of each section as well as in the text.

How should we use this guidance?

In each section, we discuss what you must do to comply with data protection law as well as what you should do as good practice. This distinction is generally marked using ‘must’ when it relates to compliance with data protection law and using ‘should’ where we consider it good practice but not essential to comply with the law. Discussion of good practice is designed to help you if you are not sure what to do, but it is not prescriptive. It should give you enough flexibility to develop AI systems which conform to data protection law in your own way, taking a proportionate and risk-based approach.

The guidance assumes familiarity with key data protection terms and concepts. We also discuss in more detail data protection-related terms and concepts where it helps to explain the risks that AI creates and exacerbates.

The guidance also assumes familiarity with AI-related terms and concepts. These are further explained in the glossary at the end of this guidance.

The guidance focuses on specific risks and controls to ensure your AI system is compliant with data protection law and provides safeguards for individuals’ rights and freedoms. It is not intended as an exhaustive guide to data protection compliance. You need to make sure you are aware of all your obligations and you should read this guidance alongside our other guidance. Your DPIA process should incorporate measures to comply with your data protection obligations generally, as well as conform to the specific standards in this guidance.