This section outlines the main data protection principles and requirements that you must take into account in the context of age assurance. If you are implementing age assurance systems, you must:
- consider the risks to children that arise from your platform or service;
- determine whether age assurance of users is necessary; and
- select an approach that is appropriate and proportionate to the risk.
You must embed data protection into the design of your products, services and applications.
When assessing the age of your users, you are likely to be processing both adults’ and children’s personal information. Data protection law requires you to protect everyone's personal information. This section therefore applies to the processing of all users’ personal information when you are assessing their age.
6.1 Principles
The UK GDPR sets out seven key principles which lie at the heart of data protection. You must follow these when processing personal information. The principles are interlinked, and you may find that complying with one principle helps you to comply with another.
6.1.1 Lawfulness
You must identify a lawful basis before you start processing personal information for age assurance purposes. There are six lawful bases to choose from. Lawfulness also means not doing anything with the personal information that is unlawful in a more general sense.
The two lawful bases that you are most likely to consider for age assurance processing are legitimate interests or legal obligation.
Legitimate interests involves a three-part test, which includes demonstrating necessity and balancing the rights and freedoms of people. It places particular emphasis on the need to protect the interests and fundamental freedoms of children.
Legal obligation applies to processing that you are legally obliged to do, and also requires you to demonstrate necessity. For example, it might be appropriate for age assurance required by online safety legislation or gambling licensing conditions.
Some age assurance techniques rely on biometric data, which can uniquely identify someone. This is more sensitive personal information, categorised as special category data under the UK GDPR, and it is given additional protections.
Further reading
Guidance on the principles is available here: Data protection principles - guidance and resources.
Further guidance on lawful bases is also available.
6.1.2 Fairness
If you use people’s information for age assurance, you must be fair. Fairness means that you must only handle it in ways people would reasonably expect, and not in ways that have an unjustified adverse impact on them. You could use market research or user testing to help establish what users reasonably expect in this context.
The code requires that you do not process children’s personal information in ways that are obviously, or have been shown to be, detrimental to their health or wellbeing. To do so would not be fair.
Fairness in data protection law is broader than fair treatment and non-discrimination. When using age assurance, you should scrutinise and minimise any potential bias in your approach.
You must provide tools so that people can challenge inaccurate age assurance decisions. You should make these tools accessible and prominent, so people can exercise their rights easily.
Where you determine age through solely automated decision-making, Article 22 of the UK GDPR has additional rules to protect people and ensure that processing is fair.
6.1.3 Transparency
Transparency is fundamentally linked to fairness. If you are not clear and transparent about how you will process people’s information for age assurance, it is unlikely that your processing will be fair.
People have the right to be informed about your processing of their personal information. You must be clear, open and honest about how you use people’s information for age assurance purposes, and how you make decisions.
Standard 4 of the code provides advice on how you should present this type of information to children. You should consider how age assurance fits into your user journey and experience to determine how and when it is best to provide this type of information.
Regardless of the method used for age assurance, you must explain clearly to people:
- why you are using age assurance;
- what personal information you need for the age assurance check;
- whether you will use a third party to carry out the age assurance check;
- how you use the personal information and how it will affect the user’s experience of the platform or service;
- whether you retain the personal information you collect for age assurance and, if so, how, why and for how long; and
- the rights available to people, including how they can challenge an incorrect age assurance decision.
You must be able to explain how you arrived at the decision, in a way that people can understand.
If you are relying on solely automated decision-making, depending on the impact of that decision on the person, there may be additional data protection requirements.
People have the right to be informed. Children have the same rights as adults, including the right to rectification and the right to be forgotten. Even if a child is too young to understand the implications of their rights, they are still their rights rather than anyone else’s, such as a parent or guardian. In Scotland there is a presumption that a child of 12 or over has sufficient understanding to be able to exercise their rights. There is no equivalent presumption elsewhere in the UK.
You should only allow parents to exercise these rights on behalf of a child if:
- the child authorises them to do so;
- the child does not have sufficient understanding to exercise the rights themselves; or
- it is evident that this is in the best interests of the child.
6.1.4 Purpose limitation
You must only process personal information for specific and legitimate purposes, and not further process it in a manner incompatible with those purposes. Purpose limitation is closely linked to transparency, fairness, and data protection by design.
If you are implementing an age assurance system, you must:
- be clear about what personal information you process;
- be clear about why you want to process it;
- ensure you only collect the minimum amount of personal information you need to establish an appropriate level of certainty about the age of your users; and
- ensure you do not use personal information collected for age assurance for any other purpose, unless the new purpose is compatible with age assurance.
If you are a developer of age assurance systems, you must build your systems with data protection in mind.
You must not re-use personal information collected for age assurance for purposes such as profiling for advertising, or in other ways that are incompatible with the purposes you collected it for.
Information that you have collected during your normal course of providing a service may be relevant for age assurance purposes. You may re-use this information to assess someone’s age, but only if:
- the age assurance process is compatible with your original purpose for collecting information;
- you have the appropriate level of consent; or
- you have a clear obligation or function set out in law.
You must ensure that the new use of personal information is fair, lawful and transparent.
Purpose limitation also applies to sharing personal information. Standard 9 of the code notes that you should not share children’s information, such as children’s age assurance information, unless you can demonstrate a compelling reason to do so, taking account of the best interests of the child. Where you have to share children’s age assurance information, you should demonstrate and document why it is necessary to do so. In your privacy notice, you must clearly state the circumstances in which you might need to share this information.
6.1.5 Data minimisation
You must ensure that the personal information you collect is adequate, relevant and limited to what is necessary for the purpose.
Age assurance may require you to process personal information beyond what is involved in delivering your core service. You must apply data minimisation to your chosen age assurance approach. This means that you must make sure that the personal information you process for age assurance purposes:
- is sufficient to properly achieve the stated purpose of the age assurance (adequate);
- has a rational link to that purpose (relevant); and
- is no more than you need for that purpose (limited to what is necessary).
The data minimisation principle means that the personal information you collect must be adequate to achieve your purpose. In the context of age assurance, self-declaration can be easily circumvented, which means the information you collect is likely to be insufficient for high-risk scenarios. Therefore, you may require more personal information to achieve your purpose. In most cases, as long as you limit your processing to what is necessary and proportionate, it is likely to be appropriate to use age assurance to reduce the risk of harm to children while complying with data minimisation.
You must only use personal information necessary to undertake age assurance. What is necessary is linked to what is proportionate for the circumstances. A service or platform that does not pose a high risk to children is likely to need to process less information to assess or verify the age of users than one that poses a high risk to children.
In many cases it may be excessive to require sight of an official document (eg a passport or driving licence). This is because you can use an age assurance method that processes less personal information whilst still being proportionate to the risks faced by children. You may only need to record a yes or no output confirming that a person meets the age threshold.
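To illustrate, the sketch below shows one way a minimal record could look once a check is complete: only the yes or no outcome, the threshold applied and the time of the check are kept, while the evidence used (here, a date of birth standing in for a document scan) is discarded. The structure and names are hypothetical, not taken from any particular age assurance product.

```python
from dataclasses import dataclass
from datetime import date, datetime, timezone

@dataclass(frozen=True)
class AgeCheckResult:
    """Minimal record kept after a check: no document details, no date of birth."""
    meets_threshold: bool  # yes/no output, eg "is 18 or over"
    threshold: int         # the age threshold that was applied
    checked_at: datetime   # when the check took place
    method: str            # eg "document", "estimation"

def check_age(date_of_birth: date, threshold: int = 18) -> AgeCheckResult:
    # The date of birth stands in for whatever evidence the check uses;
    # it is used transiently and only the boolean outcome is retained.
    today = date.today()
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return AgeCheckResult(
        meets_threshold=age >= threshold,
        threshold=threshold,
        checked_at=datetime.now(timezone.utc),
        method="document",
    )

result = check_age(date(2001, 5, 14))
print(result.meets_threshold)  # True; nothing else about the evidence survives
```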
Further reading
Please see our guidance on data minimisation for further information.
6.1.6 Accuracy
This section refers to accuracy in the context of data protection; section 6.3.4 covers the statistical accuracy of algorithms.
You must ensure that the personal information you process for the purpose is accurate.
The accuracy principle applies to all personal information, whether it is used as an input to or an output of an AI system. This does not mean that an AI system needs to be 100% statistically accurate to comply with the accuracy principle.
You must have methods in place to mitigate the risk that the personal information you collect may be inaccurate. When using age estimation methods, you should record the results as estimates rather than as matters of fact. People have the right to correct inaccuracies in their information, which means you must consider any challenges to its accuracy.
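By way of illustration only, the sketch below records an estimation output explicitly as an estimate, with an error margin and a flag for recording a challenge. The names and the margin shown are hypothetical, not drawn from any real estimation product.

```python
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    """Records an estimation output as an estimate, not a matter of fact."""
    estimated_age: float      # central estimate returned by the model
    margin_years: float       # illustrative error margin, eg +/- 1.5 years
    method: str               # eg "facial_estimation"
    challenged: bool = False  # set when the person disputes the result

    def meets_threshold(self, threshold: int) -> bool:
        # A cautious reading: only treat the threshold as met if the
        # whole margin sits above it.
        return self.estimated_age - self.margin_years >= threshold

estimate = AgeEstimate(estimated_age=19.2, margin_years=1.5, method="facial_estimation")
print(estimate.meets_threshold(18))  # False: 19.2 - 1.5 = 17.7, below 18
```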
If you are developing age assurance solutions, you should test them for accuracy. If you are using an external solution, you should seek evidence from your suppliers, such as certification.
Incorrect outcomes for age assurance are likely to be:
- an adult wrongly identified as a child, or a child wrongly identified as younger than they are, is denied access to a platform or service that is suitable for them to access;
- a child who is wrongly identified as an adult, or as older than they are, is able to access a product or service that is restricted to adults or children of an older age; or
- an adult wrongly identified as a child gains access to child-only services with a maximum age limit which may result in risks to the child users.
Inaccuracy presents risks. For example, a child who is attributed an incorrect age may access services intended for adults or older children. They may unwittingly consent to further processing of their personal information that leads to inappropriate profiling. In that situation, it is unlawful to process information of children under 13 if there is no evidence of consent from someone with parental responsibility. This is because only children aged 13 or over are able to provide their own consent in these circumstances. Conversely, adults may suffer detriment or harm if they are denied access to services they need.
No system is foolproof. You should consider how likely it is that age checks may be bypassed or spoofed (how a system might be deceived into thinking an individual is a different age) and the associated impact, for example the potential harms that can arise if inaccurate age information is collected about your users. For any age assurance approach, you should also consider:
- how an adult or older child wrongly denied access to part or all of a platform or service can challenge a decision;
- how a child wrongly identified as an adult or as older than they are (or someone with parental responsibility acting for them) can rectify this outcome; and
- whether the potential harm to children accessing an inappropriate platform or service is sufficient to justify ongoing monitoring of all users, for example to identify children who may have wrongly gained access.
In addition, you should consider whether further checks are required when a child reaches age 13 (the age at which they are able to provide their own consent as outlined in Article 8 UK GDPR) and 18 (the point at which they are recognised as an adult). This will ensure that users on the service are only able to access parts of the service which are appropriate to them.
Further reading
Please see our guidance on accuracy and the right to rectification for further information.
6.1.7 Storage limitation
You must not keep people’s information for longer than you need it. You should be able to justify how long you keep personal information collected for age assurance purposes and you should have a policy that sets out retention periods.
The frequency of your age checks should be proportionate to the risks posed by your service. It may be necessary to repeat age checks at suitable intervals to ensure the personal information you collect remains accurate. In this case, you should erase personal information obtained through previous checks that is no longer required. This ensures that you do not hold age assurance information for longer than necessary.
You must retain only the minimum amount of personal information necessary for the purpose. If you use a hard identifier to assess age, you may only need to retain a yes or no output once you’ve completed the check.
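As an illustration of these two points together, the sketch below applies a retention period to stored yes or no results and deletes anything older. The period shown is a placeholder, not a recommended value; the appropriate period depends on your service and must be justified.

```python
from datetime import datetime, timedelta, timezone

# Placeholder retention period for a stored yes/no result.
RESULT_RETENTION = timedelta(days=365)

def purge_expired(results: list[dict]) -> list[dict]:
    """Keeps only results whose retention period has not yet passed."""
    now = datetime.now(timezone.utc)
    return [r for r in results if now - r["checked_at"] < RESULT_RETENTION]

stored = [
    {"user_id": "a1", "meets_threshold": True,
     "checked_at": datetime.now(timezone.utc) - timedelta(days=400)},
    {"user_id": "b2", "meets_threshold": True,
     "checked_at": datetime.now(timezone.utc) - timedelta(days=30)},
]
print(purge_expired(stored))  # only the 30-day-old result survives
```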
People have the right to have their information erased in certain circumstances. You must consider challenges to your retention of personal information you collected for age assurance.
Further reading
Please refer to our guidance on the right to erasure and Principle (e): Storage limitation for further information.
6.1.8 Integrity and confidentiality (security)
You must process people’s information securely when you use it for age assurance purposes. You must consider how the system collects or shares information, as well as the personal information involved. You should include this as part of your data protection by design approach and address considerations about risk analysis, organisational policies, and physical and technical measures.
You must consider the state of the art and costs of implementation when deciding which security methods to use. You must put in place methods that are appropriate both to the circumstances and the risk the processing poses.
If you use a third-party supplier, you must ensure appropriate data security methods are in place through due diligence checks.
If using AI, you should consider the balance between transparency and security. For example, you should ensure that the technical information you make available about the system does not allow a malicious actor to re-identify people.
Further reading
Please see our guidance on AI and security for further information.
Our general guidance on Principle (f): Integrity and confidentiality (security) is available here.
6.1.9 Accountability
The accountability principle means that you must be able to demonstrate how your age assurance activities comply with data protection law.
There are a number of accountability measures that you must take (where applicable), including:
- adopt and implement data protection policies;
- take a data protection by design and default approach to age assurance;
- put written contracts in place with third party age assurance services that process information on your behalf (these may be processors or joint controllers depending on the exact circumstances of the relationship);
- maintain documentation of your age assurance processing activities;
- implement appropriate security measures for your age assurance processing; and
- record and, where necessary, report personal data breaches.
You must take a data protection by design approach to age assurance. You must put in place appropriate technical and organisational measures to implement the data protection principles effectively and safeguard people’s rights. This means integrating data protection into your age assurance activities from the design stage right through the lifecycle.
You must be able to demonstrate that your approach to age assurance is proportionate to the risks to children associated with a platform or service.
A data protection impact assessment (DPIA) is a key accountability tool that you must implement if your processing is likely to result in a high risk to people’s rights and freedoms. You should carry out a DPIA at an early stage in the design of any product or service that involves processing personal information (even if it is not a requirement). This applies to age assurance. Standard 2 of the code explains how DPIAs fit into the wider context of the children’s code.
In some cases, age assurance may be unnecessary. For example:
- where you demonstrate that the risks to children are not high;
- where the service is unlikely to be accessed by a significant number of children; or
- if all the content or services you provide to all your users conform to the code.
You should assess whether a significant number of children are likely to access your service. You should consider this in your DPIA to justify which age assurance method to apply, if any. This helps demonstrate compliance with accountability requirements.
6.2 ICO certification schemes
You could use the ICO’s approved and published certification schemes to demonstrate accountability. Certification provides a framework for you to follow, helping ensure compliance and offering assurance that specific standards are met.
Certification allows people to assess the data protection compliance of an organisation’s age assurance product, process or service. This provides transparency both for people and in business-to-business relationships.
Applying for certification is voluntary. However, if there is an approved certification scheme that covers your processing activity, you could consider working towards it as a way of demonstrating compliance with the UK GDPR.
For example, in 2021 we approved and published the Age check certification scheme (ACCS) which tests that age assurance products work. The scheme includes data protection criteria (ACCS 2:2021) for those organisations operating or using age assurance products.
If you use age verification systems that are not certified, you should still be able to provide other evidence that the checks you use are effective.
6.3 Age assurance and AI
Artificial Intelligence (AI) has become a standard industry term for a range of technologies. In this section, we outline a number of data protection considerations that may arise when you implement age assurance methods that use AI.
6.3.1 Biometric data
Age assurance methods may use biometric data, depending on the type of technology deployed.
Some age verification approaches may use biometric recognition technologies to match an image of someone to the photograph on their official documentation to prove their age (eg a passport or a driving licence).
Some age estimation approaches may use biometrics for face or voice analysis and classification to provide an estimate of a person’s age.
Both recognition and classification approaches use AI or machine learning (ML). However, from a data protection compliance perspective, the information they process, and the associated obligations on organisations, may differ.
Biometric recognition technologies process biometric data for the purpose of unique identification. In an age verification scenario, an image of the person requesting verification is captured and turned into a biometric template. This template is then compared with another, generated from the image on the official photo ID.
The purpose of the comparison is to find a match between the two images (recognise the person). This means that the age verification solution can be confident that the person presenting is the same person pictured on the official ID. This provides proof (verification) of the person’s age (or that their age is over a set threshold). Whenever you use biometric data for the purpose of uniquely identifying someone, it is special category biometric data. Special category data requires further protection due to its sensitive nature.
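At a high level, the comparison step can be pictured as measuring the similarity of two numeric templates and applying a match threshold, as in the sketch below. The vectors and the threshold are made up for illustration; real systems derive templates with specialised models and calibrate their thresholds empirically.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two template vectors (1.0 means identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

MATCH_THRESHOLD = 0.9  # illustrative only; real thresholds are calibrated

# Hypothetical templates from the live capture and the official ID photo.
live_template = [0.12, 0.85, 0.31, 0.44]
id_photo_template = [0.10, 0.88, 0.29, 0.47]

is_match = cosine_similarity(live_template, id_photo_template) >= MATCH_THRESHOLD
print(is_match)  # True: the templates are close enough to count as a match
```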
Before processing special category biometric data, or if the solution you are using is AI-driven, you must complete a DPIA. This documents your purpose for processing this information, and assesses and manages any risks which may arise.
To process special category biometric data, you must identify a valid Article 9 condition for processing.
Assuming it is proportionate for your service to use biometric data for age assurance, it is likely that you can apply the condition for substantial public interest. This is because the processing is likely to be necessary to safeguard children and people at risk (Article 9(2)(g) UK GDPR; Schedule 1, paragraph 18 of the DPA 2018).
Further reading
Please see our guidance on special category data and biometric data for further information.
6.3.2 Age assurance and profiling
Profiling refers to any form of automated processing of personal information that is used to evaluate or predict someone’s behaviour or characteristics. Profiling can involve the use of AI and ML techniques to either inform decision-making or make decisions automatically. AI-based profiling makes inferences about people, based on patterns that an AI model observes, and can classify people into different groups or segments. This analysis identifies links between different behaviours and characteristics to create profiles of people.
Profiling can be used for age assurance, for example, through monitoring aspects of a user’s vocabulary and interests to identify potentially under-age users. You can also use profiling as an age estimation method in itself. However, you must consider the confidence you can have in the age inferences gathered, and the fairness and accuracy of any AI system you use to make them. You must show that it is proportionate to the risks to children that it is being used to mitigate.
Profiling data gathered for age assurance must not be used for any incompatible purpose. If profiling for age assurance relies on cookies, such cookies are permissible under the “strictly necessary” exemption found in the Privacy and Electronic Communications Regulations 2003 (PECR). Your use of profiling must be transparent, and you should make sure it is within a person’s reasonable expectations.
Further reading
Please see our guidance on automated decision making and profiling.
Further information about cookies and similar technologies is available here.
6.3.3 Age assurance and discrimination
Age assurance may produce discriminatory outcomes. The risk of discrimination may be heightened for people with protected characteristics, such as age, race and disability, in a way that would affect the fairness of the processing. If you fail to address bias, you may breach the fairness principle.
Age verification usually depends on the user having ready access to official documents or a credit history. Young adults and people from disadvantaged backgrounds (in which disabled people and those from ethnic minority backgrounds are over-represented) may have lower rates of access to a driving licence or passport, and so be unable to access an information society service (ISS) using only age verification.
Age estimation may carry risks from algorithmic bias. Systems based on biometrics, such as voice or facial structure, may not perform as well for people with darker skin tones, or those with medical conditions or disabilities that affect physical appearance, creating discrimination and bias risks. Age estimation technology is advancing rapidly, allowing some providers to significantly reduce the bias in their systems. You must review the efficacy and accuracy rates of any system you plan to use.
Discriminatory outcomes may also breach both the Equality Act 2010 (or the applicable equality legislation in Northern Ireland) and the UK GDPR, since processing with discriminatory outcomes is unlikely to be fair. You must consider these risks. You must ensure that your age assurance solution incorporates reasonable adjustments for disabled people, such as offering alternative methods of age assurance. You should have an accessible process for users to challenge an incorrect age assurance decision.
Further reading
Please see our guidance about fairness, bias and discrimination for further information.
6.3.4 Statistical accuracy
In general, the output of AI processing amounts to a statistically informed guess rather than a confirmed fact. In age estimation solutions, an algorithm provides an estimate of age within a range, while in an age verification solution an algorithm may make a decision that links someone to an official source that verifies their age. It is important to remember that no algorithm is 100% statistically accurate all of the time.
You must ensure that any age assurance system is sufficiently statistically accurate and avoids unjust discrimination. You should decide and document your minimum success criteria for statistical accuracy at the initial business requirements and design phase. Different age assurance methods perform with varying levels of statistical accuracy for different age groups. Your due diligence should extend to systems provided or operated by third parties.
You should test your AI system against these criteria at each stage of the lifecycle. This includes post-deployment monitoring, including for emergent bias.
You may require trade-offs in the design of the AI system. To use a simplified example, there is a balance between precision (“how sure we are that someone has been correctly classified as under 18”) and recall (“how sure we are that we have identified all of the under 18s trying to use a platform or service”). Increasing precision means a greater risk of missing some underage users, whereas increasing recall means more adults will be wrongly classified as underage. The correct balance depends on the circumstances, risks and harms you identify.
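The toy calculation below makes the trade-off concrete using invented confusion matrix counts for an “under 18” classifier. The numbers have no significance beyond showing how the two measures pull in opposite directions as a decision threshold moves.

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """tp: under-18s correctly flagged; fp: adults wrongly flagged;
    fn: under-18s missed. Returns (precision, recall)."""
    return tp / (tp + fp), tp / (tp + fn)

# A stricter threshold flags fewer people: precision rises, recall falls.
print(precision_recall(tp=80, fp=5, fn=20))  # approx (0.94, 0.80)
# A looser threshold flags more people: recall rises, precision falls.
print(precision_recall(tp=95, fp=40, fn=5))  # approx (0.70, 0.95)
```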
6.3.5 Algorithmic fairness
Algorithmic fairness is a term for a range of techniques that can address the risks of an AI model treating people in a way that could be discriminatory.
An AI system is only as good as the information used to train or tune it. There are numerous real-world examples where discriminatory outcomes result from algorithms that are trained on information that does not properly represent the population they will be applied to. Usually, the worst effects of such discrimination fall on groups who are already marginalised or at greater risk of harm.
We have said in our AI guidance that AI systems are less accurate for outliers because, by definition, they represent a minority in the training data; this leaves those people more exposed to risks. When choosing an AI system, you should ensure that algorithms are trained using high-quality, diverse and relevant data sets. Our guidance on AI and data protection sets out ways in which developers can mitigate biased, discriminatory, or otherwise unfair outcomes resulting from automated decision-making.
You should consider capture bias. This is where the device that captures information does so inaccurately. For example, a camera used in poor lighting conditions may produce a photograph of the user that is not of good enough quality for accurate age estimation.
You should consider what kind of algorithmic fairness measures would be appropriate for your chosen system. While a statistical approach to fairness can be helpful in identifying discriminatory impacts, it will only address some of the issues you must consider to comply with the fairness principle. This is because the concept of data protection fairness covers issues beyond statistical accuracy.
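As a simple illustration of a statistical approach, the sketch below compares error rates across demographic groups using invented evaluation counts. A large gap between groups flags a potential bias issue for investigation, although, as noted above, passing such a check does not by itself satisfy the fairness principle.

```python
# Hypothetical evaluation counts per demographic group.
groups = {
    "group_a": {"errors": 30, "total": 1000},
    "group_b": {"errors": 90, "total": 1000},
}

rates = {name: g["errors"] / g["total"] for name, g in groups.items()}
print(rates)  # {'group_a': 0.03, 'group_b': 0.09}

# Flag if one group's error rate is more than twice another's; the
# factor of 2 is arbitrary and chosen only for illustration.
if max(rates.values()) > 2 * min(rates.values()):
    print("Disparity found: investigate for potential bias")
```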