
What is content moderation and how does it use personal information?


In detail

What do you mean by content moderation?

We use the term ‘content moderation’ to describe:

  • the analysis of user-generated content to assess whether it meets certain standards; and
  • any action a service takes as a result of this analysis. For example, removing the content or banning a user from accessing the service.

You may carry out content moderation for a range of purposes, including meeting your obligations under the Online Safety Act 2023 (OSA) or enforcing your terms of service.

In this guidance, we use the term ‘content policies’ to describe the rules you set out in your terms of service that specify:

  • what content you do not allow on your service; and
  • how you deal with certain types of content.

Content policies usually prohibit content that is illegal, as well as content that you deem to be harmful or undesirable.

When we discuss ‘moderation action’ in this guidance, we are referring to action you take on a piece of content or a user’s account after you’ve analysed the content. This may be because you are:

  • required to take action to comply with your duties under the OSA; or
  • enforcing your content policies.

For example:

  • Content removal – you may remove content from your service (or prevent it from being published, if moderation is taking place pre-publication).
  • Service bans – you may ban users from accessing your service, either temporarily or permanently. You may operate a ‘strike’ system that records content policy violations by a user and enforces a ban when a user reaches a certain number of strikes (illustrated in the sketch after this list).
  • Feature blocking – you may restrict a user’s access to certain features of your service, either temporarily or permanently. For example, you may block users from posting content, or from commenting on content posted by others, while still allowing them to use the other features of the service as normal.
  • Visibility reduction – a range of actions you may take to reduce the visibility of content. For example, you may prevent content from being recommended or make content appear less prominently in users’ news feeds.
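
The ‘strike’ approach mentioned above can be illustrated with a short sketch. The following Python example is illustrative only: the thresholds, field names and statuses are assumptions made for the sketch, not values set out in this guidance.

from dataclasses import dataclass

# Illustrative thresholds - assumptions for this sketch, not recommended values.
TEMPORARY_BAN_STRIKES = 3
PERMANENT_BAN_STRIKES = 5

@dataclass
class UserRecord:
    user_id: str
    strikes: int = 0
    status: str = "active"  # "active", "temporary_ban" or "permanent_ban"

def record_violation(user: UserRecord) -> None:
    """Record one content policy violation and apply any resulting service ban."""
    user.strikes += 1
    if user.strikes >= PERMANENT_BAN_STRIKES:
        user.status = "permanent_ban"
    elif user.strikes >= TEMPORARY_BAN_STRIKES:
        user.status = "temporary_ban"

# Example: a user with two existing strikes receives a third and is temporarily banned.
user = UserRecord(user_id="u-123", strikes=2)
record_violation(user)
assert user.status == "temporary_ban"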

What are the key stages in content moderation?

These are the key stages a content moderation workflow may involve:

  • Database matching – an automated analysis of content to check whether it matches an internal or external database of known prohibited content. For example, hash matching is a type of database matching technology that is commonly used for detecting exact or close matches of known child sexual abuse material (CSAM).
  • Content classification – an automated analysis of content to assess whether it is likely to breach a service’s content policies. This often uses artificial intelligence (AI) based technologies. Classification systems may assign a degree of confidence to their assessment of a piece of content. Services may decide that if content reaches a particular confidence threshold, it can be automatically actioned (eg removed) or queued for human review. (The sketch after this list illustrates how database matching and threshold-based classification might fit together.)
  • Human review – human moderators reviewing content against a service’s content policies. This includes content that has been flagged by an automated tool and content that has been reported by other people, such as other users or third parties.
  • Moderation action – taking action on a piece of content (eg removing it from the service) or a user’s account (eg banning the user from the service). (See the section on ‘What do you mean by content moderation?’ for more information.)
  • Appeals and restorations – the processes that allow users to challenge moderation decisions. Typically this involves human review of the content and the moderation decision.
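
As an illustration of how the first two automated stages might fit together, the minimal Python sketch below shows database matching followed by threshold-based classification. The hash set, confidence values and thresholds are hypothetical placeholders, and real database matching for CSAM typically relies on perceptual hashing to catch close matches rather than a simple cryptographic hash.

import hashlib

# Hypothetical set of hashes of known prohibited content (placeholder value only).
KNOWN_PROHIBITED_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

AUTO_ACTION_THRESHOLD = 0.90   # illustrative values, not recommendations
HUMAN_REVIEW_THRESHOLD = 0.50

def moderate(content: bytes, classifier_confidence: float) -> str:
    """Return an illustrative outcome: 'remove', 'human_review' or 'allow'.

    classifier_confidence stands in for the output of an AI-based classifier:
    the estimated likelihood that the content breaches the content policies.
    """
    # Stage 1: database matching - check for an exact match against known content.
    if hashlib.sha256(content).hexdigest() in KNOWN_PROHIBITED_HASHES:
        return "remove"

    # Stage 2: content classification - act automatically above one threshold,
    # queue for human review above a lower one, otherwise take no action.
    if classifier_confidence >= AUTO_ACTION_THRESHOLD:
        return "remove"
    if classifier_confidence >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"
    return "allow"

# Example: a borderline classification is routed to human moderators.
print(moderate(b"example post", classifier_confidence=0.7))  # human_review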

However, there are different approaches to content moderation and the stages may differ. This means you may have configured your systems to:

  • only follow some of these stages. For example, you may not carry out human review of all content before deciding to take moderation action;
  • rely on users or third parties to report content for human review, rather than using database matching or content classification;
  • take moderation action before or after you publish the content; or
  • make decisions automatically using content moderation technology, or by a human moderator, or both. (See the section on ‘What if we use automated decision-making in our content moderation?’ for more information.)

Sometimes you may be able to make decisions about content based solely on the content itself. However, you may also need to look at other personal information associated with a user’s account. (See the section below on ‘What personal information does content moderation involve?’ for more details.)

How might third parties be involved in content moderation?

You may use third-party content moderation providers to assist you with a range of processes:

  • Third-party technology – they can provide you with a range of automated moderation technologies. They may have expertise in moderating specific formats or categories of content.
  • Third-party human moderation – they can provide human moderators to review content that has been flagged as potentially violating your content policies. They may also manage other processes, such as user appeals.

If you use third-party providers, you must make sure that all parties clearly understand whether they are acting as a controller, joint controller or processor. (See the section on ‘Who is the controller in our content moderation systems?’ for more information.)

What personal information does content moderation involve?

Personal information means information that relates to an identified or identifiable person.

This doesn’t just mean someone’s name. If you can distinguish a person from other people, then they are ‘identified’ or ‘identifiable’. Examples of information that may identify someone include an IP address or an online identifier.

Content moderation systems involve processing people’s personal information at all stages of the workflow.

In most cases, user-generated content is likely to be personal information in your moderation systems. This can be because:

  • it’s obviously about someone. For example, if the content contains information that is clearly about a particular user of your service; or
  • it’s connected to other information, making someone identifiable. For example, the account profile of the user who uploaded it, which may include information such as their name, online username and registration information (eg email address).

Content moderation may also involve using personal information that is linked to the content or a user’s account. For example, a user’s age, location, previous activity on the service, or a profile of their interests and interactions.

You must be able to demonstrate that using this kind of personal information in your content moderation is necessary for your purposes and complies with data protection law.

What do we do if we use pseudonymised personal information in our content moderation?

In some content moderation systems, you may be able to separate the content from information you hold about the user who uploaded it. You may do this to analyse the content at the database matching and content classification stages.

The information you analyse is not truly anonymised in these circumstances. Instead, you are processing pseudonymised personal information.

Pseudonymisation has a specific meaning in data protection law. This may differ from how the term is used in other circumstances, industries or sectors.

The UK GDPR defines pseudonymisation as:

“…processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person.”

Pseudonymisation therefore refers to techniques that replace, remove or transform information that identifies a person. For example, replacing one or more identifiers which are easily attributed to a person (eg names) with a pseudonym (eg a reference number or job ID).

While you can link that pseudonym back to the person if you have access to the additional information, you must hold this additional information separately. You must also have technical and organisational measures in place to ensure that the personal information is not attributed to an identified or identifiable person.

Although you cannot directly identify people from pseudonymised information, you can identify them by referring to other information you hold separately. Therefore, any information you have pseudonymised remains personal information and you must comply with data protection law when processing it.
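
A minimal Python sketch of this kind of pseudonymisation is shown below. The structure, field names and use of a random reference number are assumptions for illustration only; in practice, the lookup table (the ‘additional information’) must be held separately and protected by appropriate technical and organisational measures.

import secrets

# Lookup table linking reference numbers back to identifying details. This is the
# 'additional information': it must be held separately from the pseudonymised
# records, with controls restricting who can use it.
reidentification_table: dict[str, dict] = {}

def pseudonymise(record: dict) -> dict:
    """Replace direct identifiers (name, account handle) with a reference number."""
    reference = secrets.token_hex(8)
    reidentification_table[reference] = {
        "name": record["name"],
        "handle": record["handle"],
    }
    return {"reference": reference, "content": record["content"]}

# The team analysing content only ever sees the pseudonymised record.
post = {"name": "Jane Example", "handle": "@jane", "content": "gameplay clip"}
print(pseudonymise(post))

Because the service still holds the lookup table and can link each reference number back to a user, the pseudonymised records remain personal information.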

Example

An online video gaming service hosts a forum where users can discuss video games and share pictures and videos. The content uploaded by users includes discussion about their experience playing games and images and videos of their gameplay.

The service employs a team of moderators to ensure that the forum posts don’t violate its content policies. This involves the moderation team processing the forum users’ personal information.

The service has another team that is responsible for scanning the content that users upload to the forum. This team doesn’t need to identify individual users, as its purpose relates to the content specifically.

The service ensures that this second team doesn’t receive information that can identify individual users. It pseudonymises this information by replacing the users’ names and account handles with a reference number. This helps to reduce risks and improve security, while still enabling the service to fulfil this particular purpose.

The service is still processing personal information in both cases, even if it applies technical and organisational measures to ensure the second team can’t identify people without the additional information. This is because the organisation can, as the controller, link that material back to the identified users.

Do content moderation systems use special category information?

Content moderation systems may process special category information about people. This means personal information about a person’s:

  • race;
  • ethnic origin;
  • political opinions;
  • religious or philosophical beliefs;
  • trade union membership;
  • genetic data;
  • biometric data (where this is used for identification purposes);
  • health data;
  • sex life; or
  • sexual orientation.

The UK GDPR is clear that special category information includes not only personal information that specifies relevant details, but also personal information ‘revealing’ or ‘concerning’ these details.

Content moderation may involve processing special category information both directly and by inference. This could be if:

  • you use special category information about users to support the decisions you make about their content (eg because it provides additional context);
  • you are intentionally inferring details that fall within the special categories of information to inform the outcome of your content moderation (see the example in the box below); or
  • the content you are moderating includes special category information about users. For example, posts on an online forum where identifiable users are directly expressing their political views. If you are moderating user-generated content that contains this information, then you are processing special category information regardless of whether you intend to.

You must ask yourself whether you are likely to use any special category information (including inferences) to influence or support your activities in any way. If so, then you are processing special category information. Special category information needs more protection because it is sensitive. You must identify a condition for processing, in addition to a lawful basis, if you are processing it, either because you’ve planned to or because it’s contained within the content. (See the section on ‘How do we carry out content moderation lawfully?’ for more information.)

In some cases, you may be able to infer details about someone that fall into the special categories of information, even though you do not intend to make those inferences. For example, a human moderator reviewing images and videos of people wearing certain clothing may be able to infer that they belong to a particular religious group, even if the content does not specify that information directly.

You are not processing special category information if you do not:

  • process the content with the purpose of inferring special category information; nor
  • intend to treat people differently on the basis of an inference linked to one of the special categories of information.

However, as above, you are processing special category information if you intentionally use those inferences to inform your moderation. This is the case regardless of whether that inference is correct.

Example

A service deploys a content moderation system that analyses user-generated pictures and videos when they are uploaded, in order to detect content that is promoting self-harm or suicide.

If the analysis finds the content is likely to be promoting suicide or self-harm, it is referred to a human moderator for review. In some cases, where the moderator deems the user to be in immediate danger, the service shares the user’s information with the emergency services.

This system is processing special category information (ie health-related information indicating whether the person is at risk of self-harm or suicide). It is making a special category inference at the analysis stage, and sharing that special category personal information with the emergency services.

The service therefore needs to identify a lawful basis and a valid condition for processing, at both the analysis and referral stages.


Is criminal offence information a relevant consideration?

The UK GDPR gives extra protection to personal information relating to criminal convictions and offences or related security measures.

This includes personal information ‘relating to’ criminal convictions and offences. For example, it can cover suspicion or allegations of criminal activity. In this guidance, we refer to this information collectively as ‘criminal offence information’, although this is not a term used in the UK GDPR.

You must assess whether you are processing criminal offence information as part of your content moderation.

Under the OSA, services must take down content where they judge there to be “reasonable grounds to infer” it is illegal; section 192 requires them to make this judgement using “reasonably available” information. Ofcom has produced draft guidance on how services can make illegal content judgements for the purposes of the takedown duty, the risk assessment duty and the safety duties more generally.

Whether you are processing criminal offence information about a person when you make an illegal content judgement under the OSA depends on the specific circumstances of your processing.

If you are carrying out content moderation that involves processing criminal offence information, you must identify a condition for processing, as well as your lawful basis. (See the section on ‘How do we carry out content moderation lawfully?’ for more information.)

We plan to publish further data protection guidance on reporting child sexual exploitation and abuse (CSEA) content to the National Crime Agency (NCA) under section 66 of the OSA.