Definitions

At a glance

Artificial Intelligence (AI) can be defined in many ways. However, within this guidance, we define it as an umbrella term for a range of algorithm-based technologies that solve complex tasks by carrying out functions that previously required human thinking. Decisions made using AI are either fully automated or made with a ‘human in the loop’. As with any other form of decision-making, those impacted by an AI-supported decision should be able to hold someone accountable for it.

In more detail

What is AI?

AI is an umbrella term for a range of technologies and approaches that often attempt to mimic human thought to solve complex tasks. Things that humans have traditionally done by thinking and reasoning are increasingly being done by, or with the help of, AI.

While AI has existed for some time, recent advances in computing power, coupled with the increasing availability of vast swathes of data, mean that AI designers are able to build systems capable of undertaking these complex tasks.

As information processing power has dramatically increased, it has become possible to expand the number of calculations AI models complete to effectively map a set of inputs onto a set of outputs. This means that the correlations AI models identify and use to produce classifications and predictions have also become more complex and less intrinsically understandable to humans. It is therefore important to consider how and why these systems create the outputs they do.

There are several ways to build AI systems. Each involves the creation of an algorithm that uses data to model some aspect of the world, and then applies this model to new data in order to make predictions about it.

Historically, the creation of these models required considerable amounts of hand-coded expert input. These ‘expert systems’ applied large numbers of rules, taken from domain specialists, to draw inferences from that knowledge base. Though they tended to become more accurate as more rules were added, these systems were expensive to scale, labour-intensive to build, and required significant upkeep. They also often responded poorly to complex situations where the formal rules they used to generate inferences were not flexible enough.
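As a rough illustration of this rule-based style, the sketch below hand-codes a few loan-eligibility rules in Python. The rules, field names and thresholds are all invented for the example; a real expert system would encode many hundreds of such rules.

```python
# A minimal sketch of a hand-coded 'expert system'. Every rule below is an
# invented example written by a human specialist, not learned from data;
# broadening coverage means writing (and maintaining) more rules by hand.

def assess_loan(applicant: dict) -> str:
    """Apply hand-written rules to a loan application."""
    if applicant["income"] < 15_000:
        return "refer"      # rule 1: low income needs manual review
    if applicant["missed_payments"] > 2:
        return "decline"    # rule 2: poor repayment history
    if applicant["loan_amount"] > applicant["income"] * 4:
        return "decline"    # rule 3: loan too large relative to income
    return "approve"        # default when no rule fires

print(assess_loan({"income": 28_000, "missed_payments": 0, "loan_amount": 130_000}))
# -> 'decline' (rule 3 fires). Cases the rule authors never anticipated fall
# through to the default, which is why such systems can respond poorly to
# novel or complex situations.
```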

More recently, data-driven machine learning (ML) models have emerged as the dominant AI technology. These models may be constructed using a few different learning approaches that learn from the past information contained in collected data to identify patterns and hone classificatory and predictive performance. The three main ML approaches are supervised, unsupervised, and reinforcement learning:

Supervised learning models are trained on a dataset that contains labelled data. ‘Learning’ occurs in these models when numerous examples are used to train an algorithm to map input variables (often called features) onto desired outputs (also called target variables or labels). On the basis of these examples, the ML model identifies patterns that link inputs to outputs. It can then reproduce these patterns, employing the rules honed during training to transform new inputs into classifications or predictions.
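To make this concrete, here is a minimal supervised-learning sketch using scikit-learn (one library among many). The features, labels and the choice of a decision-tree model are invented for illustration: each row pairs an applicant’s (income, missed payments) with a label recording whether they repaid a past loan.

```python
# A minimal supervised-learning sketch: labelled examples in, a trained
# model out. All values here are toy data invented for illustration.
from sklearn.tree import DecisionTreeClassifier

X_train = [[30_000, 0], [12_000, 3], [45_000, 1], [9_000, 4]]  # input features
y_train = [1, 0, 1, 0]                 # target labels: 1 = repaid, 0 = defaulted

# 'Learning': the algorithm maps the input variables onto the desired outputs.
model = DecisionTreeClassifier().fit(X_train, y_train)

# The trained model reproduces the learned pattern on new, unseen input.
print(model.predict([[25_000, 1]]))    # -> [1], i.e. predicted to repay
```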

Unsupervised learning models are trained on a dataset without explicit instructions or labelled data. These models identify patterns and structures by measuring the densities or similarities of data points in the dataset. Such models can be used to (a short sketch follows this list):

  • cluster data (grouping similar data together);
  • detect anomalies (flagging inputs that are outliers compared to the rest of the dataset); and
  • associate a data point with other attributes that are typically seen together.
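Here is a minimal clustering sketch, again using scikit-learn; the 2-D points and the choice of two clusters are invented for the example, and no labels are supplied to the model.

```python
# A minimal unsupervised-learning sketch: k-means groups similar data
# points together without ever being told what the groups mean.
from sklearn.cluster import KMeans

points = [[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],   # one dense group
          [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]]   # another dense group

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(model.labels_)   # e.g. [0 0 0 1 1 1] -> similar points clustered together

# A new point far from both cluster centres is, by the same similarity
# measure, a candidate anomaly.
```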

Reinforcement learning models learn on the basis of their interactions with a virtual or real environment rather than existing data. Reinforcement learning ‘agents’ search for an optimal way to complete a task by taking a series of steps that maximise the probability of achieving that task. Depending on the steps they take, they are rewarded or punished. These ‘agents’ are encouraged to choose their steps to maximise their reward. They ‘learn’ from past experiences, improve with multiple iterations of trial and error, and may have long-term strategies to maximise their reward overall rather than looking only at their next step.
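The sketch below illustrates the idea with tabular Q-learning on an invented five-state corridor: the ‘agent’ starts at one end, is rewarded only for reaching the other, and improves its long-term strategy over many iterations of trial and error. Everything here (states, rewards, parameters) is a toy example, not a production algorithm.

```python
# A minimal reinforcement-learning sketch: tabular Q-learning on a toy
# 1-D corridor. The agent learns from interaction, not from existing data.
import random

n_states, actions = 5, [-1, +1]            # states 0..4; move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2      # learning rate, discount, exploration

for _ in range(500):                       # many iterations of trial and error
    state = 0
    while state != 4:                      # episode ends at the goal state
        # explore occasionally; otherwise exploit the best-known action
        action = (random.choice(actions) if random.random() < epsilon
                  else max(actions, key=lambda a: Q[(state, a)]))
        nxt = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if nxt == 4 else 0.0  # 'reward' arrives only at the goal
        # update the estimate of long-term reward for taking this step here
        Q[(state, action)] += alpha * (
            reward + gamma * max(Q[(nxt, a)] for a in actions) - Q[(state, action)]
        )
        state = nxt

# The learned policy typically moves right in every state, even though the
# reward only appears at the end: the agent has learned a long-term strategy.
print([max(actions, key=lambda a: Q[(s, a)]) for s in range(4)])  # -> [1, 1, 1, 1]
```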

While this guidance is applicable to all three of these ML methods, it mainly focuses on supervised learning, the most widely used of the approaches.

What is an AI output or an AI-assisted decision?

The output of an AI model varies depending on what type of model is used and what its purpose is. Generally, there are three main types of outputs:

• a prediction (eg you will not default on a loan);
• a recommendation (eg you would like this news article); or
• a classification (eg this email is spam).

In some cases, an AI system can be fully automated when deployed, if its output and any action taken as a result (the decision) are implemented without any human involvement or oversight.

"" arrow pointing right ""
AI model output   Decision

In other cases, the outputs can be used as part of a wider process in which a human considers the output of the AI model, as well as other information available to them, and then acts (makes a decision) based on this. This is often referred to as having a ‘human in the loop’.

"" "" "" "" "" "" ""
AI model output   Other information   Human consideration   Decision
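As a hedged sketch of the two patterns above (the function names and the threshold are invented for illustration), the fully automated path acts on the model output directly, while the human-in-the-loop path treats it as one input among several for a human reviewer:

```python
# A minimal sketch of the two deployment patterns. 'model_output' stands in
# for a prediction, recommendation or classification from an AI model.

def automated_decision(model_output: float) -> str:
    """Fully automated: the output is acted on with no human involvement."""
    return "approve" if model_output >= 0.8 else "decline"

def human_in_the_loop_decision(model_output: float, other_information: dict,
                               human_review) -> str:
    """The AI output is one input among others; a human makes the decision."""
    return human_review(model_output, other_information)

# Example: a reviewer weighs a borderline model score against other evidence.
decision = human_in_the_loop_decision(
    0.65,
    {"recent_payslips": True},
    human_review=lambda score, info: "approve" if info["recent_payslips"] else "decline",
)
print(decision)   # -> 'approve', a decision a human remains accountable for
```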

We use the term ‘AI decision’ broadly, incorporating all the above. So, an AI decision can be based on a prediction, a recommendation or a classification. It can also refer to a solely automated process, or one in which a human is involved.

Further reading

For more information on what constitutes meaningful human involvement in an AI-assisted decision process, read our guidance on automated decision-making and profiling in the Guide to the GDPR, and the advice on this topic in our draft AI auditing framework.

How is an AI-assisted decision different to one made only by a human?

One of the key differences is who an individual can hold accountable for the decision made about them. When a decision is made directly by a human, it is clear who the individual can go to for an explanation of why that decision was made. Where an AI system is involved, responsibility for the decision can be less clear.

There should be no loss of accountability when a decision is made with the help of, or by, an AI system, rather than solely by a human. Where an individual would expect an explanation from a human, they should instead expect an explanation from those accountable for an AI system.