
Task 6: Consider how to build and present your explanation


At a glance

  • To build an explanation, you should start by gathering the information you obtained when completing Tasks 1-4. You should review this information and determine how it provides an evidence base for process-based or outcome-based explanations.
  • You should then revisit the contextual factors to establish which explanation types you should prioritise.
  • How you present your explanation depends on the way you make AI-assisted decisions, and on how people might expect you to deliver explanations you make without using AI.
  • You can ‘layer’ your explanation by proactively providing individuals first with the explanations you have prioritised, and making additional explanations available in further layers. This helps to avoid information (or explanation) overload.
  • You should think of delivering your explanation as a conversation, rather than a one-way process. People should be able to discuss a decision with a competent human being.
  • Providing your explanation at the right time is also important.
  • To increase trust and awareness of your use of AI, you can proactively engage with your customers by making information available about how you use AI systems to help you make decisions.

Checklist

 We have gathered the information collected in Tasks 1-4 and reviewed how it fits within the process-based and outcome-based explanations introduced in Part 1.

 We have considered the contextual factors and how these will affect the order in which we deliver the explanation types, and our delivery method.

 We have presented our explanation in a layered way, giving the most relevant explanation type(s) upfront, and providing the other types in additional layers.

 We have made it clear how decision recipients can contact us if they would like to discuss the AI-assisted decision with a human being.

 We have provided the decision recipient with the relevant process-based explanations, and any outcome-based explanations we are able to give, in advance of making a decision.

 We have proactively made information about our use of AI available in order to build trust with our customers and stakeholders.

In more detail

Introduction

Before you are able to provide an explanation to an individual, you need to consider how to build and present the information in a clear, accessible manner.

You should start by considering the information you obtained when completing Tasks 1-4, and determine how much of it the decision recipient requires. You should consider both process-based and outcome-based explanations as part of this step. You should also consider which explanation types to prioritise; revisiting the contextual factors introduced in Part 1 of this guidance can help you with this.

You should determine the most appropriate method of delivery based on the way you make AI-assisted decisions about people, and how they might expect you to deliver explanations of decisions you make without using AI. This might be verbal and face to face, in hard copy, or in electronic format. Think about any reasonable adjustments you might need to make for people under the Equality Act 2010. The timing of delivery will also affect how you deliver the explanation.

If you deliver explanations in hard-copy or electronic form, you may also wish to consider whether there are design choices that can make what you’re telling people clearer and easier to understand. For example, in addition to text, simple graphs and diagrams may help with certain explanations such as rationale and safety and performance. Depending on the size and resources of your organisation, you may be able to draw on the expertise of user experience and user interface designers.

Gather relevant information for each explanation type

When completing Tasks 1-4 above, you should have documented the steps you took and any other information you require to deliver an explanation to a decision recipient. You can then use this information to build your explanation ready for delivery.

Under Task 1 you should have identified the most relevant explanation types for your use case, taking account of the contextual factors. Part 1 of this guidance sets out the kinds of information that you need to extract to support each explanation type, including information about the process (eg the data used to train the model) and the outcome (eg how other people in a similar position were treated in comparison to the decision recipient).

Task 2 discusses how to take account of explanation types in collecting and pre-processing the data.

Task 3 sets out key issues related to how to extract the information needed for the relevant explanation types (especially rationale explanation) from your AI model.

Task 4 focuses on translating the rationale of the AI system into plain-language, context-sensitive, and understandable terms, but this can also yield information to support other explanation types.

Having followed these tasks you should then be ready to consider how to present your explanation to the decision recipient.

Consider contextual factors in delivering an explanation

When building an explanation, you should revisit the contextual factors introduced in Part 1 of this guidance:  

  • Domain factor
  • Impact factor
  • Data factor
  • Urgency factor
  • Audience factor

Although these are relevant throughout the design, development and deployment of the system, you should consider them in detail when you are deciding how to build and present your explanation.

The domain factor is important because the domain, or sector, you are operating in will affect the type of explanation your decision recipients want to receive. There may also be legislation specific to your sector that dictates how you deliver an explanation.

The impact factor is important, because the impact a decision has on an individual or society will determine the level of information required in an explanation. For example, if the decision has a significant impact on an individual, you may need to provide a more detailed explanation than if the impact was low.

The data factor helps you and the decision recipient understand both how the model has been trained and what data has been used to make a decision. The type of data that was processed may affect how you deliver the explanation, as there may be some circumstances where the explanation provided gives the decision recipient information on how to affect the outcome in the future. 

The urgency factor helps you determine how quickly to provide an explanation, and in what order you should provide the different explanation types.

The audience factor is about the level of understanding you would expect the decision recipient to have of the subject matter of the decision. This will affect the type of language you use when delivering your explanation.
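
To illustrate how you might record these factors and use them to order your explanation types, here is a minimal sketch in Python. The five factor names come from Part 1 of this guidance, but the data structure, example values and ordering rules are hypothetical assumptions for illustration only; your own prioritisation should follow from your assessment of the factors, not from this sketch.

```python
# Illustrative sketch only: the five contextual factors are from the guidance,
# but this data structure and the prioritisation rules are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ContextualAssessment:
    """Records how each contextual factor applies to one AI-assisted decision."""
    domain: str      # sector you operate in, eg "healthcare" or "credit"
    impact: str      # "low", "medium" or "high" impact on the individual
    data: str        # nature of the data used, eg "biophysical" or "behavioural"
    urgency: str     # how quickly the recipient needs the explanation
    audience: str    # expected level of understanding of the decision recipient
    notes: dict = field(default_factory=dict)


def prioritise_explanations(assessment: ContextualAssessment) -> list[str]:
    """Return explanation types in the order they will be presented.

    The ordering below is a placeholder; replace it with the priorities that
    your own assessment of the contextual factors supports.
    """
    priorities = ["rationale", "responsibility"]      # a common starting point
    if assessment.impact == "high":
        priorities += ["impact", "fairness"]
    priorities += ["safety and performance", "data", "impact", "fairness"]
    # Drop duplicates while keeping the first occurrence of each type.
    return list(dict.fromkeys(priorities))
```

With a high-impact assessment, for example, this sketch returns rationale and responsibility first, followed by impact and fairness, reflecting the greater detail a high-impact decision calls for.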

Layer explanations

Based on the guidance we’ve provided above, and engagement with industry, we think it makes sense to build a ‘layered’ explanation.

By layered we mean providing individuals with the prioritised explanations (the first layer), and making the additional explanations available on a second, and possibly third, layer. If you deliver your explanation on a website, you can use expanding sections, tabs, or simply link to webpages with the additional explanations.

The purpose of this layered approach is to avoid information (or explanation) fatigue. It means you won’t overload people. Instead, they are provided with what is likely to be the most relevant and important information, while still having clear and easy access to other explanatory information, should they wish to know more about the AI decision.
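
As a simple sketch of that structure, the Python below represents a layered explanation as an ordered set of layers, with only the first shown proactively and the others available behind an expanding section, tab or link. The layer contents, field names and groupings are illustrative assumptions, not something this guidance prescribes; which explanations sit in the first layer should follow from your prioritisation.

```python
# Hedged sketch of a layered explanation. Which explanation types belong in
# which layer is a placeholder here; it should follow your own prioritisation.
layered_explanation = {
    "layer_1": {  # prioritised explanations, presented up front
        "rationale": "Why the system produced this result, in plain language...",
        "responsibility": "Who reviewed the result and who you can contact...",
    },
    "layer_2": {  # behind an expanding section, tab or linked page
        "fairness": "How we checked the result is consistent with similar cases...",
        "safety and performance": "How the system was tested and is monitored...",
    },
    "layer_3": {  # further detail for those who want to know more
        "data": "What data trained the system and was used in this decision...",
        "impact": "How we assessed the system's effects on individuals...",
    },
}


def render_first_layer(explanation: dict) -> dict:
    """Return only the prioritised explanations to present proactively;
    the remaining layers stay available on request."""
    return explanation["layer_1"]
```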

Explanation as a dialogue

However you choose to deliver your explanations to individuals, it is important to think of it as a conversation as opposed to a one-way process. By providing the priority explanations, you are then initiating a conversation, not ending it. Individuals should not only have easy access to additional explanatory information (hence layered explanations), but they should also be able to discuss the AI-assisted decision with a human being. This ties in with the responsibility explanation and having a human reviewer. However, as well as being able to contest decisions, it’s important to provide a way for people to talk about and clarify explanations with a competent human being.

Explanation timing

It is important to provide explanations of AI-assisted decisions to individuals at the right time.

Delivering an explanation is not just about telling people the result of an AI decision. It is equally about telling people, in advance, how decisions are made.

What explanation can we provide in advance?

In Part 1 we provided two categories for each type of explanation: process-based and outcome-based.

You can provide the process-based explanations in advance of a specific decision. In addition, there will be some outcome-based explanations that you can provide in advance, particularly those related to:

  • Responsibility - who is responsible for taking the decision that is supported by the result of the AI system, for reviewing and for implementing it;
  • Impact – how you have assessed the potential impact of the model on the individual and the wider community; and
  • Data - what data was input into the AI system to train, test, and validate it.

There will also be some situations when you can provide the same explanation in advance of a decision as you would afterwards. This is because in some sectors it is possible to run a simulation of the model’s output. For example, if you applied for a loan, some organisations could explain the computation and tell you which factors matter in determining whether or not your application will be accepted. In cases like this, the distinction between explanations before and after a decision is less important. However, in many situations this won’t be the case.

What should we do?

After you have prioritised the explanations (see Task 1), you should provide the relevant process-based explanations before the decision, and the outcome-based explanations if you are able to.

What explanation can we provide after a decision?

You can provide the full explanation after the decision; however, there are some outcome-based explanations that you will not have been able to provide in advance, such as the rationale, fairness, and safety and performance of the system, which are specific to a particular decision and are likely to be queried after it has been made. These explain the underlying logic of the system that led to the specific decision or output, whether the decision recipient was treated fairly compared with others who were similar, and whether the system functioned properly in that particular instance.
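
To summarise the timing split set out above, the sketch below groups explanation components by when they can be delivered. The grouping mirrors the preceding paragraphs, but the constant and function names are hypothetical, and which outcome-based explanations you can genuinely provide in advance will depend on your own use case.

```python
# Hedged sketch of the before/after split described above; names are
# illustrative and the advance outcome-based list depends on your use case.
EXPLANATIONS_IN_ADVANCE = {
    "process_based": [
        "responsibility", "rationale", "fairness",
        "safety and performance", "impact", "data",
    ],
    # Outcome-based components that can usually be given before the decision.
    "outcome_based": ["responsibility", "impact", "data"],
}

EXPLANATIONS_AFTER_DECISION = {
    # Outcome-based components specific to the particular decision.
    "outcome_based": ["rationale", "fairness", "safety and performance"],
}


def explanations_for_stage(stage: str) -> dict:
    """Return the explanation components deliverable at a given stage:
    'before' or 'after' the decision."""
    return EXPLANATIONS_IN_ADVANCE if stage == "before" else EXPLANATIONS_AFTER_DECISION
```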

Example

In this example, clinicians are using an AI system to help them detect cancer.


Example: Explanations in health care - cancer diagnosis
Before decision

Process-based explanation

Responsibility – who is responsible for ensuring the AI system used in detecting cancer works in the intended way.

Rationale – what steps you have taken to ensure that the components or measurements used in the model make sense for detecting cancer and can be made understandable to affected patients.

Fairness – what measures you have taken to ensure the model is fair, prevents discrimination and mitigates bias (in this case, this may include measures taken to mitigate unbalanced, unrepresentative datasets or possible selection biases).

Safety and performance – what measures you have taken to ensure the model chosen to detect cancer is secure, accurate, reliable and robust, and how it has been tested, verified and validated.

Impact – what measures you have taken to ensure that the AI model does not negatively impact the patient in how it has been designed or used.

Data – how you have ensured that the source(s), quantity, and quality of the data used to train the system is appropriate for the type(s) of cancer detection for which you are utilising your model.

Outcome-based explanation

Responsibility – who is responsible for making the diagnosis supported by the AI system’s output, for implementing it, and for providing an explanation of how the diagnosis came about, and who the patient can go to in order to query the diagnosis.

Impact – how the design and use of the AI system in the particular case of the patient will impact the patient. For example, if the system detects cancer but the result is a false positive, this could have a significant impact on the mental health of the patient.

Data – the patient’s data that will be used in this particular instance.

After decision

Outcome-based explanation

Rationale – whether the AI system’s output (ie what it has detected as being cancerous or not) makes sense in the case of the patient, given the doctor’s domain knowledge.

Fairness – whether the model has produced results consistent with those it has produced for other patients with similar characteristics.

Safety and performance – how secure, accurate, reliable and robust the AI model has been in the patient’s particular case, and which safety and performance measures were used to test this.

Why is this important?

Not only is this a good way to provide an explanation to an individual when they might need it, it is also a way to comply with the law.

Articles 13-14 of the GDPR require that you proactively provide individuals with ‘…meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject…’ in the case of solely automated decisions with a legal or similarly significant effect.

Article 15 of the GDPR also gives individuals a right to obtain this information at any time on request.

This is also good practice for systems where there is a ‘human in the loop’.

The process-based and outcome-based explanations about the rationale of the AI system, and the outcome-based explanation about the AI system’s impact on the individual, fulfil this requirement of the GDPR.

It is up to you to determine the most appropriate way to deliver the explanations you choose to provide.

However, you might consider what the most direct and helpful way would be to deliver explanations that you can provide in advance of a decision. You should consider where individuals are most likely to go to find an explanation or information on how you make decisions with support of AI systems.

You should consider providing any advance explanation on the same platform you will use to deliver the ultimate decision. This means that the information an individual needs is in one place. You should also ensure that the explanation is prominent, to make it easier for individuals to find.

Proactive engagement

How can we build trust?

Proactively making information available about how you use AI systems to help you make decisions is a good way to increase awareness among your customers. This will help them know more about when and why you use an AI system and how it works.

By being open and inclusive in how you share this information, you can increase the trust your customers have in how you operate, and build confidence in your organisation using AI to help them get a better service.

In the primary research we conducted, we found that the public is looking for more engagement and awareness-raising from organisations about how they use AI for decision-making. By being proactive, you can use this engagement to help you fulfil the principle of being transparent.

What should we proactively share?

Among the things you could consider sharing are the following:

  • What is AI?

This helps to demystify the technologies involved. It might be useful to outline these technologies, and provide a couple of examples of where AI is used in your sector.

Further reading

A good example is this animation about machine learning produced by researchers at the University of Oxford. ‘What is Machine Learning?’ animation

  • How can it be used for decision-making?

This should outline the different ways AI is useful for supporting decision-making – this tells people what the tools do. You could provide some examples of how you use it to help you make decisions.

  • What are the benefits?

This should lay out how AI can be beneficial, specifically for the individuals that are affected by the decisions you make. For example, if you are a service provider, you can outline how it can personalise your services so that your customers can get a better experience. The benefits you cover could also explore ways that the AI tools available can be better than more traditional decision-support tools. Examples could help you to make this clear.

  • What are the risks?

You should be honest about how AI can go wrong in your sector, for example how it can lead to discrimination or misinformation, and how you will mitigate this. This helps to set people’s expectations about what AI can do in their situation, and helps them understand what you will do to look after them.

You should also provide information about people’s rights under the GDPR, for example the right to object or challenge the use of AI, and the right to obtain human review or intervention.

  • Why do we use AI for decisions?

This should clearly and comprehensively explain why you have chosen to use AI systems in your particular organisation. It should expand on the more general examples you have provided above for how it improves the service you offer compared with other approaches (if applicable), and what the benefits are for your customers.

  • Where/when do we do this?

Here you can describe which parts of your organisation and in which parts of the decision-making process you are using AI. You should make this as informative as possible. You could also outline what measures you have put in place to ensure that the AI system you are using in each of these areas is designed in a way to maximise the benefits and minimise the risks. In particular, you should be clear about whether there is a ‘human in the loop’ or whether the AI is solely automated. In addition, it might be helpful to show how you are managing the system’s use to make sure it is maximising the interests of your customers.

  • Who can individuals speak to about it?

You could provide an email address or helpline for interested members of the public to contact in order to get more information on how you are using AI. Those answering these queries should have good knowledge of AI and how you are using it, and be able to explain it in a clear, open and accessible way. The amount of detail you provide should be proportionate to the information people ask for.

How should we share this?

There are many different ways you could proactively share information with your customers and stakeholders:

  • Your usual communications to customers and stakeholders, such as regular newsletters or customer information.
  • Providing a link to a dedicated part of your website outlining the sections above.
  • Flyers and leaflets distributed in your offices and to those of other relevant or partner organisations.
  • An information campaign or other initiative in partnership with other organisations.
  • Information you distribute through trade bodies.

Your communications team will have an important role to play in making sure the information is targeted and relevant to your customers.

Further reading

The ICO has written guidance on the right to be informed, which will help you with this communication task.

Guidance on the right to be informed (GDPR)

Example

In Annexe 1 we provide an example showing how all of the above tasks could relate to a particular case in the health sector.