
Summary of the tasks to undertake


We have set out a number of tasks, both to help you design and deploy appropriately explainable AI systems, and to assist you in clarifying the results these systems produce for a range of affected individuals (from operators, implementers and auditors to decision recipients).

These tasks offer a systematic approach to:

  • developing AI models in an explanation-aware fashion; and
  • selecting, extracting and delivering explanations that are differentiated according to the needs and skills of the different audiences they are directed at.

They should help you navigate the detailed technical recommendations in this part. However, we recognise that in practice some tasks may be concurrent, cyclical, and iterative rather than consecutive or linear, and you may wish to develop your own plan for doing this.

In Annexe 1 we give an example of how these tasks may be carried out in a particular case in the health sector.

1. Select priority explanations by considering the domain, use case and impact on the individual

Start by getting to know the different types of explanation in Part 1 of this guidance. This should help you to separate out the different aspects of an AI-assisted decision that people may want you to explain. While we have identified what we think are the key types of explanation that people will need, there may be additional relevant explanations in the context of your organisation, and the way you use, or plan to use, AI to make decisions about people. Or perhaps some of the explanations we identify are not particularly relevant to your organisation and the people you make decisions about.

That’s absolutely fine. The explanation types we identify are intended to underline the fact that there are many different aspects to explanations, and to get you thinking about what those aspects are, and whether or not they are relevant to your customers. You may think the list we have created works for your organisation or you might want to create your own.

Either way, we recommend that your approach to explaining AI-assisted decisions should be informed by the importance of putting the principles of transparency and accountability into practice, and of paying close attention to context and impact.

Next, think about the specifics of the context you are deploying your AI decision-support system in. Considering the domain you work in, the particular use case and possible impacts of your system on individuals and wider society will further help you choose the relevant explanations. In most cases, it will be useful for you to include rationale and responsibility in your priority explanations.

It is likely that you will identify multiple explanations to prioritise for the AI-assisted decisions you make. Make a list of these and document the justification for your choices.
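If it helps, you can keep this record in a structured form so it is easy to review and audit later. Below is a minimal sketch; the explanation types, priorities and justifications shown are illustrative assumptions, not a prescribed template:

```python
# Illustrative record of priority explanations and the justification for each choice.
# The explanation types, priorities and reasons here are assumptions for the example only.
priority_explanations = [
    {
        "explanation_type": "rationale",
        "priority": 1,
        "justification": "Recipients will want to know why they received this outcome.",
    },
    {
        "explanation_type": "responsibility",
        "priority": 1,
        "justification": "Recipients need to know who to contact to query or contest the decision.",
    },
    {
        "explanation_type": "fairness",
        "priority": 2,
        "justification": "Less frequently requested, but useful for accountability and auditing.",
    },
]
```

Keeping the lower-priority entries in the same record also makes it easier to act on the point below about not discarding the remaining explanations.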

While you have identified the explanations that are most important in the context of your AI decision-support system, this does not mean that you should discard the remaining explanations.

Choosing what to prioritise is not an exact science, and while your choices may reflect what the majority of the people you make decisions about want to know, it’s likely that other individuals will still want and benefit from the explanations you have not prioritised. These will probably also be useful for your own accountability or auditing purposes.

It therefore makes sense to make available all of the explanations you have identified as relevant to the people subject to your AI-assisted decisions. You should consider how to prioritise the remaining explanations based on the contextual factors you identified, and how useful they might be for people.

Speak with colleagues involved in the design or procurement, testing and deployment of AI decision-support systems to get their views. If possible, speak with your customers.

2. Collect and pre-process your data in an explanation-aware manner

How you collect and pre-process the data you use in your chosen model has a bearing on the quality of the explanation you can offer to decision recipients. This task therefore emphasises some of the things you should think about when you are at these stages of your design process, and how this can contribute to the information you provide to individuals for each explanation type.
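One practical way to stay explanation-aware at this stage is to record what each pre-processing step does as you apply it, so the information is available later when you assemble an explanation about the data behind a decision. A minimal sketch, assuming a pandas DataFrame with illustrative column names:

```python
import pandas as pd

# Illustrative data; the column names and values are assumptions for the example only.
df = pd.DataFrame({
    "age": [34, None, 51, 29],
    "income": [28000, 42000, None, 31000],
})

preprocessing_log = []  # record of each step, for use in a later data explanation

# Impute missing values with the median, and log what was done and why.
for column in ["age", "income"]:
    n_missing = int(df[column].isna().sum())
    df[column] = df[column].fillna(df[column].median())
    preprocessing_log.append({
        "step": f"median imputation of '{column}'",
        "records_affected": n_missing,
        "reason": "missing values would otherwise distort the training data",
    })

# The log can later be surfaced as part of the explanation you give to individuals.
for entry in preprocessing_log:
    print(entry)
```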

3. Build your system to ensure you are able to extract relevant information for a range of explanation types

It will be useful to understand the inner workings of your AI system, particularly to be able to comply with certain parts of the GDPR. The model you choose should be at the right level of interpretability for your use case and for the impact it will have on the decision recipient. If you use a ‘black box’ model, make sure the supplementary explanation techniques you use provide a reliable and accurate representation of the system’s behaviour.
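For instance, if you have chosen a less directly interpretable model, a post-hoc technique such as permutation feature importance (available in scikit-learn) can give a supplementary view of which inputs drive the model's behaviour. A minimal sketch, using synthetic data purely for illustration; you would still need to check that any such technique reliably reflects your own system:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data stands in for your own dataset; the feature names are illustrative.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A less directly interpretable ('black box') model used to support the decision.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Supplementary explanation: how much does shuffling each feature degrade performance?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {importance:.3f}")
```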

4. Translate the rationale of your system’s results into usable and easily understandable reasons

You should determine how you are going to convey your model’s statistical results to users and decision recipients as understandable reasons.

A central part of delivering an explanation is communicating how the statistical inferences, which were the basis for your model’s output, played a part in your thinking. This involves translating the mathematical rationale of the explanation extraction tools into easily understandable language to justify the outcome.

For example, if your extracted rationale explanation provides you with:

  • information about the relative importance of features that influence your model’s results; and
  • a more global understanding of how this specific decision fits with the model’s linear and monotonic constraints,

you should then translate these factors into simple, everyday language that non-technical stakeholders can understand. Transforming your model’s logic from quantitative rationale into intuitive reasons means presenting the information as clearly and meaningfully as possible. You could do this through textual clarification, visualisations, graphical representations, summary tables, or any combination of these.

The main thing is to make sure that there is a simple way to describe or explain the result to an individual. If the decision is fully automated, you may use software to do this. Otherwise, a person responsible for translating the result (the implementer – see below) will do it.
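As an illustration of what that translation step might look like in software, the sketch below turns numeric feature contributions into everyday sentences. The feature names, scores and wording are assumptions for the example, not a prescribed template:

```python
# Hypothetical feature contributions extracted for one decision (e.g. from a
# rationale-explanation tool). The names and values are illustrative only.
feature_contributions = {
    "number of missed repayments": 0.45,
    "length of credit history": 0.30,
    "current account balance": 0.15,
}

def plain_language_reasons(contributions, top_n=2):
    """Turn numeric contributions into simple, everyday sentences."""
    ranked = sorted(contributions.items(), key=lambda item: item[1], reverse=True)
    sentences = []
    for rank, (feature, _) in enumerate(ranked[:top_n], start=1):
        sentences.append(
            f"The factor that mattered {'most' if rank == 1 else 'next most'} "
            f"in this decision was your {feature}."
        )
    return " ".join(sentences)

print(plain_language_reasons(feature_contributions))
```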

5. Prepare implementers to deploy your AI system

When human decision-makers are meaningfully involved in an AI-assisted outcome, they must be appropriately trained and prepared to use your model’s results responsibly and fairly.

Training should include conveying basic knowledge about the nature of machine learning, and about the limitations of AI and automated decision-support technologies. It should also encourage users (the implementers) to view the benefits and risks of deploying these systems in terms of their role in helping humans to reach a judgement, rather than replacing that judgement.

If the system is wholly automated and provides a result directly to the decision recipient, it should be set up to provide understandable explanations to them.

6. Consider how to build and present your explanation

Finally, you should think about how you will build and present your explanation to an individual, whether you are doing this through a website or app, in writing or in person.

Considerations of context should be the cornerstone of building and presenting your explanation. You should consider contextual factors (domain, impact, data, urgency, audience) to help you decide how you should deliver appropriate information to the individual.

Considering context should also help you to customise what sort of (and how much) information you provide to decision recipients. The way you explain your model’s results to decision subjects may be quite different from how you provide information to other relevant audiences, such as auditors, who may also need explanations but with different degrees of depth and technical detail. Differentiating the way you provide information in an audience-responsive manner can help you avoid explanation fatigue in your customers (by saying too much), while also protecting your intellectual property and safeguarding your system from being gamed.

When delivering an explanation to decision recipients, a layered approach can be helpful because it presents people with the most relevant information about the decision, while making further explanations easily accessible if they are required. The explanations you have identified as priorities can go in the first layer, while the others can go into a second layer.
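If you deliver explanations through a website or app, one way to realise this layering is simply to separate the priority explanations from the rest and only expand the second layer on request. A minimal sketch, with illustrative explanation text that is an assumption for the example rather than prescribed wording:

```python
# Illustrative layered explanation for one decision.
layered_explanation = {
    "first_layer": {
        "rationale": "Your application was declined mainly because of missed repayments.",
        "responsibility": "This decision was reviewed by our lending team; contact details are below.",
    },
    "second_layer": {
        "data": "The decision used your repayment history and account information.",
        "fairness": "The model is tested regularly for unequal outcomes between groups.",
    },
}

def render(explanation, show_all=False):
    """Show the priority explanations first; expand the rest only if requested."""
    for name, text in explanation["first_layer"].items():
        print(f"{name}: {text}")
    if show_all:
        for name, text in explanation["second_layer"].items():
            print(f"{name}: {text}")

render(layered_explanation)                 # default view for the decision recipient
render(layered_explanation, show_all=True)  # expanded view on request
```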

You should also think about what information to provide in advance of a decision, and what information to provide to individuals about a decision in their particular case.