Task 1: Select priority explanations by considering the domain, use case and impact on the individual

At a glance

  • Getting to know the different types of explanation will help you identify the dimensions of an explanation that decision recipients will find useful.
  • In most cases, explaining AI-assisted decisions involves identifying what is happening in your AI system and who is responsible. This means you should prioritise the rationale and responsibility explanation types.
  • The setting and sector you are working in are important in working out what kinds of explanation you should be able to provide. You should therefore consider domain context and use case.
  • In addition, consider the potential impacts of your use of AI to decide which other types of explanation you should provide. This will also help you think about how much information is required, and how comprehensive it should be.
  • Choosing what to prioritise is not an exact science. Your choices may reflect what most of the people you make decisions about want to know, but other individuals will still benefit from the explanation types you have not prioritised, and these will probably also be useful for your own accountability or auditing purposes.

Checklist

☐ We have prioritised rationale and responsibility explanations. We have therefore put in place and documented processes that optimise the end-to-end transparency and accountability of our AI model.

☐ We have considered the setting and sector in which our AI model will be used, and how this affects the types of explanation we provide.

☐ We have considered the potential impacts of our system, and how these affect the scope and depth of the explanation we provide.

In more detail

Introduction

You should consider what types of explanation you need before you start the design process for your AI system, or procurement of a system if you are outsourcing it. You can think of this as ‘explanation-by-design’. It involves operationalising the principles we set out in ‘The basics of explaining AI’. The following considerations will help you to decide which explanation types you should choose.

Familiarise yourself with the different types of explanation

We introduced the different types of explanation in Part 1 of this guidance, ‘The basics of explaining AI’. Making sure you are aware of the range of explanations will provide you with the foundations for considering the different dimensions of an explanation that decision recipients will find useful.

Prioritise rationale and responsibility explanation

It is likely that most explanations of AI-assisted decisions will involve knowing both what your system is doing and who is responsible. In other words, they are likely to involve both rationale and responsibility explanations.

To set up your AI use case to cover these explanations, it is important to consider how you are going to put in place and document processes that:

  • optimise the end-to-end transparency and accountability of your AI model. This means making sure your organisation’s policies, protocols and procedures are lined up to ensure you can provide clear and accessible process-based explanations when you design and deploy your AI system; and
  • ensure that the intelligibility and interpretability of your AI model is prioritised from the outset. This also means that the explanation you offer to affected individuals appropriately covers the other types of explanation, given the use case and possible impacts of your system.

Considering how to address these explanation types at the beginning of your process should provide you with a reasonable understanding of how your system works and who is responsible at each stage of the process. This will also mean the information is available for decision recipients when an explanation is provided.

Please note that although we recommend you prioritise documenting information about the rationale and responsibility explanations at the start of the process, you may not wish to provide this in the first layer of explanation to the decision subject (Task 6). However, we do recommend that this information is provided as part of the explanation where practical.
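To make the documentation step more concrete, the sketch below shows one possible way to record who is responsible, and why key choices were made, at each stage of the AI lifecycle, so that the information is ready to hand when a rationale or responsibility explanation is requested. This is a minimal, illustrative Python sketch: the class names, fields and placeholder values are assumptions made for the example, not a structure this guidance prescribes.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class StageRecord:
    """Who is responsible, and why key choices were made, at one lifecycle stage."""
    stage: str              # e.g. "data collection", "model training", "deployment"
    responsible_role: str   # role or team accountable for this stage
    rationale_notes: str    # brief notes on why key choices were made


@dataclass
class ExplanationRecord:
    """Process-based documentation that can later back rationale and
    responsibility explanations given to decision recipients."""
    system_name: str
    use_case: str
    stages: List[StageRecord] = field(default_factory=list)

    def responsibility_summary(self) -> str:
        """Plain-language summary of who is responsible at each stage."""
        return "\n".join(f"{s.stage}: {s.responsible_role}" for s in self.stages)


# Illustrative placeholder values only
record = ExplanationRecord(
    system_name="cv-screening-model",
    use_case="job application filtering",
    stages=[
        StageRecord("data collection", "HR data team",
                    "CV fields chosen to match the role requirements"),
        StageRecord("model training", "ML engineering team",
                    "Simpler, more interpretable model preferred"),
        StageRecord("deployment", "Recruitment manager",
                    "A human reviewer makes the final interview decision"),
    ],
)
print(record.responsibility_summary())
```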

Consider domain or sector context and use case

When you are trying to work out what kinds of explanation you should provide, a good starting point is to consider the setting and sector your AI system will be used in.

In certain safety-critical, high-stakes and highly regulated domains, sector-specific standards for explanations may largely dictate the sort of information you need to provide to affected individuals.

For instance, AI applications that are employed in safety-critical domains like medicine will have to be set up to provide the safety and performance explanation in line with the established standards and expectations of that sector. Likewise, in a high-stakes setting like criminal justice, where biased decision-making is a significant concern, the fairness explanation will play an important and necessary role.

By understanding your AI application’s domain context and setting, you may also gain insight into people’s expectations of the content and scope of similar explanations previously offered. Doing due diligence and researching these sorts of sector-specific expectations will help you to draw on background knowledge as you decide which types of AI explanation to include as part of your model’s design and implementation processes.

Consider potential impacts

Paying attention to the setting in which your model will be deployed will also put you in a good position to consider its potential impacts. This will be especially useful for selecting your explanations, because it will alert you to the impact-specific explanations that you should include as part of the more general explanation of your AI system.

Assessing the potential impact of your AI model on the basis of its use case will help you to determine the extent to which you need to include the fairness, safety and performance, and more general impact explanations, together with the scope and depth of these explanation types.

Assessing your AI model’s potential impact will also help you understand how comprehensive your explanation needs to be. This includes the risks of deploying the system, and the risks for the person receiving the AI-assisted decision. It will allow you to make sure that the scope and depth of the explanations you offer line up with the real-world impacts of the specific case. For example, an AI system that triages customer service complaints for a luxury goods retailer will carry a different (and much lower) explanatory burden than one that triages patients in a hospital critical care unit.

Once you have worked through these considerations, you should choose the most appropriate explanations for your use case (in addition to the rationale and responsibility explanations you have already prioritised). You should document these choices and why you made them.

Prioritise remaining explanations

Once you have identified the other explanations that are relevant to your use case, you should make these available to the people subject to your AI-assisted decisions. You should also document why you made these choices.

Further reading

See ‘The basics of explaining AI’ for more on the types of explanation.

Examples for choosing suitable explanation types

AI-assisted recruitment

An AI system is deployed as a job application filtering tool for a company looking to fill a vacancy. The system classifies decision recipients (who receive either a rejection or an invitation to interview) by processing social and demographic data about individual attributes and social patterns implied in the submitted CVs. A resulting concern might be that bias is ‘baked into’ the dataset, and that discriminatory features or their proxies were used in the model’s training and processing. For example, a strong correlation in the dataset between attendance at ‘all-male’ secondary schools and successful placement in higher-paying executive positions might lead a model trained on this data to discriminate against non-male applicants when it recommends whom to interview for higher-paying, executive-level roles.

Which explanation types should you choose in this case?

  • Prioritise rationale and responsibility explanations: it is highly likely that you will need to include the responsibility and rationale explanations, to tell the individual affected by the AI-assisted hiring decision who is responsible for the decision, and why the decision was reached.
  • Consider domain or sector context and use case: the recruitment and human resources domain context suggests that bias should be a primary concern in this case.
  • Consider potential impacts: considering the impact of the AI system on the applicant relates to whether they think the decision was justified, and whether they were treated fairly. Your explanation should be comprehensive enough for the applicant to understand the risks involved in your use of the AI system, and how you have mitigated these risks.
  • Prioritise other explanation types: this example shows how understanding the specific area of use (the domain) and the particular nature of the data tells you which explanation types the decision recipient will need. Here, a fairness explanation is required because the decision recipient wants to know that they have not been discriminated against, for instance because of historical patterns of inequity reflected in the biased social and demographic data the system was trained on. The individual may also want an impact explanation, to understand how the recruiter considered the AI tool’s impact on the people whose data it processes, and a data explanation, to understand what data was used to decide whether they would be invited to interview (see the sketch below).
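The sketch below is a minimal illustration of one kind of check that could sit behind a fairness explanation for this use case: comparing the filtering model’s interview-invitation rates across demographic groups on a test set. It is written in Python with hypothetical names and illustrative placeholder values, and is a sketch of the general idea rather than a complete or prescribed fairness assessment.

```python
from collections import defaultdict
from typing import Dict, List, Tuple


def selection_rates(decisions: List[Tuple[str, bool]]) -> Dict[str, float]:
    """Interview-invitation rate per group.

    `decisions` is a list of (group_label, invited_to_interview) pairs
    taken from the filtering model's outputs on a test set.
    """
    totals: Dict[str, int] = defaultdict(int)
    invited: Dict[str, int] = defaultdict(int)
    for group, was_invited in decisions:
        totals[group] += 1
        if was_invited:
            invited[group] += 1
    return {g: invited[g] / totals[g] for g in totals}


def disparity_ratio(rates: Dict[str, float]) -> float:
    """Ratio of the lowest to the highest selection rate (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())


# Illustrative placeholder outputs only - not real recruitment data
test_outputs = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]
rates = selection_rates(test_outputs)
print(rates)
print(disparity_ratio(rates))
```

A check like this does not by itself show the absence of discrimination (proxies for protected characteristics can still drive individual decisions), but documenting its results is one way to give decision recipients evidence that fairness has been examined.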

AI-assisted medical diagnosis

An AI system utilises image recognition algorithms to support a radiologist in identifying cancer in scans. It is trained on a dataset containing millions of images from patient MRI scans and learns by processing billions of corresponding pixels. It is possible that the system may fail unexpectedly when confronted with unfamiliar data patterns or unforeseen environmental anomalies (objects it does not recognise). Such a system failure might lead to catastrophic physical harm being done to an affected patient.

Which explanation types should you choose in this case?

  • Prioritise rationale and responsibility explanations: it is highly likely that you will need to include the responsibility and rationale explanations, to tell the individual affected by the AI-assisted diagnostic decision who is responsible for the decision, and why the decision was reached. An accurate and reliable rationale explanation will also better support the evidence-based judgement of the medical professionals involved.
  • Consider domain or sector context and use case: the medical domain context suggests that demonstrating the safety and optimum performance of the AI system should be a primary concern in this case. Developers should consult domain-specific requirements and standards to determine the scope, depth, and types of explanation that are reasonably expected.
  • Consider potential impacts: the impact of the AI system on the patient is high if the system makes an incorrect diagnosis. Your explanation should be comprehensive enough for the patient to understand the risks involved in your use of the AI system, and how you have mitigated these risks.
  • Prioritise other explanation types: the safety and performance explanation provides justification, where possible, that an AI system is sufficiently robust, accurate, secure and reliable, and that codified testing and validation procedures have been able to certify these attributes. Depending on the specific use case, the fairness explanation might also be important to prioritise, for instance to demonstrate that the data collected is of sufficient quality, quantity and representativeness for the system to perform accurately across different demographic groups (see the sketch below).
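The sketch below illustrates one kind of evidence that could support the safety and performance (and fairness) explanations in this case: computing the diagnostic model’s sensitivity separately for different patient or acquisition-site groups on a held-out validation set. It is a hypothetical Python sketch with illustrative placeholder values, not a substitute for the sector’s established testing and validation standards.

```python
from collections import defaultdict
from typing import Dict, List, Tuple


def sensitivity_by_group(results: List[Tuple[str, bool, bool]]) -> Dict[str, float]:
    """Sensitivity (true positive rate) per group.

    `results` holds (group_label, cancer_present, model_flagged) triples
    from a held-out validation set.
    """
    positives: Dict[str, int] = defaultdict(int)
    detected: Dict[str, int] = defaultdict(int)
    for group, cancer_present, model_flagged in results:
        if cancer_present:
            positives[group] += 1
            if model_flagged:
                detected[group] += 1
    return {g: detected[g] / positives[g] for g in positives}


# Illustrative placeholder results only - not real clinical data
validation = [
    ("scanner_site_1", True, True), ("scanner_site_1", True, False),
    ("scanner_site_2", True, True), ("scanner_site_2", True, True),
    ("scanner_site_1", False, False), ("scanner_site_2", False, True),
]
print(sensitivity_by_group(validation))
```

Reporting metrics like these per group, alongside how unfamiliar or out-of-scope inputs are handled, is one way to make the safety and performance explanation concrete for patients and clinicians.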