Task 5: Prepare implementers to deploy your AI system

At a glance

  • In cases where decisions are not fully automated, implementers need to be meaningfully involved.

  • This means that they need to be appropriately trained to use the model’s results responsibly and fairly.

  • Their training should cover:
    • the basics of how machine learning works;
    • the limitations of AI and automated decision-support technologies;
    • the benefits and risks of using these systems to assist decision-making, particularly how they help humans come to judgements rather than replacing that judgement; and
    • how to manage cognitive biases, including both decision-automation bias and automation-distrust bias.

Checklist

☐ Where there is a ‘human in the loop’ we have trained our implementers to:

☐ Understand the associations and correlations that link the input data to the model’s prediction or classification.

☐ Interpret which correlations are consequential for providing a meaningful explanation by drawing on their domain knowledge or the decision recipient’s specific circumstances.

☐ Combine the chosen correlations and outcome determinants with what they know of the individual affected to come to their conclusion.

☐ Apply the AI model’s results to the individual case at hand, rather than uniformly across decision recipients.

☐ Recognise situations where decision-automation bias and automation-distrust bias can occur, and mitigate against them.

☐ Understand the strengths and limitations of the system.

In more detail

Introduction

When human decision-makers are meaningfully involved in deploying an AI-assisted outcome (ie where the decision is not fully automated), you should make sure that you have appropriately trained and prepared them to use your model’s results responsibly and fairly.

Your implementer training should therefore include conveying basic knowledge about the statistical and probabilistic character of machine learning, and about the limitations of AI and automated decision-support technologies. Your training should avoid any anthropomorphic (or human-like) portrayals of AI systems. You should also encourage the implementers to view the benefits and risks of deploying these systems in terms of their role in helping humans come to judgements, rather than replacing that judgement.

Further, your training should address any cognitive or judgemental biases that may occur when implementers use AI systems in different settings. This should be based on the use case, highlighting, for example, where over-reliance or over-compliance with the results of a computer-based system can occur (known as decision-automation bias), or where under-reliance or under-compliance with the results can occur (automation-distrust bias). Cognitive biases may include overconfidence in a prediction based on the historical consistency of data, illusions that any clustering of data points necessarily indicates significant insights, and discounting of social patterns that exist beyond the statistical result. They can also include situations where the implementer disregards the outcome of the system due to scepticism or distrust of the technology.

Individuals are likely to expect that decisions produced about them do not treat them in terms of demographic probabilities and statistics. You should therefore apply inferences that are drawn from a model’s results to the particular circumstances of the decision recipient.

Basics of implementer training

Educate implementers about cognitive biases

Good implementer preparation begins with anticipating the pitfalls of bias-in-use that AI decision-support systems tend to give rise to.

Your training about responsible implementation should therefore start with educating users about the two main types of AI-related bias: decision-automation bias and automation-distrust bias.

  • Decision-automation bias: Users of AI decision-support systems may become hampered in their critical judgment and situational awareness as a result of overconfidence in the objectivity or certainty of the AI system.

    This may lead to an over-reliance on the automated system’s results. Implementers may lose the capacity to identify and respond to the system’s faults, errors, or deficiencies because they become complacent and defer to its directions and cues.

    Decision-automation bias may also lead to a tendency to over-comply with the system’s results. Implementers may defer to the perceived infallibility of the system and become unable to detect problems emerging from its use because they fail to hold the results against available information. This may be exacerbated by underlying fears or concerns about how ‘disagreeing with’ or ‘going against’ a system’s results might create accountability or liability issues in wider organisational or legal contexts.

    Both over-reliance and over-compliance may lead to what is known as ‘out-of-loop syndrome’. This is where the degradation of the role of human reason and the deskilling of critical thinking hampers the user’s ability to complete the tasks that have been automated. This may reduce their ability to respond to system failure and may lead both to safety hazards and dangers of discriminatory harm. 
  • Automation-distrust bias: At the other extreme, users of an automated decision-support system may tend to disregard its contributions to evidence-based reasoning as a result of their distrust or scepticism about AI technologies in general. They may also over-prioritise the importance of prudence, common sense, and human expertise, failing to see how AI decision-support may help them to reduce implicit cognitive biases and understand complex patterns in data otherwise unavailable to human-scale reasoning.

    If users have an aversion to the non-human and amoral character of automated systems, this could also lead them to mistrust these technologies in high impact contexts such as healthcare, transportation, and law.

To combat risks of decision-automation bias and automation-distrust bias, there are certain actions you should take:

  • Build a comprehensive training and preparation program for all implementers and users that explores both AI-related judgment biases (decision-automation and automation-distrust biases) and human-based cognitive biases.
  • Educate implementers about this spectrum of biases, including examples of the particular misjudgements that may occur when people weigh statistical evidence, such as:
    • overconfidence in prediction based on the historical consistency of data;
    • illusions that any clustering of data points necessarily indicates significant insights (a minimal sketch of this ‘clustering illusion’ follows this list); and
    • discounting of societal patterns that exist beyond the statistical results.
  • Make explicit and operationalise strong regimes of accountability when systems are deployed, in order to steer human decision-makers to act on the basis of good reasons, solid inferences, and critical judgment, even as they are supported by AI-generated results.
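
To make the ‘clustering illusion’ concrete, training materials could include a short demonstration that a clustering algorithm will find whatever number of groups it is asked for, even in pure noise. The sketch below is illustrative only: it assumes Python with numpy and scikit-learn available, and the data are synthetic placeholders rather than output from any real system.

    # k-means will happily partition pure noise into k tidy-looking clusters.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    noise = rng.uniform(size=(500, 2))  # uniform random points: no real structure

    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(noise)
    score = silhouette_score(noise, kmeans.labels_)

    # Exactly 3 "clusters" and a plausible-looking score come back,
    # even though the data contain no meaningful groups at all.
    print(f"cluster sizes: {np.bincount(kmeans.labels_)}")
    print(f"silhouette score: {score:.2f}")

A demonstration like this helps implementers see that a clean-looking grouping in a model’s output is not, by itself, evidence of a significant insight.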

Educate implementers about the strengths and limitations of AI decision-support systems

Your training about responsible deployment should also include a balanced and comprehensive technical view of the possible limitations and advantages of AI decision-support systems.

This means, first and foremost, giving users a working knowledge of the statistical and probabilistic methods behind the operation of these systems. Continuing education and professional development in this area is crucial to ensure that people using and implementing these systems have sufficient understanding. It also gives users a realistic and demystified picture of what these computation-based models are and what they can and cannot do.

A central component of this training should be to identify the limitations of statistical and probabilistic generalisation. Your training materials and trainers should stress the aspect of uncertainty that underlies all statistical and probabilistic reasoning. This will help users and implementers to approach AI-generated results with an appropriately critical eye and a clear understanding of indicators of uncertainty like confidence intervals and error bars.
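
For instance, training could walk through how such an indicator of uncertainty is produced and read. The sketch below is illustrative only (it assumes Python with numpy, and the ‘evaluation results’ are simulated placeholders, not drawn from any real system); it computes a bootstrap confidence interval around an observed accuracy figure:

    import numpy as np

    rng = np.random.default_rng(42)

    # Placeholder: 1 = the model's prediction was correct, 0 = it was not,
    # for 200 held-out cases (simulated here at roughly 80% accuracy).
    correct = rng.binomial(1, 0.8, size=200)

    # Resample the evaluation set many times and recompute accuracy each time.
    boot_accs = [rng.choice(correct, size=correct.size, replace=True).mean()
                 for _ in range(10_000)]

    low, high = np.percentile(boot_accs, [2.5, 97.5])
    print(f"observed accuracy: {correct.mean():.3f}")
    print(f"95% bootstrap CI:  [{low:.3f}, {high:.3f}]")
    # The point estimate alone hides this spread; an implementer told the
    # model is "80% accurate" should also ask how wide the interval is.

A follow-up exercise might ask implementers how their reading of a result should change if the interval were twice as wide.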

Your training should also stress the variety of performance and error metrics available to measure statistical results, and the ways these metrics may sometimes conflict or be at cross-purposes with each other.
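
A standard example of such a conflict is the trade-off between precision and recall. The minimal sketch below is illustrative only: it assumes Python with scikit-learn, and uses synthetic data in place of any real system. It shows how raising a classifier’s decision threshold improves one metric at the expense of the other:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import precision_score, recall_score
    from sklearn.model_selection import train_test_split

    # Synthetic, imbalanced binary classification task (~20% positives).
    X, y = make_classification(n_samples=2000, weights=[0.8], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    probs = model.predict_proba(X_te)[:, 1]

    for threshold in (0.3, 0.5, 0.7):
        preds = (probs >= threshold).astype(int)
        p = precision_score(y_te, preds, zero_division=0)
        r = recall_score(y_te, preds)
        print(f"threshold {threshold:.1f}: precision {p:.2f}, recall {r:.2f}")
    # No single threshold maximises both metrics at once.

Which threshold is appropriate cannot be read off the metrics themselves; it depends on the relative costs of false positives and false negatives in the use case.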

When educating users about the advantages of these AI systems, your training should involve example-based demonstrations of the capacities of applied data science. This will show how useful and informative patterns and inferences can be drawn from large amounts of data that may otherwise have escaped human insight, given time pressures as well as sensory and cognitive limitations. Educating users on the advantages of AI systems should also involve example-based demonstrations of how responsible and bias-aware model design can support and improve the objectivity of human decision-making through equitable information processing.

Train implementers to use statistical results as support for evidence-based reasoning

Another crucial part of your training about responsible implementation is preparing users to be able to see the results of AI decision-support as assisting evidence-based reasoning rather than replacing it. As a general rule, we use the results of statistical and probabilistic analysis to help guide our actions. When done properly, this kind of analysis offers a solid basis of empirically derived evidence that assists us in exercising sound and well-supported judgment about the matters it informs.

Having a good understanding of the factors that produce the result of a particular AI decision-support system means that we are able to see how these factors (for instance, input features that weigh heavily in determining a given algorithmically generated output) make the result rationally acceptable.
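
How an implementer surfaces those determining factors depends on the model and the explanation tooling in use. As one illustrative sketch only (assuming Python with scikit-learn, a simple logistic regression, and hypothetical feature names and data), per-feature contributions to a single decision can be read off a linear model directly:

    # For a logistic regression, each feature's contribution to one decision
    # is its coefficient times the (standardised) feature value.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    feature_names = ["income", "years_at_address", "missed_payments"]  # hypothetical
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] - 2 * X[:, 2] + rng.normal(size=500) > 0).astype(int)

    scaler = StandardScaler().fit(X)
    model = LogisticRegression(max_iter=1000).fit(scaler.transform(X), y)

    case = scaler.transform(X[:1])            # one decision recipient's record
    contributions = model.coef_[0] * case[0]  # per-feature contribution to the log-odds

    for name, c in sorted(zip(feature_names, contributions),
                          key=lambda nc: -abs(nc[1])):
        print(f"{name:>18}: {c:+.2f}")
    # The implementer can then judge whether the heavily weighted factors
    # actually make sense for this individual's circumstances.

For non-linear models, post-hoc attribution tools play a similar role, and their outputs call for the same critical scrutiny as the model’s own results.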

You should train your implementers to understand how the output of a particular AI system can support their reasoning. You should train them to grasp how they can optimally draw on the determining factors that lie behind the logic of this output to exercise sound judgment about the instance under consideration. Your training should emphasise the critical function played by rational justification in meeting the reasonable expectations of decision recipients who desire, or require, explanations. Carrying out this function demands that users and implementers offer well-founded arguments justifying the outcome of concern: arguments that make sense, are expressed in clear and understandable terms, and are accessible enough to be rationally assessed by all affected individuals, especially the most vulnerable or disadvantaged.

Train implementers to think contextually and holistically

The results of AI decision-support systems are based on population-level correlations that are derived from training data and that therefore do not refer specifically to the actual circumstances, background, and abilities of the individual decision recipient. They are statistical generalisations that have picked up relationships between the decision recipient’s input data and patterns or trends that the AI model has extracted from the underlying distribution of that model’s original dataset. For this reason, you should train your implementers to think contextually and holistically about how these statistical generalisations apply to the specific situation of the decision recipient.

This training should involve preparing implementers to work with an active awareness of the socio-technical aspect of implementing AI decision-assistance technologies from an integrative and human-centred point of view. You should train implementers to apply the statistical results to each particular case with appropriate context-sensitivity and ‘big picture’ sensibility, so that the dignity they show to decision subjects is supported by interpretive understanding, reasonableness, and empathy.

Example-based training materials should illustrate how applying contextually-sensitive judgment can help implementers weigh the AI system’s results against the unique circumstances of the decision recipient’s life situation. In this way, your implementers can take the translation of the system’s rationale into usable and easily understandable reasons (Task 4) and integrate it into more holistic considerations about how those reasons actually apply to a particular decision subject. Training implementers to integrate Task 4 into context-sensitive reasoning will enable them to treat the inferences drawn from the results of the model’s computation as evidence that supports a broader, more rounded, and coherent understanding of the individual situations of the decision subject and other affected individuals.