
Annexe 5: Argument-based assurance cases


An assurance case is a set of structured claims, arguments, and evidence that gives confidence that an AI system will possess the particular qualities or properties that need to be assured. Take, for example, a ‘safety and performance’ assurance case. This would involve providing an argument, supported by evidence, that a system possesses the properties that allow it to function safely, securely, and reliably given the challenges of its operational context.

Though argument-based assurance cases historically arose in safety-critical domains (as ‘safety cases’ for software-based technologies), the methodology is now widely used. This is because it offers a reasonable way to structure and document an anticipatory, goal-based, and procedural approach to innovation governance. This stands in contrast to older, more reactive and prescriptive methods, which are increasingly challenged by the complex and rapidly evolving character of emerging AI technologies.

This older prescriptive approach stressed the application of one-size-fits-all general standards that specified how systems should be built, and it often treated governance as a retrospective check-box exercise. Argument-based assurance takes a different tack. It starts at the inception of an AI project and plays an active role at every stage of the design and use lifecycle. It begins with high-level normative goals, derived from a context-anchored, impact- and risk-based assessment of each specific AI application, and then sets out structured arguments demonstrating:

  1. how such normative requirements address the impacts and risks associated with the system’s use in its specified operating environment;
  2. how activities undertaken across the design and deployment workflow assure the properties of the system that are needed for the realisation of these goals; and
  3. how appropriate monitoring and re-assessment measures have been set up to ensure the effectiveness of the implemented controls.

We have included a list of background reading and resources to help you explore the governance and standards landscape surrounding argument-based assurance cases. This includes consolidated standards for system and software assurance from the International Organization for Standardization, the International Electrotechnical Commission, and the Institute of Electrical and Electronics Engineers (the ISO/IEC/IEEE 15026 series), as well as the Object Management Group’s Structured Assurance Case Metamodel (SACM). It also includes references for several of the main assurance notations and platforms, such as Goal Structuring Notation (GSN), Claims, Arguments and Evidence (CAE) notation, NOR-STA, and Dynamic Safety Cases (DSC).

Main components of argument-based assurance cases

While it is beyond the scope of this guidance to cover details of the different methods of building assurance cases, it may be useful to provide a broad view of how the main components of any comprehensive assurance case fit together.

Top-level normative goals

These are the high-level aims of the system that address the risks and potential harms its use may cause in its defined operating environment, and which are therefore in need of assurance. In the context of process-based explanation, these include fairness and bias-mitigation, responsibility, safety and optimal performance, and beneficial and non-harmful impact. Starting from each of these normative goals, building an assurance case involves identifying the properties and qualities a given system must possess to achieve the specified goal in light of the risks and challenges of its operational context.

Claims

These are the properties, qualities, traits, or attributes that need to be assured in order for the top-level normative goals to be realised. For instance, in a fairness and bias-mitigation assurance case, the property ‘target variables or their measurable proxies do not reflect underlying structural biases or discrimination’ is one of several such claims. As a central component of structured argumentation, it needs to be backed both by appropriate supporting arguments about how the relevant design and development activities ensured that structural biases were, in fact, not incorporated into target variables, and by corresponding evidence documenting these activities.

In some methods of structured argumentation like GSN, claims are qualified by context components, which:

  • clarify the scope of a given claim;
  • provide definitions and background information;
  • make relevant assumptions about the system and environment explicit; and
  • spell out risks and risk-mitigation needs associated with the claim across the system’s design and operation lifecycle.

Claims may also be qualified by justification components, that is, clarifications of:

  • why the claims have been chosen; and
  • how they provide a solution or means of realisation for the specified normative goal.

In general, the addition of context and justification components reinforces the accuracy and completeness of claims, allowing them to support the top-level goals of the system. This focus on precision, clarification, and thoroughness is crucial to establishing confidence through the development of an effective assurance case.
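
To make these components concrete, here is a minimal sketch of how a claim, qualified by context and justification components, might be represented in code. This is an illustration only: the class names (Claim, ContextItem, Justification) and the example context items are hypothetical assumptions, not drawn from any GSN tool or the SACM specification.

    from dataclasses import dataclass, field

    @dataclass
    class ContextItem:
        """Clarifies the scope of a claim: definitions, background
        information, explicit assumptions, or associated risks."""
        description: str

    @dataclass
    class Justification:
        """Explains why a claim was chosen and how it helps realise
        the specified normative goal."""
        rationale: str

    @dataclass
    class Claim:
        """A property or quality of the system that needs to be assured."""
        statement: str
        context: list[ContextItem] = field(default_factory=list)
        justification: Justification | None = None

    # The fairness and bias-mitigation claim quoted above; the context
    # items and rationale are hypothetical examples.
    fairness_claim = Claim(
        statement=("Target variables or their measurable proxies do not "
                   "reflect underlying structural biases or discrimination"),
        context=[
            ContextItem("Scope: the data collection and model development "
                        "stages of the lifecycle"),
            ContextItem("Assumption: the population affected by the system "
                        "is adequately represented in the development data"),
        ],
        justification=Justification(
            "Biased target variables would prevent the system from "
            "realising the top-level fairness and bias-mitigation goal"
        ),
    )

Representing claims this way keeps their scope, assumptions, and rationale explicit and inspectable alongside the claims themselves.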

Arguments

These support claims by linking them with evidence and other supporting claims through reasoning. Arguments provide warrants for claims by establishing an inferential relationship that connects the proposed property with a body of evidence and argumentative backing sufficient to establish its rational acceptability or truth. For example, in a safety and performance assurance case that has ‘system is sufficiently robust’ as one of its claims, a possible argument might be ‘training processes included an augmentation element where adversarial examples and perturbations were employed to model harsh real-world conditions’. Such an argument would then be backed by evidentiary support that this actually happened during the design and development of the AI model.

While justified arguments are always backed by some body of evidence, they may also be supported by subordinate claims and assumptions (ie claims without further backing that are taken as self-evident or true). Subordinate claims underwrite arguments (and the higher-level claims they support) on the basis of their own arguments and evidence. In a structured argument, there will often be multiple levels of subordinate claims, which work together to provide justification for the rational acceptability or truth of the top-level claims.

Evidence

This is the collection of artefacts and documentation that provides evidential support for the claims made in the assurance case. A body of evidence is formed by objective, demonstrable, and repeatable information recorded during the production and use of a system. This underpins the arguments justifying the assurance claims. In some instances, a body of evidence may be organised in an evidence repository (as in SACM), where that primary information can be accessed along with secondary information about evidence management, interpretation of evidence, and clarification of the evidentiary support underlying the claims of the assurance case.
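
Extending the sketch above, arguments and evidence can be modelled as the links that warrant each claim. Again, every name here (Argument, EvidenceItem, AssuranceCase) and the repository path are illustrative assumptions, and the Claim class is simplified by omitting the context and justification components shown earlier. The example reuses the robustness claim from the arguments subsection.

    from dataclasses import dataclass, field

    @dataclass
    class EvidenceItem:
        """Objective, demonstrable, and repeatable information recorded
        during the production and use of the system."""
        artefact: str   # eg a training log, test report, or audit record
        location: str   # where it sits in the evidence repository

    @dataclass
    class Argument:
        """Warrants a claim by linking it, through reasoning, to a body
        of evidence and any subordinate claims."""
        reasoning: str
        evidence: list[EvidenceItem] = field(default_factory=list)
        subordinate_claims: list["Claim"] = field(default_factory=list)

    @dataclass
    class Claim:
        """A property to be assured (the context and justification
        components of the previous sketch are omitted for brevity)."""
        statement: str
        arguments: list[Argument] = field(default_factory=list)

    @dataclass
    class AssuranceCase:
        """A top-level normative goal and the claims that realise it."""
        goal: str
        claims: list[Claim] = field(default_factory=list)

    # The safety and performance example from the arguments subsection;
    # the repository path is a hypothetical placeholder.
    case = AssuranceCase(
        goal="Safety and optimal performance",
        claims=[Claim(
            statement="System is sufficiently robust",
            arguments=[Argument(
                reasoning=("Training processes included an augmentation "
                           "element where adversarial examples and "
                           "perturbations were employed to model harsh "
                           "real-world conditions"),
                evidence=[EvidenceItem(
                    artefact="Adversarial training and augmentation records",
                    location="evidence-repository/training/augmentation",
                )],
            )],
        )],
    )

Because subordinate claims can carry their own arguments and evidence, the same structure nests to the multiple levels of argumentation described above.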

Advantages of approaching process-based explanation through argument-based assurance cases

There are several advantages to using argument-based assurance to organise the documentation of your innovation practices for process-based explanations:

  • Assurance cases demand a proactive and end-to-end understanding of the impacts and risks that come from each specific AI application. Their effective execution is anchored in building practical controls which show that these impacts and risks have been appropriately managed. Because of this, assurance cases encourage the planned integration of good process-based governance controls. This, in turn, helps ensure that the goals governing the development of AI systems are met, and provides a deliberate, argument-based method of documented assurance to demonstrate that they have been. In argument-based assurance, anticipatory and goal-driven governance and documentation processes work hand-in-glove, mutually strengthening best practices and improving the quality of the products and services they support.
  • Argument-based assurance involves a method of reasoning-based governance practice rather than a task- or technology-specific set of instructions. This allows AI designers and developers to tackle a diverse range of governance activities with a single method of using structured argument to assure properties that meet standard requirements and mitigate risks. It also means that procedures for building assurance cases are uniform, and that their documentation can be more readily standardised and automated (as seen, for instance, in notations and platforms such as GSN, SACM, CAE, and NOR-STA).
  • Argument-based assurance can enable effective multi-stakeholder communication. It can generate confidence that a given AI application possesses desired properties on the basis of explicit, well-reasoned, and evidence-backed grounds. Done effectively, assurance cases clearly and precisely convey to various stakeholder groups, through structured arguments backed by documentary evidence, that specified goals have been achieved, that risks have been mitigated, and that the system properties needed to meet those goals have been assured.
  • Using the argument-based assurance methodology also enables assurance cases to be customised and tailored to the relevant audiences, as the sketch following this list illustrates. Because these assurance cases are built on structured arguments (claims, justifications, and evidence) expressed in natural language, they are readily understood by those who are not technical specialists. While detailed technical arguments and evidence may support assurance cases, their basis in everyday reasoning makes them especially amenable to clear, non-technical summary. A summary of an assurance case provided to a decision recipient can then be backed by a more detailed version, which includes extended structured arguments better tailored to experts, independent assessors, and auditors. Likewise, the evidence used for an assurance case can be organised to fit the audience and context of explanation. In this way, any potentially commercially sensitive or privacy-impinging information that forms part of the body of evidence may be held in an evidence repository made accessible only to a more limited audience of internal or external overseers, assessors, and auditors.
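
As a rough illustration of this kind of tailoring, the sketch below reuses the hypothetical AssuranceCase structure from the earlier example to render the same case as a plain-language summary for decision recipients and as a detailed view for assessors, with evidence locations withheld from audiences that lack repository access. The helper functions are assumptions for illustration, not features of any assurance platform.

    def summary_view(case: AssuranceCase) -> str:
        """A plain-language summary suitable for decision recipients."""
        lines = [f"Goal: {case.goal}"]
        lines += [f"Assured claim: {c.statement}" for c in case.claims]
        return "\n".join(lines)

    def detailed_view(case: AssuranceCase, repository_access: bool) -> str:
        """The full structured argument for assessors and auditors;
        evidence locations are withheld from audiences without access
        to the evidence repository."""
        lines = [f"Goal: {case.goal}"]
        for claim in case.claims:
            lines.append(f"Claim: {claim.statement}")
            for argument in claim.arguments:
                lines.append(f"  Argument: {argument.reasoning}")
                for item in argument.evidence:
                    where = item.location if repository_access else "[restricted]"
                    lines.append(f"    Evidence: {item.artefact} ({where})")
        return "\n".join(lines)

    print(summary_view(case))                           # for decision recipients
    print(detailed_view(case, repository_access=True))  # for assessors and auditors

The same underlying case thus feeds both the non-technical summary and the expert-facing detail, with access to sensitive evidence controlled at the repository level.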

Governing procurement practices and managing stakeholder expectations through tailored assurance

For both vendors and customers (ie developers and users/procurers), argument-based assurance may provide a reasonable way to govern procurement practices and to mutually manage expectations. If a vendor takes the deliberate and anticipatory approach to explanation-aware AI system design that argument-based assurance demands, they will be better able to assure (through justification, evidence, and documentation) crucial properties of their models to those interested in acquiring them.

By offering an evidence-backed assurance portfolio in advance, a vendor will be able to demonstrate that their products have been designed with appropriate normative goals in mind, and to assure potential customers that these goals have been realised across development processes. This will then allow users/procurers to pass on this part of the process-based explanation to decision recipients and affected parties. It will also allow procurers to assess more effectively whether the assurance portfolio meets the normative criteria and AI innovation standards they are looking for, based on their organisational culture, domain context, and application interests.

Using assurance cases will also enable standards-based independent assessment and third-party audit. The tasks of information management and sharing can be undertaken efficiently between developers, users, assessors, and auditors. This can provide a common and consolidated platform for process-based explanation, organising both the presentation of the details of assurance cases and the accessibility of the information that supports them. This will streamline communication across all affected stakeholders, while preserving the trust-generating aspects of procedural and organisational transparency all the way down.

Further reading

Resources for exploring documentation and argument-based assurance


Relevant standards and regulations on argument-based assurance and safety cases

  • ISO/IEC/IEEE 15026-1:2019, Systems and software engineering — Systems and software assurance — Part 1: Concepts and vocabulary.
  • ISO/IEC 15026-2:2011, Systems and software engineering — Systems and software assurance — Part 2: Assurance case.
  • ISO/IEC 15026-3:2015, Systems and software engineering — Systems and software assurance — Part 3: System integrity levels.
  • ISO/IEC 15026-4:2012, Systems and software engineering — Systems and software assurance — Part 4: Assurance in the life cycle.
  • Object Management Group, Structured Assurance Case Metamodel (SACM), Version 2.1 beta, March 2019.
  • Ministry of Defence. Defence Standard 00-42 Issue 2, Reliability and Maintainability (R&M) Assurance Guidance, Part 3: R&M Case, 6 June 2003.
  • Ministry of Defence. Defence Standard 00-55 (Part 1) Issue 4, Requirements for Safety Related Software in Defence Equipment, Part 1: Requirements, December 2004.
  • Ministry of Defence. Defence Standard 00-55 (Part 2) Issue 2, Requirements for Safety Related Software in Defence Equipment, Part 2: Guidance, 21 August 1997.
  • Ministry of Defence. Defence Standard 00-56, Safety Management Requirements for Defence Systems, Part 1: Requirements, Issue 4, 1 June 2007.
  • Ministry of Defence. Defence Standard 00-56, Safety Management Requirements for Defence Systems, Part 2: Guidance on Establishing a Means of Complying with Part 1, Issue 4, 1 June 2007.
  • UK Civil Aviation Authority. CAP 760: Guidance on the Conduct of Hazard Identification, Risk Assessment and the Production of Safety Cases for Aerodrome Operators and Air Traffic Service Providers, 13 January 2006.
  • The Offshore Installations (Safety Case) Regulations 2005, No. 3117.
  • The Control of Major Accident Hazards (Amendment) Regulations 2005, No. 1088.
  • Health and Safety Executive. Safety Assessment Principles for Nuclear Facilities. HSE, 2006.
  • The Railways and Other Guided Transport Systems (Safety) Regulations 2006, UK Statutory Instrument 2006 No. 599.
  • Council Directive 91/440/EEC on the development of the Community’s railways, 29 July 1991.

Background readings on methods of argument-based assurance

  • Ankrum, T. S., & Kromholz, A. H. (2005). Structured assurance cases: Three common standards. In Ninth IEEE International Symposium on High-Assurance Systems Engineering (HASE'05) (pp. 99-108).
  • Ashmore, R., Calinescu, R., & Paterson, C. (2019). Assuring the machine learning lifecycle: Desiderata, methods, and challenges. arXiv preprint arXiv:1905.04223.
  • Barry, M. R. (2011). CertWare: A workbench for safety case production and analysis. In 2011 Aerospace conference (pp. 1-10). IEEE.
  • Bloomfield, R., & Netkachova, K. (2014). Building blocks for assurance cases. In 2014 IEEE International Symposium on Software Reliability Engineering Workshops (pp. 186-191). IEEE.
  • Bloomfield, R., & Bishop, P. (2010). Safety and assurance cases: Past, present and possible future–an Adelard perspective. In Making Systems Safer (pp. 51-67). Springer, London.
  • Cârlan, C., Barner, S., Diewald, A., Tsalidis, A., & Voss, S. (2017). ExplicitCase: integrated model-based development of system and safety cases. International Conference on Computer Safety, Reliability, and Security (pp. 52-63). Springer.
  • Denney, E., & Pai, G. (2013). A formal basis for safety case patterns. In International Conference on Computer Safety, Reliability, and Security (pp. 21-32). Springer.
  • Denney, E., Pai, G., & Habli, I. (2015). Dynamic safety cases for through-life safety assurance. IEEE/ACM 37th IEEE International Conference on Software Engineering (Vol. 2, pp. 587-590). IEEE.
  • Denney, E., & Pai, G. (2018). Tool support for assurance case development. Automated Software Engineering, 25(3), 435-499.
  • Despotou, G. (2004). Extending the safety case concept to address dependability. Proceedings of the 22nd International System Safety Conference.
  • Gacek, A., Backes, J., Cofer, D., Slind, K., & Whalen, M. (2014). Resolute: an assurance case language for architecture models. ACM SIGAda Ada Letters, 34(3), 19-28.
  • Ge, X., Rijo, R., Paige, R. F., Kelly, T. P., & McDermid, J. A. (2012). Introducing goal structuring notation to explain decisions in clinical practice. Procedia Technology, 5, 686-695.
  • Gleirscher, M., & Kugele, S. (2019). Assurance of System Safety: A Survey of Design and Argument Patterns. arXiv preprint arXiv:1902.05537.
  • Górski, J., Jarzębowicz, A., Miler, J., Witkowicz, M., Czyżnikiewicz, J., & Jar, P. (2012). Supporting assurance by evidence-based argument services. International Conference on Computer Safety, Reliability, and Security (pp. 417-426). Springer, Berlin, Heidelberg.
  • Habli, I., & Kelly, T. (2014). Balancing the formal and informal in safety case arguments. In VeriSure: Verification and Assurance Workshop, co-located with Computer-Aided Verification (CAV).
  • Hawkins, R., Habli, I., Kolovos, D., Paige, R., & Kelly, T. (2015). Weaving an assurance case from design: a model-based approach. 2015 IEEE 16th International Symposium on High Assurance Systems Engineering (pp. 110-117). IEEE.
  • Health Foundation (2012). Evidence: Using safety cases in industry and healthcare.
  • Kelly, T. (1998) Arguing Safety: A Systematic Approach to Managing Safety Cases. Doctoral Thesis. University of York: Department of Computer Science.
  • Kelly, T., & McDermid, J. (1998). Safety case patterns – reusing successful arguments. IEEE Colloquium on Understanding Patterns and Their Application to System Engineering, London.
  • Kelly, T. (2003). A Systematic Approach to Safety Case Management. SAE International.
  • Kelly, T., & Weaver, R. (2004). The goal structuring notation – a safety argument notation. In Proceedings of the Dependable Systems and Networks 2004 Workshop on Assurance Cases.
  • Maksimov, M., Fung, N. L., Kokaly, S., & Chechik, M. (2018). Two decades of assurance case tools: a survey. International Conference on Computer Safety, Reliability, and Security (pp. 49-59). Springer.
  • Nemouchi, Y., Foster, S., Gleirscher, M., & Kelly, T. (2019). Mechanised assurance cases with integrated formal methods in Isabelle. arXiv preprint arXiv:1905.06192.
  • Netkachova, K., Netkachov, O., & Bloomfield, R. (2014). Tool support for assurance case building blocks. In International Conference on Computer Safety, Reliability, and Security (pp. 62-71). Springer.
  • Picardi, C., Hawkins, R., Paterson, C., & Habli, I. (2019). A pattern for arguing the assurance of machine learning in medical diagnosis systems. In International Conference on Computer Safety, Reliability, and Security (pp. 165-179). Springer.
  • Picardi, C., Paterson, C., Hawkins, R. D., Calinescu, R., & Habli, I. (2020). Assurance Argument Patterns and Processes for Machine Learning in Safety-Related Systems. Proceedings of the Workshop on Artificial Intelligence Safety (pp. 23-30). CEUR Workshop Proceedings.
  • Rushby, J. (2015). The interpretation and evaluation of assurance cases. Comp. Science Laboratory, SRI International, Tech. Rep. SRI-CSL-15-01.
  • Strunk, E. A., & Knight, J. C. (2008). The essential synthesis of problem frames and assurance cases. Expert Systems, 25(1), 9-27.