Presentation of the Project

SUMMARY

Safety-critical systems increasingly incorporate autonomous decision-making, bringing Artificial Intelligence (AI) techniques into real-life applications with a very concrete impact on people’s lives. With safety a major concern, problems of opacity, bias and risk are pressing, and creating Trustworthy AI (TAI) is of paramount importance. Yet advances in AI design still struggle to offer technical implementations driven by conceptual knowledge and qualitative approaches.

This project aims to address these limitations by developing design criteria for TAI based on philosophical analyses of transparency, bias and risk, combined with their formalization and technical implementation for a range of platforms, including both supervised and unsupervised learning. We argue that this can be achieved through the explicit formulation of epistemic and normative principles for TAI, their development into formal design procedures, and their translation into computational implementations.

OBJECTIVES

OBJECTIVE 1

To formulate an epistemological and normative analysis of how bias and risk undermine TAI systems, not only with respect to their reliability but also to their social acceptance.

OBJECTIVE 2

To define a comprehensive formal ontology for autonomous decision systems, including a taxonomy of biases and risks and their mutual relations.
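
By way of illustration only, the following minimal Python sketch (with hypothetical class and relation names) indicates the kind of machine-readable taxonomy of biases, risks and their mutual relations that the ontology could encode; the actual ontology would be developed in a dedicated ontology language rather than application code.

```python
from dataclasses import dataclass, field

# Hypothetical, minimal taxonomy sketch: concept nodes for biases and risks,
# linked by an "induces" relation from a bias to the risks it may give rise to.
@dataclass(frozen=True)
class Bias:
    name: str
    kind: str  # e.g. "data", "algorithmic", "societal"

@dataclass(frozen=True)
class Risk:
    name: str
    severity: str  # e.g. "low", "medium", "high"

@dataclass
class Taxonomy:
    induces: dict = field(default_factory=dict)  # Bias -> set of Risks

    def add(self, bias: Bias, risk: Risk) -> None:
        self.induces.setdefault(bias, set()).add(risk)

    def risks_of(self, bias: Bias) -> set:
        return self.induces.get(bias, set())

# Illustrative entries
tax = Taxonomy()
sampling = Bias("sampling bias", "data")
tax.add(sampling, Risk("discriminatory outcome", "high"))
print({r.name for r in tax.risks_of(sampling)})
```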

OBJECTIVE 3

To design symbolic and sub-symbolic formal models for reasoning about safe TAI, and to produce associated verification tools.
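
As a purely illustrative sketch of what such verification tools could check, the fragment below uses the Z3 SMT solver (an assumed tool, not the project's chosen stack) to verify a toy safety property of a hypothetical symbolic decision rule.

```python
# Illustrative only: verifying a toy safety property of a symbolic decision rule
# with the Z3 SMT solver (pip install z3-solver). The rule and property are
# hypothetical placeholders, not the project's actual models.
from z3 import Real, Bool, Implies, And, Not, Solver, sat

risk_score = Real("risk_score")
human_review = Bool("human_review")

# Toy decision rule: decisions with risk_score above 0.8 must be escalated.
rule = Implies(risk_score > 0.8, human_review)

# Safety property: no high-risk decision is taken without human review.
prop = Not(And(risk_score > 0.8, Not(human_review)))

s = Solver()
s.add(rule, Not(prop))  # search for a counterexample to the property
print("property holds" if s.check() != sat else f"counterexample: {s.model()}")
```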

OBJECTIVE 4

To develop a novel computational framework endowing TAI systems with explanation capabilities, aimed at mitigating the opacity of Machine Learning (ML) models.
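
To make the intended direction concrete, the following minimal sketch applies one standard post-hoc explanation technique, permutation feature importance with scikit-learn; it exemplifies the kind of capability the framework targets, not the project's own method, and the dataset and model are illustrative assumptions.

```python
# Minimal illustration of a standard post-hoc explanation technique
# (permutation feature importance with scikit-learn); the dataset and model
# are placeholders used only to show the workflow.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Estimate how much each feature contributes to the model's predictions
# by measuring the score drop when that feature is randomly permuted.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda p: -p[1])[:5]:
    print(f"{name}: {score:.3f}")
```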