Seminar: “How to develop a responsible AI: explaining decisions and avoiding discriminations”

Wednesday, June 8, 2022, Aula Caldirola

by Greta Greco and Daniele Regoli (Banca Intesa)

Explainable Artificial Intelligence (XAI) is a set of techniques for making both the technical and non-technical aspects of Artificial Intelligence (AI) systems understandable. Among XAI techniques, counterfactual explanations aim to provide end users with a set of features (and their corresponding values) that would need to change in order to achieve a desired outcome. Current approaches rarely take into account the feasibility of the actions needed to realize the proposed explanations, and in particular they fall short of considering the causal impact of such actions. In this seminar, we present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology for generating counterfactual explanations that captures by design the underlying causal relations in the data, while at the same time providing feasible recommendations to reach the proposed profile.
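To make the idea of a counterfactual explanation concrete, the following is a minimal illustrative sketch, not the CEILS method described above: for a toy linear scoring model, it computes, for each feature in isolation, the value that feature would need to take for the decision to flip. All names and numbers are invented for illustration; CEILS additionally accounts for causal relations between features, which this sketch deliberately ignores.

```python
import numpy as np

def counterfactual_single_feature(x, w, b):
    """For a linear model 'approve if w.x + b > 0', return the current
    score and, for each feature with nonzero weight, the value that
    feature alone would need so that the score reaches zero."""
    score = float(np.dot(w, x) + b)
    suggestions = {}
    for i, wi in enumerate(w):
        if wi == 0.0:
            continue
        # Solve w.x' + b = 0 when only feature i is allowed to change.
        suggestions[i] = x[i] - score / wi
    return score, suggestions

# Toy credit model: features = [income, debt] (hypothetical).
w = np.array([0.5, -1.0])
b = -1.0
x = np.array([3.0, 2.0])   # score = 0.5*3 - 1*2 - 1 = -1.5 -> rejected

score, sugg = counterfactual_single_feature(x, w, b)
print(score)  # -1.5
print(sugg)   # {0: 6.0, 1: 0.5} -> raise income to 6, or cut debt to 0.5
```

A real counterfactual generator would also constrain the suggested changes to be feasible (e.g. age cannot decrease) and, as in CEILS, propagate interventions through the causal structure of the data.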

Along with explainability, increasing importance is being given to assessing to what extent AI models can be considered fair, in terms of the impact they may have on individuals. Since algorithms are developed by humans and learn from historical data, there is a real chance that biases and prejudices are replicated, and even exacerbated, by models at scale. In this seminar we present how to measure discrimination in data and which strategies can mitigate bias in the predictions of AI models.
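As a concrete example of measuring discrimination, one of the simplest fairness metrics is the demographic parity difference: the gap in positive-prediction rates between two groups defined by a sensitive attribute. The sketch below is illustrative only (the data and group labels are invented), and is one of several metrics the seminar may cover, not necessarily the speakers' specific methodology.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Difference in P(y_pred = 1) between group 1 and group 0,
    where 'sensitive' is a binary group indicator per sample."""
    y_pred = np.asarray(y_pred, dtype=float)
    sensitive = np.asarray(sensitive)
    rate_0 = y_pred[sensitive == 0].mean()  # positive rate, group 0
    rate_1 = y_pred[sensitive == 1].mean()  # positive rate, group 1
    return rate_1 - rate_0

# Toy predictions: group 0 gets positives 3/4 of the time, group 1 only 1/4.
y_pred    = [1, 0, 1, 1, 0, 0, 1, 0]
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, sensitive))  # -0.5
```

A value of 0 would mean both groups receive positive predictions at the same rate; large negative or positive values signal a disparity that bias-mitigation strategies (pre-, in-, or post-processing) aim to reduce.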
