FINAL RESEARCH MEETING

THE MEETING WILL TAKE PLACE AT

UNIVERSITY OF MILAN
VIA FESTA DEL PERDONO 3 (main entrance at no. 7)
20122 MILAN
ROOMS 302 and 304

SIGNS AT BOTH ENTRANCES WILL DIRECT YOU TO THE ROOMS, OR YOU MAY ASK THE PORTER.

KEYNOTES WILL TAKE PLACE IN ROOM 302.

PROGRAMME

Day 1: 01 July 2025

09:00 – 09:30  Welcome
09:30 – 10:30  Keynote (Room 302): Emily Postan (Edinburgh), Beyond bias – the risks of using machine learning to shape social ontologies
10:30 – 11:00  Break
11:00 – 11:30
  Room 302: Baldi & D’Asaro, A System for Ethical Reasoning in Symbiotic AI
  Room 304: Ginammi, The Bias Paradox for Word Embeddings
11:30 – 12:00
  Room 302: Ceragioli & Primiero, A Proof-Theoretic Approach to Individual and Counterfactual Fairness
  Room 304: Quaresmini & Zanotti, A Human-Inspired Strategy to Mitigate Algorithmic Misgendering in AGR
12:00 – 12:30
  Room 302: Buda, Bias Amplification Chains in Generative AI
  Room 304: Bottazzi, The Country Incomprehensible to Machines: Philosophy of Language against Algorithmic Surveillance
12:30 – 14:30  Lunch
14:30 – 15:00
  Room 302: Papiri et al., Evaluating the Trade-Off between Data Quality and Fairness in Post-Processing Bias Mitigation
  Room 304: Zanzotto, Generative AI, Deception, and Manipulation: Mapping the Field
15:00 – 15:30
  Room 302: Russo, Perceptions of Explainable AI: How Presentation Is Content
  Room 304: Naibo & Petrolo, Algorithms and ML Systems: An Epistemic Comparison
15:30 – 16:00
  Room 302: Mombelli et al., Evaluating AI-Generated Explanations: A Case Study on LLM-Assisted Candidate Assessment
  Room 304: Zanotti et al., A Relational View on AI Risk: Complex Socio-Technical Systems and Multi-Risk
16:00 – 16:30  Break
16:30 – 17:30  Keynote (Room 302): Edemilson Paraná (LUT University), Infrastructures, scale and risk: the composition fallacy in Financial AI
18:00 – 20:00  Reception: Loggiato, Via della Signora
Day 2: 02 July 2025

09:30 – 10:30  Keynote (Room 302): Sander Beckers (Cornell / UCL), Counterfactual Reasoning about, and by, Large Language Models
10:30 – 11:00  Break
11:00 – 11:30
  Room 302: Ferrario & Porello, An Ontological Analysis of Biases Grounded in DOLCE
  Room 304: Zanna, Ethical-Driven AI Development for Trustworthy Digital Public Services
11:30 – 12:00
  Zanotti, Distrusting Trust in AI
12:00 – 12:30
  Room 302: Manganini, Data Speak but Sometimes Lie: A Game-Theoretic Approach to Data Bias and Algorithmic Fairness
  Room 304: Fossa, Trust, Reliance, and the Game of Semantic Extension
12:30 – 14:30  Lunch

INDUSTRIAL TRACK
14:30 – 15:00  Andrea Guerra (Kube): The Cost of Fairness
15:00 – 15:30  Federico Ricciuti (CRIF): Safety and Security Risks of GenAI Applications
15:30 – 16:00  Alessandro Castelnovo (Intesa Sanpaolo): Responsible AI at Intesa Sanpaolo
16:00 – 16:30  Lorenzo Gattamorta & Simone Favaro (Deloitte): From Principles to Practice: Deloitte’s Approach for Trustworthy AI
16:30 – 17:00  Break
17:00 – 17:30
  Room 302: Baratella, ONTrust: A Reference Ontology of Trust
  Room 304: Sancricca & Cappiello, Novel Quality Metrics for Measuring Data Bias in Data-Centric AI
17:30 – 18:00
  Room 302: Fumagalli et al., Computational Approaches to Concepts Representation: A Whirlwind Tour
  Room 304: Vorster, Navigating Inductive Risk in Machine Learning for Hadron Therapy
18:00 – 18:30  Closing

Call for Abstracts (CLOSED)

FINAL RESEARCH MEETING  
“BRIO – BIAS, RISK, OPACITY in AI”  

1st – 2nd July 2025  
Department of Philosophy, University of Milan, Italy 

BRIO – BIAS, RISK, OPACITY in AI: Design, Verification, and Development of Trustworthy AI is a collaborative National Research Project involving the University of Milan, Politecnico di Milano, the University of Genoa, the National Research Council of Italy in Trento, and the University of Naples. The project explores the challenges and limitations of Trustworthy AI through philosophical analyses of transparency, bias, and risk, alongside their formalization and technical implementation. 

The project’s closing event will serve as a key opportunity for interdisciplinary exchange, bringing together experts and scholars in trustworthy, explainable, ethical, and fair AI. 

 
Invited Speakers 

Emily Postan (University of Edinburgh)

Sander Beckers (Cornell University / UCL) 

Edemilson Paraná (LUT University) 

Industrial Track – Bridging Industry and Academia 

In collaboration with MIRAI, a spinoff of the Department of Philosophy at the University of Milan, the Research Meeting will feature a dedicated Industrial Track. This session will bring together invited partners from leading companies that develop and implement AI systems across various industries. 

The Industrial Track aims to provide insight into how businesses are navigating the rapidly evolving AI landscape while fostering stronger collaborations between industry and academia. Join us for a unique opportunity to engage with experts at the forefront of AI innovation. 

Call for Abstracts 

We welcome contributions in the form of abstracts. Authors of accepted papers will be invited to present their work in person in a 20-minute talk, followed by a 10-minute Q&A session. Additionally, selected authors may be invited to submit extended versions of their papers for consideration in a special issue of a reputable journal. 

Areas of interest include: 

  • Philosophy of Science and Technology 
  • Ethics of Technology 
  • Logics and Formal Ontologies Applied to Technology 
  • Foundational Analysis and Ontology-Based Modeling of Trust, Bias, and Risk 
  • Explainable AI 
  • Machine Learning and Deep Learning Methods for Trustworthy AI  

 
Submission of abstracts 

Deadline: contributions should be submitted by 01/04/2025.

EXTENDED DEADLINE: 11/04/2025

Submission through this Google Form   

Format: Submissions should include a brief description of the work (maximum 100 words) for reviewer assignment and an abstract (up to 1000 words) plus references, submitted in PDF format. 

Notification: The program committee will inform contributors of the results by 01/05/2025.

Organizing and Program Committee 

Ceragioli, Leonardo, UNIMI 

Chiffi, Daniele, POLIMI 

Coraglia, Greta, UNIMI 

Ferrario, Roberta, ISTC-CNR 

Kubyshkina, Ekaterina, UNIMI 

Manganini, Chiara, UNIMI 

Porello, Daniele, UNIGE 

Prevete, Roberto, UNINA 

Primiero, Giuseppe, UNIMI 

Schiaffonati, Viola, POLIMI 

Zanotti, Giacomo, POLIMI