
Call for Abstracts
FINAL RESEARCH MEETING
“BRIO – BIAS, RISK, OPACITY in AI”
1st – 2nd July 2025
Department of Philosophy, University of Milan, Italy
BRIO – BIAS, RISK, OPACITY in AI: Design, Verification, and Development of Trustworthy AI is a collaborative National Research Project involving the University of Milan, Politecnico di Milano, the University of Genoa, the National Research Council of Italy in Trento, and the University of Naples. The project explores the challenges and limitations of Trustworthy AI through philosophical analyses of transparency, bias, and risk, alongside their formalization and technical implementation.
The project’s closing event will serve as a key opportunity for interdisciplinary exchange, bringing together experts and scholars in trustworthy, explainable, ethical, and fair AI.
Invited Speakers
Emily Postan (University of Edinburgh)
Sander Beckers (Cornell University)
Edemilson Paraná (LUT University)
Industrial Track – Bridging Industry and Academia

In collaboration with MIRAI, a spinoff of the Department of Philosophy at the University of Milan, the Research Meeting will feature a dedicated Industrial Track. This session will bring together invited partners from leading companies that develop and implement AI systems across various industries.
The Industrial Track aims to provide insight into how businesses are navigating the rapidly evolving AI landscape while fostering stronger collaborations between industry and academia. Join us for a unique opportunity to engage with experts at the forefront of AI innovation.
Call for Abstracts
We welcome contributions in the form of abstracts. Authors of accepted papers will be invited to present their work in person in a 20-minute talk, followed by a 10-minute Q&A session. Additionally, selected authors may be invited to submit extended versions of their papers for consideration in a special issue of a reputable journal.
Areas of interest include:
- Philosophy of Science and Technology
- Ethics of Technology
- Logics and Formal Ontologies Applied to Technology
- Foundational Analysis and Ontology-Based Modeling of Trust, Bias, and Risk
- Explainable AI
- Machine Learning and Deep Learning Methods for Trustworthy AI
Submission of abstracts
Deadline: contributions should be submitted by 01/04/2025.
EXTENDED DEADLINE: 11/04/2025
Submission through this Google Form
Format: Submissions should include a brief description of the work (maximum 100 words) for reviewer assignment and an abstract (up to 1000 words) plus references, submitted in PDF format.
Notification: The program committee will inform contributors of the results by 01/05/2025.
Organizing and Program Committee
Ceragioli, Leonardo, UNIMI
Chiffi, Daniele, POLIMI
Coraglia, Greta, UNIMI
Ferrario, Roberta, ISTC-CNR
Kubyshkina, Ekaterina, UNIMI
Manganini, Chiara, UNIMI
Porello, Daniele, UNIGE
Prevete, Roberto, UNINA
Primiero, Giuseppe, UNIMI
Schiaffonati, Viola, POLIMI
Zanotti, Giacomo, POLIMI