Robustness and Fairness in Algorithmic Recourse

Speaker: Francesco Leofante - Centre for Explainable AI at Imperial College London (UK)
  Wednesday 19 June 2024 at 14:30, Aula Tessari (in-person only)

Abstract: Counterfactual explanations (CXs) are advocated as ideally suited to providing algorithmic recourse for subjects affected by the predictions of machine learning models. While CXs can be beneficial to affected individuals, recent work has exposed severe issues with the robustness of state-of-the-art methods for computing them. Since a lack of robustness may compromise the fairness of CXs, techniques to mitigate this risk are needed. In this talk we will introduce the problem of (lack of) robustness, discuss its implications for fairness, and present a recent solution we developed to compute robust (and fair) CXs.
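To make the robustness issue concrete, the following toy sketch (not the speaker's method; all names and numbers are illustrative assumptions) computes a minimal counterfactual for a linear classifier and shows how a small shift in the model weights, e.g. after retraining, can invalidate the promised recourse.

```python
# Illustrative sketch: a counterfactual explanation (CX) for a linear
# classifier, and how a small model shift can invalidate it.
import numpy as np

w, b = np.array([1.0, -1.0]), 0.0        # toy linear model: f(x) = sign(w.x + b)
x = np.array([-1.0, 0.5])                # rejected instance: w.x + b = -1.5 < 0

# Minimal counterfactual: move x just past the decision boundary along w.
margin = -(w @ x + b)                    # score distance to the boundary
x_cf = x + (margin + 1e-3) * w / (w @ w) # smallest change that flips the label

print(w @ x_cf + b > 0)                  # True: CX is valid for this model

# A slight weight perturbation (e.g. after retraining) breaks validity.
w_shift = w + np.array([0.1, 0.0])
print(w_shift @ x_cf + b > 0)            # False: the recourse is lost
```

The minimal CX sits arbitrarily close to the decision boundary, which is exactly why it is fragile: robust CX methods instead search for counterfactuals that remain valid under a whole set of plausible model changes.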

Bio: Francesco is an Imperial College Research Fellow affiliated with the Centre for Explainable Artificial Intelligence at Imperial College London. His research focuses on safe and explainable AI, with special emphasis on counterfactual explanations and algorithmic recourse. Since 2022, he has led the project “ConTrust: Robust Contrastive Explanations for Deep Neural Networks”, a four-year effort devoted to the formal study of robustness issues arising in XAI. More details about Francesco and his research can be found at https://fraleo.github.io/.


Contact person
Alessandro Farinelli

Publication date
23 May 2024
