Robustness and Fairness in Algorithmic Recourse

Speaker:  Francesco Leofante - Centre for Explainable AI at Imperial College London (UK)
  Wednesday, June 19, 2024 at 2:30 PM, Aula Tessari (in-person only)

Abstract: Counterfactual explanations (CXs) are advocated as being ideally suited to providing algorithmic recourse for subjects affected by the predictions of machine learning models. While CXs can be beneficial to affected individuals, recent work has exposed severe issues related to the robustness of state-of-the-art methods for obtaining CXs. Since a lack of robustness may compromise the fairness of CXs, techniques to mitigate this risk are in order. In this talk we will begin by introducing the problem of (lack of) robustness, discuss its implications for fairness, and present a recent solution we developed to compute robust (and fair) CXs.
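The robustness issue the abstract refers to can be illustrated with a toy example. The sketch below (not the speaker's method; all weights, margins and noise scales are illustrative assumptions) computes a counterfactual for a simple linear classifier, then checks how often the counterfactual's favourable prediction survives small random perturbations of the model, the kind of change that, e.g., retraining can induce:

```python
import numpy as np

# Toy linear classifier: predict 1 (favourable) if w @ x + b > 0.
# Weights and bias are illustrative, not from any real model.
w = np.array([1.0, -2.0])
b = -0.5

def predict(x, w=w, b=b):
    return int(w @ x + b > 0)

def counterfactual(x, w=w, b=b, margin=0.1):
    """Closest point (in L2) on the other side of the decision
    boundary: project x onto the boundary, then step past it by
    `margin` in score space."""
    score = w @ x + b
    step = (score + np.sign(score) * margin) / (w @ w)
    return x - step * w

x = np.array([0.0, 1.0])   # rejected instance: predict(x) == 0
cx = counterfactual(x)     # candidate recourse: predict(cx) == 1

# Robustness check: perturb the weights slightly and count how often
# the counterfactual's favourable outcome is lost.
rng = np.random.default_rng(0)
flips = sum(
    predict(cx, w + rng.normal(scale=0.05, size=2), b) != predict(cx)
    for _ in range(100)
)
print(predict(x), predict(cx), flips)
```

A counterfactual placed exactly on the decision boundary (margin 0) would flip under almost any model change; a robust CX trades some proximity for a larger margin so that recourse remains valid after plausible model updates.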

Bio: Francesco is an Imperial College Research Fellow affiliated with the Centre for Explainable Artificial Intelligence at Imperial College London. His research focuses on safe and explainable AI, with special emphasis on counterfactual explanations and algorithmic recourse. Since 2022, he has led the project “ConTrust: Robust Contrastive Explanations for Deep Neural Networks”, a four-year effort devoted to the formal study of robustness issues arising in XAI. More details about Francesco and his research can be found at

Programme Director
Alessandro Farinelli

Publication date
May 23, 2024