Speaker:
Vito Paolo Pastore
- Università di Genova - Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS)
Monday, 1 July 2024
at 14:30
Sala Verde
Abstract:
In recent decades, the field of Artificial Intelligence (AI) has experienced remarkable growth, driven by the development of complex machine learning models with massive numbers of parameters. However, concerns have arisen about the inherent biases within AI models, which can perpetuate and exacerbate disparities present in certain datasets, raising fairness and ethical issues. In particular, when trained on biased datasets, deep neural networks tend to rely on spurious correlations for their predictions, failing to learn general and robust representations.
In recent years, the computer vision community has increasingly focused on designing and developing methods for model debiasing to improve model generalization by emphasizing actual semantic features over bias-aligned ones.
This talk will focus on unsupervised model debiasing approaches, which assume no prior information about the bias, in the context of image classification tasks.
After introducing the problem and its landscape, we will show why accurately identifying bias-conflicting and bias-aligned samples is essential for effective bias mitigation. We will then describe a new bias identification method based on anomaly detection: in biased datasets, bias-conflicting samples can be regarded as outliers with respect to the bias-aligned distribution in the feature space of an intentionally biased model, and can therefore be detected with an anomaly detection method.
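The idea above can be illustrated with a minimal, hypothetical sketch: if bias-aligned samples form a dense cluster in the biased model's feature space while bias-conflicting samples fall away from it, a generic anomaly detector (here scikit-learn's IsolationForest, on synthetic stand-in features; the contamination rate is an assumed guess at the bias-conflicting fraction, not a value from the talk) can flag the conflicting samples as outliers.

```python
# Hypothetical sketch: flagging bias-conflicting samples as outliers
# in the feature space of an intentionally biased model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in features: bias-aligned samples form a dense cluster,
# bias-conflicting samples lie far from it.
aligned = rng.normal(loc=0.0, scale=1.0, size=(950, 16))
conflicting = rng.normal(loc=6.0, scale=1.0, size=(50, 16))
features = np.vstack([aligned, conflicting])

# Fit an anomaly detector on the pooled features; contamination=0.05
# is an assumed bias-conflicting fraction for this toy example.
detector = IsolationForest(contamination=0.05, random_state=0)
labels = detector.fit_predict(features)  # -1 = outlier (bias-conflicting)

flagged = np.where(labels == -1)[0]
```

In practice the features would come from the penultimate layer of the intentionally biased network rather than a synthetic Gaussian mixture; the sketch only shows the outlier-detection step itself.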
By combining the proposed bias identification approach with upsampling and augmentation of bias-conflicting data in a two-step strategy, we achieve state-of-the-art performance on both synthetic and real benchmark datasets.
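The second step of such a strategy can be sketched as follows: once bias-conflicting indices have been identified, replicate those samples (here with a simple horizontal flip as a toy augmentation) so they carry more weight in the next training round. The array shapes, indices, and upsampling factor below are all assumed for illustration, not taken from the work being presented.

```python
# Hypothetical sketch of the upsampling-and-augmentation step for
# identified bias-conflicting samples.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((100, 8, 8, 3))     # stand-in image batch (N, H, W, C)
conflicting_idx = np.arange(90, 100)    # assumed bias-conflicting indices

repeat = 5  # assumed upsampling factor
extra = np.repeat(images[conflicting_idx], repeat, axis=0)
extra = extra[:, :, ::-1, :]            # horizontal flip as a toy augmentation

balanced = np.concatenate([images, extra], axis=0)
```

A real pipeline would draw on a richer augmentation policy and feed the rebalanced set into a second, debiased training stage; the sketch only shows the rebalancing mechanics.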