Mining Impersonification Bias in LLMs via Survey Filling
Year:
2025
Product type:
Journal article
ANVUR type:
Journal article
Language:
English
Peer-reviewed:
Yes
Journal name:
INFORMATION
Journal ISSN:
2078-2489
Volume:
16
Issue:
11
Page range:
1-21
Keywords:
large language models, personas, bias, stereotypes
Brief description of contents:
In this paper, we introduce a survey-based methodology to audit LLM-generated personas by simulating 200 US residents and collecting responses to socio-demographic questions in a zero-shot setting. We investigate whether LLMs default to standardized profiles, how these profiles differ across models, and how conditioning on specific attributes affects the resulting portrayals. Our findings reveal that LLMs often produce homogenized personas that underrepresent demographic diversity and that conditioning on attributes such as gender, ethnicity, or disability may trigger stereotypical shifts. These results highlight implicit biases in LLMs and underscore the need for systematic approaches to evaluate and mitigate fairness risks in model outputs.
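The audit loop described above could be sketched roughly as follows. This is a minimal illustration, not the authors' instrument: `query_llm` is a hypothetical stand-in for any chat-completion client, and the survey question shown is an invented example.

```python
from collections import Counter

# Hypothetical stand-in for a real LLM API call (e.g., a chat-completion
# endpoint); for demonstration it returns a fixed answer, mimicking the
# homogenized responses reported in the paper.
def query_llm(prompt: str) -> str:
    return "White"

# Illustrative socio-demographic item; the paper's actual survey
# questions are not reproduced here.
QUESTION = "What is your ethnicity? Answer with a single word."

def simulate_personas(n: int = 200) -> Counter:
    """Zero-shot persona simulation: each run asks the model to answer
    as a simulated US resident, with no few-shot examples."""
    answers: Counter = Counter()
    for _ in range(n):
        prompt = (
            "You are a randomly selected resident of the United States. "
            f"Please fill in this survey question.\n{QUESTION}"
        )
        answers[query_llm(prompt).strip()] += 1
    return answers

counts = simulate_personas(200)
```

With a real model behind `query_llm`, a heavily skewed distribution in `counts` (one answer dominating the 200 simulated residents) would be the kind of homogenization signal the study measures; conditioning could be probed by prepending attributes (e.g., gender or disability) to the prompt and comparing the resulting distributions.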