Gender Bias
Gender bias in artificial intelligence (AI) models, particularly large language models (LLMs) and machine learning systems, is a significant concern; research in this area focuses on identifying and mitigating the perpetuation of societal stereotypes. Current work investigates bias across multiple modalities, including text generation, machine translation, image generation, and speech processing, and employs techniques such as adversarial training, counterfactual analysis, and prompt engineering to reduce bias in model outputs. Understanding and addressing this bias is crucial for ensuring fairness, equity, and trustworthiness in AI applications across diverse sectors, from healthcare and finance to education and employment.
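One of the techniques mentioned above, counterfactual analysis, can be sketched in a few lines: swap gendered terms in an input to produce a counterfactual variant, then compare the model's score on the two versions. The word-pair table and the `score` callable below are illustrative assumptions for this sketch, not the method of any specific paper; real evaluations use much larger term lists and handle ambiguous mappings (e.g. possessive "her") more carefully.

```python
# Minimal sketch of counterfactual analysis for gender bias.
# The pair list is deliberately tiny and the mapping is imperfect
# (e.g. "his" can correspond to either "her" or "hers") -- it is
# illustrative only, not a production bias benchmark.
GENDER_PAIRS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "man": "woman", "woman": "man",
    "male": "female", "female": "male",
}

def counterfactual(sentence: str) -> str:
    """Build the gender-swapped counterfactual of a sentence."""
    tokens = sentence.split()
    swapped = [GENDER_PAIRS.get(t.lower(), t) for t in tokens]
    return " ".join(swapped)

def bias_gap(score, sentence: str) -> float:
    """Difference in model score between a sentence and its counterfactual.

    `score` is any callable mapping a sentence to a number (e.g. a
    sentiment or toxicity score from some model -- an assumption here).
    A gap near zero suggests the model treats both variants alike.
    """
    return score(sentence) - score(counterfactual(sentence))
```

Usage might look like `bias_gap(sentiment_model, "she is a doctor")`, where a large positive or negative gap flags outputs that shift when only the gendered terms change.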
Papers
Beats of Bias: Analyzing Lyrics with Topic Modeling and Gender Bias Measurements
Danqing Chen, Adithi Satish, Rasul Khanbayov, Carolin M. Schuster, Georg Groh
Investigating Gender Bias in Lymph-node Segmentation with Anatomical Priors
Ricardo Coimbra Brioso, Damiano Dei, Nicola Lambri, Pietro Mancosu, Marta Scorsetti, Daniele Loiacono