Gender Bias
Gender bias in artificial intelligence (AI) models, particularly large language models (LLMs) and other machine learning systems, is a significant area of concern; research in this area focuses on identifying and mitigating the perpetuation of societal stereotypes. Current work investigates bias across modalities including text generation, machine translation, image generation, and speech processing, employing techniques such as adversarial training, counterfactual analysis, and prompt engineering to reduce bias in model outputs. Understanding and addressing this bias is crucial for ensuring fairness, equity, and trustworthiness in AI applications across sectors ranging from healthcare and finance to education and employment.
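To make the counterfactual-analysis technique mentioned above concrete, here is a minimal Python sketch: it generates gender-swapped counterfactuals of input sentences and measures how much a model's score changes. The word-pair list, the `counterfactual` and `bias_gap` helpers, and the `score_fn` interface are illustrative assumptions for this sketch, not the method of any paper listed below.

```python
# Minimal sketch of counterfactual analysis for gender bias.
# All names and the word list here are illustrative assumptions.
import re

# Swap list for generating gender counterfactuals. Deliberately small: a real
# system needs POS-aware handling (e.g. possessive "her" -> "his"), name
# lists, and broader vocabulary coverage.
GENDER_PAIRS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "her",
    "man": "woman", "woman": "man",
    "male": "female", "female": "female".replace("fe", "") + "male",  # "male"
}
GENDER_PAIRS["female"] = "male"
GENDER_PAIRS["male"] = "female"

# \b boundaries keep "male" from matching inside "female", etc.
_PATTERN = re.compile(r"\b(" + "|".join(GENDER_PAIRS) + r")\b", re.IGNORECASE)


def counterfactual(sentence: str) -> str:
    """Return the sentence with gendered terms swapped, preserving case."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        repl = GENDER_PAIRS[word.lower()]
        if word.isupper():
            return repl.upper()
        if word[0].isupper():
            return repl.capitalize()
        return repl
    return _PATTERN.sub(swap, sentence)


def bias_gap(sentences, score_fn):
    """Mean absolute difference between a model's score on each sentence and
    on its gender counterfactual. score_fn is a placeholder for any scalar
    model output (sentiment, toxicity, log-likelihood, ...); a gap near 0
    means the probe found no gender-sensitive behavior."""
    gaps = [abs(score_fn(s) - score_fn(counterfactual(s))) for s in sentences]
    return sum(gaps) / len(gaps)


if __name__ == "__main__":
    print(counterfactual("She is a brilliant engineer and he admires her."))
    # -> "He is a brilliant engineer and she admires him."
```

Applied over a corpus of template sentences, `bias_gap` gives only a crude scalar probe; published evaluations typically pair such swaps with curated templates and task-specific metrics rather than a raw word-swap list like this one.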
Papers
Gender Representation in TV and Radio: Automatic Information Extraction methods versus Manual Analyses
David Doukhan, Lena Dodson, Manon Conan, Valentin Pelloin, Aurélien Clamouse, Mélina Lepape, Géraldine Van Hille, Cécile Méadel, Marlène Coulomb-Gully
Evaluation of Large Language Models: STEM education and Gender Stereotypes
Smilla Due, Sneha Das, Marianne Andersen, Berta Plandolit López, Sniff Andersen Nexø, Line Clemmensen