Gender Bias
Gender bias in artificial intelligence (AI) models, particularly large language models (LLMs) and machine learning systems, is a significant area of concern; research focuses on identifying and mitigating models' perpetuation of societal stereotypes. Current work investigates bias across modalities, including text generation, machine translation, image generation, and speech processing, and employs techniques such as adversarial training, counterfactual analysis, and prompt engineering to reduce bias in model outputs. Understanding and addressing this bias is crucial for ensuring fairness, equity, and trustworthiness in AI applications across sectors ranging from healthcare and finance to education and employment.
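One of the mitigation techniques mentioned above, counterfactual analysis, is often implemented via counterfactual data augmentation: swapping gendered terms in training or evaluation text and checking whether model behavior changes. A minimal sketch is shown below; the word-pair lexicon and the `counterfactual` function are illustrative assumptions, and real systems use much larger curated lexicons and handle names, morphology, and pronoun ambiguity (e.g. possessive vs. objective "her"), which this toy version does not.

```python
import re

# Illustrative word-level swap pairs; real lexicons are far larger.
SWAP_PAIRS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "her", "hers": "his",
    "man": "woman", "woman": "man",
    "men": "women", "women": "men",
    "actor": "actress", "actress": "actor",
}

def counterfactual(text: str) -> str:
    """Return a gender-swapped counterfactual of `text`."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        repl = SWAP_PAIRS[word.lower()]
        # Preserve the capitalization of the original token.
        return repl.capitalize() if word[0].isupper() else repl

    # Word boundaries prevent swaps inside longer words (e.g. "theme").
    pattern = re.compile(
        r"\b(" + "|".join(SWAP_PAIRS) + r")\b", flags=re.IGNORECASE
    )
    return pattern.sub(swap, text)
```

Comparing a model's output on a sentence and its counterfactual (e.g. "He is a doctor" vs. "She is a doctor") gives a simple probe for behavioral asymmetries.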
Papers
Language Models Get a Gender Makeover: Mitigating Gender Bias with Few-Shot Data Interventions
Himanshu Thakur, Atishay Jain, Praneetha Vaddamanu, Paul Pu Liang, Louis-Philippe Morency
Gender, names and other mysteries: Towards the ambiguous for gender-inclusive translation
Danielle Saunders, Katrina Olsen
Gender Lost In Translation: How Bridging The Gap Between Languages Affects Gender Bias in Zero-Shot Multilingual Translation
Lena Cabrera, Jan Niehues
Are Fairy Tales Fair? Analyzing Gender Bias in Temporal Narrative Event Chains of Children's Fairy Tales
Paulina Toro Isaza, Guangxuan Xu, Akintoye Oloko, Yufang Hou, Nanyun Peng, Dakuo Wang