Gender Bias
Gender bias in artificial intelligence (AI) models, particularly large language models (LLMs) and machine learning systems, is a significant area of concern, with research focused on identifying and mitigating the perpetuation of societal stereotypes. Current work investigates bias across various modalities, including text generation, machine translation, image generation, and speech processing, employing techniques such as adversarial training, counterfactual analysis, and prompt engineering to reduce bias in model outputs. Understanding and addressing this bias is crucial for ensuring fairness, equity, and trustworthiness in AI applications across diverse sectors, from healthcare and finance to education and employment.
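One of the techniques mentioned above, counterfactual analysis, can be sketched in a few lines: generate a gender-swapped variant of an input and compare a model's scores on the pair. The sketch below is illustrative only; `score_fn` is a hypothetical stand-in for a real model's scoring function (e.g., sentiment or toxicity), the swap list is deliberately tiny, and the whitespace tokenization ignores punctuation and ambiguous mappings (e.g., "her" can mean "him" or "his").

```python
# Minimal counterfactual (gender-swap) analysis sketch.
# Assumption: score_fn is a placeholder for a real model's scalar score.
GENDER_SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "his": "her",
    "man": "woman", "woman": "man",
}

def make_counterfactual(text: str) -> str:
    """Swap gendered words to build a counterfactual variant of the text."""
    out = []
    for tok in text.split():  # naive whitespace tokenization, for illustration
        swap = GENDER_SWAPS.get(tok.lower(), tok)
        # Preserve the original token's capitalization.
        out.append(swap.capitalize() if tok[0].isupper() else swap)
    return " ".join(out)

def bias_gap(score_fn, text: str) -> float:
    """Score difference between a text and its gender-swapped counterfactual.

    A gap near zero suggests the model treats both variants alike;
    a large gap flags a potential gender-dependent behavior.
    """
    return score_fn(text) - score_fn(make_counterfactual(text))
```

In practice the swap lexicon would be far larger (names, titles, pronoun cases) and `score_fn` would wrap an actual model, but the core idea is exactly this paired comparison.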
Papers
MoESD: Mixture of Experts Stable Diffusion to Mitigate Gender Bias
Guorun Wang, Lucia Specia
Less can be more: representational vs. stereotypical gender bias in facial expression recognition
Iris Dominguez-Catena, Daniel Paternain, Aranzazu Jurio, Mikel Galar
An Empirical Study on the Characteristics of Bias upon Context Length Variation for Bangla
Jayanta Sadhu, Ayan Antik Khan, Abhik Bhattacharjee, Rifat Shahriyar