Gender Bias

Gender bias in artificial intelligence (AI) models, particularly large language models (LLMs) and machine learning systems, is a significant concern; research in this area focuses on identifying and mitigating the perpetuation of societal stereotypes. Current work investigates bias across various modalities, including text generation, machine translation, image generation, and speech processing, employing techniques such as adversarial training, counterfactual analysis, and prompt engineering to reduce bias in model outputs. Understanding and addressing this bias is crucial for ensuring fairness, equity, and trustworthiness in AI applications across diverse sectors, from healthcare and finance to education and employment.
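
To make the counterfactual-analysis idea concrete, the sketch below compares a model's score on a sentence against its gender-swapped counterpart; a large average gap suggests the model treats otherwise-identical inputs differently by gender. The `score_fn` callable, the `GENDER_SWAPS` word list, and the toy scorer are illustrative assumptions, not any specific paper's method.

```python
# Minimal sketch of counterfactual analysis for probing gender bias,
# assuming a hypothetical score_fn that maps a sentence to a scalar
# model output (e.g., a sentiment score or a log-probability).
from typing import Callable, Dict, List

# Illustrative subset of word pairs used to build counterfactual sentences.
GENDER_SWAPS: Dict[str, str] = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "man": "woman", "woman": "man",
}

def swap_gender_terms(sentence: str) -> str:
    """Return a counterfactual sentence with gendered terms swapped."""
    tokens = sentence.split()
    return " ".join(GENDER_SWAPS.get(t.lower(), t) for t in tokens)

def counterfactual_gap(sentences: List[str],
                       score_fn: Callable[[str], float]) -> float:
    """Average absolute score difference between each sentence and its
    gender-swapped counterfactual; larger values suggest more bias."""
    gaps = [abs(score_fn(s) - score_fn(swap_gender_terms(s))) for s in sentences]
    return sum(gaps) / len(gaps)

if __name__ == "__main__":
    # Toy scoring function standing in for a real model.
    def toy_score(sentence: str) -> float:
        words = sentence.lower().split()
        return 1.0 if "engineer" in words and "he" in words else 0.5

    examples = ["he is an engineer", "she is a nurse"]
    print(f"Counterfactual gap: {counterfactual_gap(examples, toy_score):.3f}")
```

The same pattern underlies counterfactual data augmentation for mitigation: instead of only measuring the gap, the swapped sentences are added to the training data so the model sees both variants.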

Papers