Topic Bias
Topic bias in artificial intelligence (AI) refers to systematic errors in AI models that stem from skewed training data or algorithmic design and lead to unfair or inaccurate outputs for certain groups or topics. Current research focuses on identifying and mitigating these biases across a range of AI applications, including large language models (LLMs), image recognition systems, and models used in healthcare and the social sciences, using techniques such as adversarial training, fairness-aware algorithms, and bias detection frameworks. Understanding and addressing topic bias is crucial for the fair, reliable, and ethical deployment of AI systems across diverse contexts, shaping both the development of more robust models and the equitable application of AI-driven technologies in society.
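To make the bias detection idea concrete, the sketch below compares a classifier's accuracy across topic groups and flags large gaps, which is one of the simplest disparity checks used in practice. The group labels, threshold, and data are hypothetical placeholders for illustration and are not drawn from the papers listed below.

```python
# Minimal sketch of a bias detection check: compare a classifier's accuracy
# across topic groups and flag large disparities. Group labels, threshold,
# and data are hypothetical, not taken from any specific paper or framework.
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Return per-group accuracy for each unique group label."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        g: float((y_pred[groups == g] == y_true[groups == g]).mean())
        for g in np.unique(groups)
    }

def flag_disparity(y_true, y_pred, groups, max_gap=0.10):
    """Flag if the gap between best- and worst-served groups exceeds max_gap."""
    scores = accuracy_by_group(y_true, y_pred, groups)
    gap = max(scores.values()) - min(scores.values())
    return gap > max_gap, gap, scores

# Hypothetical example: predictions on texts tagged with two topic groups.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["stigmatizing", "neutral", "stigmatizing", "stigmatizing",
          "neutral", "stigmatizing", "neutral", "neutral"]

biased, gap, scores = flag_disparity(y_true, y_pred, groups)
print(scores, f"gap={gap:.2f}",
      "disparity flagged" if biased else "within tolerance")
```

The same pattern generalizes to other metrics (false positive rate, calibration error) by swapping the per-group statistic; the 0.10 gap threshold here is an arbitrary illustrative choice.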
Papers
Echoes of Biases: How Stigmatizing Language Affects AI Performance
Yizhi Liu, Weiguang Wang, Guodong Gordon Gao, Ritu Agarwal
"I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation
Anaelia Ovalle, Palash Goyal, Jwala Dhamala, Zachary Jaggers, Kai-Wei Chang, Aram Galstyan, Richard Zemel, Rahul Gupta