Absolute Stance Bias
Absolute stance bias refers to systematic errors in machine learning models that stem from skewed training data or algorithmic design, leading to unfair or inaccurate predictions across different groups or contexts. Current research focuses on quantifying and mitigating these biases in a range of models, including large language models (LLMs), machine translation systems, and image recognition algorithms. Common techniques include counterfactual fairness, reinforcement learning, and bias-aware evaluation metrics. Understanding and addressing absolute stance bias is crucial for ensuring fairness, reliability, and trustworthiness in AI systems across diverse applications, from healthcare and finance to social media and education.
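As an illustration of what a bias-aware evaluation metric can look like in practice, the sketch below computes demographic parity difference, the gap in positive-prediction rates between two groups. This is a generic example, not a method from any of the papers listed; the function name and data are illustrative.

```python
# Hypothetical sketch of a simple bias-aware evaluation metric:
# demographic parity difference between exactly two groups.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels, one per prediction (two distinct labels)
    """
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds_g) / len(preds_g)  # positive rate within group g
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# Toy data: group "A" receives positives at 0.75, group "B" at 0.25.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # -> 0.5
```

A value of 0 would indicate both groups receive positive predictions at the same rate; larger values flag a disparity worth investigating.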
Papers
Holistic Analysis of Hallucination in GPT-4V(ision): Bias and Interference Challenges
Chenhang Cui, Yiyang Zhou, Xinyu Yang, Shirley Wu, Linjun Zhang, James Zou, Huaxiu Yao
An AI-Guided Data Centric Strategy to Detect and Mitigate Biases in Healthcare Datasets
Faris F. Gulamali, Ashwin S. Sawant, Lora Liharska, Carol R. Horowitz, Lili Chan, Patricia H. Kovatch, Ira Hofer, Karandeep Singh, Lynne D. Richardson, Emmanuel Mensah, Alexander W Charney, David L. Reich, Jianying Hu, Girish N. Nadkarni
Multi-EuP: The Multilingual European Parliament Dataset for Analysis of Bias in Information Retrieval
Jinrui Yang, Timothy Baldwin, Trevor Cohn
Towards objective and systematic evaluation of bias in medical imaging AI
Emma A. M. Stanley, Raissa Souza, Anthony Winder, Vedant Gulve, Kimberly Amador, Matthias Wilms, Nils D. Forkert