Absolute Stance Bias
Absolute stance bias refers to systematic errors in machine learning models that stem from skewed training data or algorithmic design, producing unfair or inaccurate predictions across different groups or contexts. Current research focuses on quantifying and mitigating these biases in a range of models, including large language models (LLMs), machine translation systems, and image recognition algorithms, often using techniques such as counterfactual fairness, reinforcement learning, and bias-aware evaluation metrics. Understanding and addressing absolute stance bias is crucial for ensuring fairness, reliability, and trustworthiness in AI systems across diverse applications, from healthcare and finance to social media and education.
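To make the idea of a bias-aware evaluation metric concrete, here is a minimal sketch of one common choice, the demographic parity difference: the largest gap in positive-prediction rates between groups. The function name, data, and group labels are illustrative assumptions, not drawn from any of the papers listed below.

```python
# Illustrative sketch of a bias-aware evaluation metric (demographic
# parity difference). All names and data are hypothetical examples.

from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Max absolute gap in positive-prediction rates across groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels of the same length
    """
    totals = defaultdict(int)     # number of examples per group
    positives = defaultdict(int)  # number of positive predictions per group
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example: a classifier that approves 75% of group A but only 25% of group B
# shows a parity gap of 0.5; a perfectly balanced classifier would score 0.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # → 0.5
```

A gap near zero indicates similar positive rates across groups; larger values flag the kind of group-dependent behavior the papers below aim to detect and mitigate.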
Papers
Evaluating Pre-Training Bias on Severe Acute Respiratory Syndrome Dataset
Diego Dimer Rodrigues
Ensuring Equitable Financial Decisions: Leveraging Counterfactual Fairness and Deep Learning for Bias
Saish Shinde
From Bias to Balance: Detecting Facial Expression Recognition Biases in Large Multimodal Foundation Models
Kaylee Chhua, Zhoujinyi Wen, Vedant Hathalia, Kevin Zhu, Sean O'Brien
Reasoning Beyond Bias: A Study on Counterfactual Prompting and Chain of Thought Reasoning
Kyle Moore, Jesse Roberts, Thao Pham, Douglas Fisher
The Power of Bias: Optimizing Client Selection in Federated Learning with Heterogeneous Differential Privacy
Jiating Ma, Yipeng Zhou, Qi Li, Quan Z. Sheng, Laizhong Cui, Jiangchuan Liu
Understanding the Interplay of Scale, Data, and Bias in Language Models: A Case Study with BERT
Muhammad Ali, Swetasudha Panda, Qinlan Shen, Michael Wick, Ari Kobren
Exploring Bengali Religious Dialect Biases in Large Language Models with Evaluation Perspectives
Azmine Toushik Wasi, Raima Islam, Mst Rafia Islam, Taki Hasan Rafi, Dong-Kyu Chae