Absolute Stance Bias
Absolute stance bias refers to systematic errors in machine learning models stemming from skewed training data or algorithmic design, leading to unfair or inaccurate predictions across different groups or contexts. Current research focuses on quantifying and mitigating these biases in various models, including large language models (LLMs), machine translation systems, and image recognition algorithms, often employing techniques like counterfactual fairness, reinforcement learning, and bias-aware evaluation metrics. Understanding and addressing absolute stance bias is crucial for ensuring fairness, reliability, and trustworthiness in AI systems across diverse applications, from healthcare and finance to social media and education.
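As a concrete illustration of the "bias-aware evaluation metrics" mentioned above, the sketch below computes statistical parity difference, the gap in positive-prediction rates between two groups. The data, group labels, and function name are illustrative assumptions for this page, not taken from any of the papers listed below.

```python
# Minimal sketch of a bias-aware evaluation metric: statistical parity
# difference between two groups. Inputs are toy values chosen for
# illustration only.
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between group 1 and group 0."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_g1 = y_pred[group == 1].mean()  # P(y_hat = 1 | group = 1)
    rate_g0 = y_pred[group == 0].mean()  # P(y_hat = 1 | group = 0)
    return rate_g1 - rate_g0

# Toy classifier outputs over two demographic groups.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group  = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.50
```

A value near zero indicates similar prediction rates across groups; large positive or negative values flag a disparity worth investigating, which is the kind of quantification the papers below study in more depth.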
Papers
Quantifying Feature Contributions to Overall Disparity Using Information Theory
Sanghamitra Dutta, Praveen Venkatesh, Pulkit Grover
Definition drives design: Disability models and mechanisms of bias in AI technologies
Denis Newman-Griffis, Jessica Sage Rauchberg, Rahaf Alharbi, Louise Hickman, Harry Hochheiser
De-biasing "bias" measurement
Kristian Lum, Yunfeng Zhang, Amanda Bower
Bias and Fairness on Multimodal Emotion Detection Algorithms
Matheus Schmitz, Rehan Ahmed, Jimi Cao
Process, Bias and Temperature Scalable CMOS Analog Computing Circuits for Machine Learning
Pratik Kumar, Ankita Nandi, Shantanu Chakrabartty, Chetan Singh Thakur
Towards an Enhanced Understanding of Bias in Pre-trained Neural Language Models: A Survey with Special Emphasis on Affective Bias
Anoop K., Manjary P. Gangan, Deepak P., Lajish V. L.
An Examination of Bias of Facial Analysis based BMI Prediction Models
Hera Siddiqui, Ajita Rattani, Karl Ricanek, Twyla Hill