Absolute Stance Bias
Absolute stance bias refers to systematic errors in machine learning models that stem from skewed training data or algorithmic design choices, leading to unfair or inaccurate predictions across different groups or contexts. Current research focuses on quantifying and mitigating these biases in a range of models, including large language models (LLMs), machine translation systems, and image recognition algorithms, often employing techniques such as counterfactual fairness, reinforcement learning, and bias-aware evaluation metrics. Understanding and addressing absolute stance bias is crucial for ensuring fairness, reliability, and trustworthiness in AI systems across diverse applications, from healthcare and finance to social media and education.
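To make "bias-aware evaluation metrics" concrete, here is a minimal sketch of one widely used metric, demographic parity difference: the gap in positive-prediction rates between groups. This is an illustrative example only, not a metric from any of the papers listed below, and all names in the code are hypothetical.

```python
# Minimal sketch of a bias-aware evaluation metric: demographic parity
# difference, i.e. the gap between the highest and lowest
# positive-prediction rates across groups. Names are illustrative.

def demographic_parity_difference(predictions, groups):
    """Return the absolute gap in positive-prediction rates across groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    counts = {}  # group -> (total, positives)
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = {g: pos / total for g, (total, pos) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Example: group "a" receives positive predictions 2/3 of the time,
# group "b" only 1/3 of the time, so the gap is 1/3.
preds  = [1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)  # ≈ 0.333
```

A gap of 0 would mean both groups receive positive predictions at the same rate; larger values flag a disparity worth investigating, though this single number says nothing about why the disparity arises.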
Papers
Towards Detecting Cascades of Biased Medical Claims on Twitter
Libby Tiderman, Juan Sanchez Mercedes, Fiona Romanoschi, Fabricio Murai
A debiasing technique for place-based algorithmic patrol management
Alexander Einarsson, Simen Oestmo, Lester Wollman, Duncan Purves, Ryan Jenkins
DSAP: Analyzing Bias Through Demographic Comparison of Datasets
Iris Dominguez-Catena, Daniel Paternain, Mikel Galar
SocialStigmaQA: A Benchmark to Uncover Stigma Amplification in Generative Language Models
Manish Nagireddy, Lamogha Chiazor, Moninder Singh, Ioana Baldini
On the notion of Hallucinations from the lens of Bias and Validity in Synthetic CXR Images
Gauri Bhardwaj, Yuvaraj Govindarajulu, Sundaraparipurnan Narayanan, Pavan Kulkarni, Manojkumar Parmar