Bias Evaluation
Bias evaluation in machine learning focuses on identifying and quantifying unfair biases in model outputs, with the goal of promoting fairness and mitigating discriminatory outcomes. Current research emphasizes developing new metrics and benchmarks to assess bias across diverse model architectures, including large language models and computer vision systems, often employing techniques such as counterfactual analysis and probing to detect subtle biases. This work is crucial for the responsible development and deployment of AI systems in fields ranging from healthcare and criminal justice to social media and autonomous driving, where biased algorithms can have significant societal consequences.
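The counterfactual analysis mentioned above is straightforward to prototype: perturb a protected attribute in otherwise-identical inputs and measure how much the model's output shifts. The sketch below is a minimal illustration of that general idea, not the method of any paper listed here; the templates, attribute pairs, and `score_fn` interface are illustrative assumptions, and the dummy scorer exists only so the example runs standalone.

```python
# Minimal sketch of counterfactual bias evaluation: swap a demographic
# term in otherwise-identical prompts and compare a model's scores.
# `score_fn` stands in for any model under test (e.g., a sentiment
# classifier or an LLM scoring function); templates and attribute
# pairs below are illustrative, not drawn from a specific benchmark.
from statistics import mean
from typing import Callable

TEMPLATES = [
    "The {group} applicant was described as highly qualified.",
    "The {group} patient reported moderate chest pain.",
]

COUNTERFACTUAL_PAIRS = [("male", "female"), ("young", "elderly")]

def counterfactual_gap(score_fn: Callable[[str], float]) -> float:
    """Mean absolute score difference across counterfactual prompt pairs.

    A gap near zero suggests the model's scores are insensitive to the
    swapped attribute; larger gaps flag prompts worth closer inspection.
    """
    gaps = []
    for template in TEMPLATES:
        for a, b in COUNTERFACTUAL_PAIRS:
            gaps.append(abs(score_fn(template.format(group=a))
                            - score_fn(template.format(group=b))))
    return mean(gaps)

if __name__ == "__main__":
    # Dummy scorer so the sketch runs standalone; replace with a real model.
    dummy = lambda text: float(len(text) % 7) / 7.0
    print(f"mean counterfactual gap: {counterfactual_gap(dummy):.3f}")
```

Aggregating to a single mean gap is the simplest possible summary; per-template or per-pair breakdowns, as the metrics proposed in the papers below suggest, are usually more informative for diagnosing where a model's behavior diverges.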
Papers
Bias patterns in the application of LLMs for clinical decision support: A comprehensive study
Raphael Poulain, Hamed Fayyaz, Rahmatollah Beheshti
Sum of Group Error Differences: A Critical Examination of Bias Evaluation in Biometric Verification and a Dual-Metric Measure
Alaa Elobaid, Nathan Ramoly, Lara Younes, Symeon Papadopoulos, Eirini Ntoutsi, Ioannis Kompatsiaris
GPTBIAS: A Comprehensive Framework for Evaluating Bias in Large Language Models
Jiaxu Zhao, Meng Fang, Shirui Pan, Wenpeng Yin, Mykola Pechenizkiy
Attribute Annotation and Bias Evaluation in Visual Datasets for Autonomous Driving
David Fernández Llorca, Pedro Frau, Ignacio Parra, Rubén Izquierdo, Emilia Gómez