Bias Evaluation
Bias evaluation in machine learning identifies and quantifies unfair biases in model outputs, with the aim of promoting fairness and mitigating discriminatory outcomes. Current research emphasizes new metrics and benchmarks for assessing bias across diverse model architectures, including large language models and computer vision systems, often using techniques such as counterfactual analysis and probing to detect subtle biases. This work is crucial for the responsible development and deployment of AI systems in fields ranging from healthcare and criminal justice to social media and autonomous driving, where biased algorithms can have significant societal consequences.
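The counterfactual analysis mentioned above can be sketched in a few lines: swap terms tied to a protected attribute (here, gender) in an input, re-score it, and treat a large score gap as a bias signal. This is a minimal illustration only; the lexicon "model", the swap list, and every function name below are hypothetical stand-ins, not any specific system's API.

```python
import re

# Deliberately biased toy "model": a lexicon that scores gendered words
# differently. All names here are illustrative assumptions.
LEXICON = {"brilliant": 1.0, "competent": 0.8, "he": 0.1, "she": -0.1}

# Counterfactual term pairs for the protected attribute (gender).
SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "man": "woman", "woman": "man"}

def score(text):
    """Stand-in for a model score: sum of lexicon weights of the words."""
    return sum(LEXICON.get(w, 0.0) for w in re.findall(r"[a-z]+", text.lower()))

def counterfactual(text):
    """Produce the counterfactual input by swapping gendered terms."""
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, lambda m: SWAPS[m.group(1).lower()], text,
                  flags=re.IGNORECASE)

def counterfactual_gap(text):
    """Score difference between an input and its counterfactual.

    A gap near zero means the toy model treats the swapped terms
    symmetrically; a large gap flags a potential bias to investigate.
    """
    return score(text) - score(counterfactual(text))

print(counterfactual_gap("He is brilliant"))     # positive: toy model favors "he"
print(counterfactual_gap("They are competent"))  # no gendered terms, so the gap is 0
```

In practice, `score` would be a real model (a sentiment classifier, a toxicity scorer, or a language model's likelihood), and the gap would be aggregated over a benchmark of templated sentences rather than judged on single examples.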
Papers
Less can be more: representational vs. stereotypical gender bias in facial expression recognition
Iris Dominguez-Catena, Daniel Paternain, Aranzazu Jurio, Mikel Galar
Beyond Silence: Bias Analysis through Loss and Asymmetric Approach in Audio Anti-Spoofing
Hye-jin Shim, Md Sahidullah, Jee-weon Jung, Shinji Watanabe, Tomi Kinnunen