Fairness Testing
Fairness testing aims to identify and quantify bias in artificial intelligence systems, focusing on how those systems treat different demographic groups. A common target is the individual discrimination instance: a pair of inputs that differ only in a protected attribute (such as gender or race) yet receive different predictions. Current research emphasizes automated methods, often search-based algorithms such as genetic algorithms and particle swarm optimization, that efficiently discover such discriminatory behavior across model architectures, including deep learning models for image recognition and recommender systems. This work is crucial for deploying AI ethically and responsibly, mitigating potential harms, and promoting fairness across diverse applications.
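To make the search-based idea concrete, below is a minimal sketch of a genetic search for individual discrimination instances. It is illustrative rather than taken from any particular paper: it assumes a scikit-learn-style binary classifier exposing `predict` and `predict_proba`, tabular NumPy inputs, and hypothetical names such as `genetic_fairness_search`. The fitness function steers the population toward the model's decision boundary, where flipping the protected attribute is most likely to change the prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

def is_discriminatory(model, x, protected_idx, protected_values):
    # An input is an individual discrimination instance if changing only
    # its protected attribute changes the model's prediction.
    base = model.predict(x.reshape(1, -1))[0]
    for v in protected_values:
        if v == x[protected_idx]:
            continue
        x_alt = x.copy()
        x_alt[protected_idx] = v
        if model.predict(x_alt.reshape(1, -1))[0] != base:
            return True
    return False

def genetic_fairness_search(model, bounds, protected_idx, protected_values,
                            pop_size=50, generations=100, mutation_scale=0.1):
    # bounds: array of shape (n_features, 2) giving [min, max] per feature.
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    found = []
    for _ in range(generations):
        # Record any discriminatory individuals in the current population.
        for x in pop:
            if is_discriminatory(model, x, protected_idx, protected_values):
                found.append(x.copy())
        # Fitness: closeness to the decision boundary (small |p - 0.5|),
        # since borderline inputs flip most easily under attribute changes.
        probs = model.predict_proba(pop)[:, 1]
        fitness = -np.abs(probs - 0.5)
        # Selection and mutation: keep the fitter half, refill with
        # Gaussian-perturbed copies clipped back into the valid domain.
        keep = pop[np.argsort(fitness)[-(pop_size // 2):]]
        children = keep + rng.normal(0.0, mutation_scale * (hi - lo), keep.shape)
        pop = np.clip(np.vstack([keep, children]), lo, hi)
    return found
```

Each element of `found` is a concrete counterexample that can be reported or used for retraining; published tools add richer crossover and mutation operators to raise the discovery rate. The key design choice, shared by the search-based methods above, is steering generation toward the decision boundary rather than sampling uniformly, which is what makes these approaches more efficient than purely random testing.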