Fairness Testing

Fairness testing aims to identify and quantify bias in artificial intelligence systems, with particular focus on how those systems treat different demographic groups. Current research emphasizes automated methods, often built on search-based algorithms such as genetic algorithms and particle swarm optimization, that efficiently discover discriminatory behavior across model architectures ranging from deep image-recognition models to recommender systems. This work is crucial for the ethical and responsible deployment of AI, mitigating potential harms and promoting fairness across diverse applications.
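
To make the search-based approach concrete, the sketch below uses a plain genetic algorithm to hunt for individual discrimination instances: pairs of inputs that differ only in a protected attribute yet receive different predictions, a common test target in this literature. Everything in the sketch is illustrative; `model_predict`, the feature layout, and the GA parameters are hypothetical stand-ins, and a real test would query the actual model under test.

```python
# A minimal sketch of search-based fairness testing: a genetic algorithm
# searching for individual discrimination instances, i.e. inputs whose
# prediction flips when only the protected attribute is changed.
import numpy as np

rng = np.random.default_rng(seed=0)

N_FEATURES = 5     # hypothetical feature count
PROTECTED_IDX = 0  # hypothetical index of a binary protected attribute
POP_SIZE = 40
GENERATIONS = 50
MUTATION_STD = 0.3


def model_predict(x: np.ndarray) -> int:
    """Hypothetical stand-in for the model under test (binary classifier).
    It deliberately leaks the protected attribute so the search has
    something to find; a real test would call the deployed model."""
    score = 0.8 * x[1] - 0.5 * x[2] + 0.6 * x[PROTECTED_IDX] * x[3]
    return int(score > 0.0)


def flip_protected(x: np.ndarray) -> np.ndarray:
    """Return a copy of x that differs only in the protected attribute."""
    twin = x.copy()
    twin[PROTECTED_IDX] = 1.0 - twin[PROTECTED_IDX]
    return twin


def fitness(x: np.ndarray) -> int:
    """1 when the input and its protected-attribute 'twin' disagree."""
    return abs(model_predict(x) - model_predict(flip_protected(x)))


def random_individual() -> np.ndarray:
    x = rng.normal(size=N_FEATURES)
    x[PROTECTED_IDX] = float(rng.integers(0, 2))  # protected attr is binary
    return x


population = [random_individual() for _ in range(POP_SIZE)]
found = set()  # deduplicated discriminatory inputs, rounded for hashing

for _ in range(GENERATIONS):
    for x in population:
        if fitness(x) > 0:
            found.add(tuple(np.round(x, 3)))
    # Truncation selection: keep the fitter half of the population.
    parents = sorted(population, key=fitness, reverse=True)[: POP_SIZE // 2]
    children = []
    while len(parents) + len(children) < POP_SIZE:
        i, j = rng.choice(len(parents), size=2, replace=False)
        mask = rng.random(N_FEATURES) < 0.5            # uniform crossover
        child = np.where(mask, parents[i], parents[j])
        child += rng.normal(scale=MUTATION_STD, size=N_FEATURES)  # mutation
        child[PROTECTED_IDX] = float(rng.integers(0, 2))
        children.append(child)
    population = parents + children

print(f"distinct discriminatory instances found: {len(found)}")
```

Published tools in this space (e.g., AEQUITAS-style approaches) refine the same loop with smarter seeding and local search around each discovered instance; the GA above is only the simplest version of the idea.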

Papers