Fairness Perception

Fairness perception in AI focuses on understanding and mitigating biases in algorithmic decisions and on how those decisions affect different user groups. Current research emphasizes methods for detecting and correcting bias, often using techniques such as optimal transport, Bayesian approaches, and adversarial training across a range of model architectures, including deep learning and graph neural networks. This work is central to building trustworthy AI systems, addressing ethical concerns, and ensuring equitable outcomes across diverse populations in applications ranging from healthcare and criminal justice to social media moderation and recommender systems. Key open challenges include achieving fairness that generalizes across datasets and contexts, and understanding how explanations of AI decisions shape users' fairness perceptions and their reliance on AI recommendations.
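
As a rough illustration of what "detecting bias" means in practice, the sketch below computes two common group-fairness metrics for a binary classifier: the demographic parity difference (gap in positive-prediction rates between groups) and an equalized-odds gap (largest disparity in true- or false-positive rates). The synthetic data, the simulated classifier, and all variable names are hypothetical and are not taken from any of the papers listed here.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rate across groups."""
    gaps = []
    for target in (0, 1):  # FPR when target == 0, TPR when target == 1
        mask = y_true == target
        rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Hypothetical toy data: binary labels and a binary protected attribute.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)
# Simulate a classifier that predicts the positive class slightly more
# often for group 1, so the metrics report a nonzero disparity.
y_pred = (rng.random(1000) < 0.5 + 0.1 * group).astype(int)

print("Demographic parity difference:", demographic_parity_difference(y_pred, group))
print("Equalized odds gap:", equalized_odds_gap(y_true, y_pred, group))
```

Metrics like these are typically the starting point: a mitigation method (e.g., adversarial training or an optimal-transport-based correction) is then evaluated by how much it reduces such gaps without degrading overall accuracy.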

Papers