Fairness Analysis

Fairness analysis in machine learning aims to identify and mitigate algorithmic biases that lead to unfair or discriminatory outcomes across demographic groups. Current research focuses on developing and applying fairness metrics, exploring bias mitigation techniques within various model architectures (including deep learning, graph neural networks, and foundation models), and investigating the impact of data imbalances and missing sensitive attributes. This work is crucial for ensuring equitable outcomes in high-stakes applications such as healthcare, finance, and autonomous systems, and for advancing more responsible and ethical AI.
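
To make the fairness metrics mentioned above concrete, here is a minimal sketch (not drawn from any specific paper listed below) of two commonly used group fairness measures, demographic parity difference and equalized odds difference, computed with plain NumPy. The array names `y_true`, `y_pred`, and `group` are hypothetical placeholders for ground-truth labels, binary predictions, and a binary sensitive attribute.

```python
# Illustrative sketch of two group fairness metrics; assumes binary labels,
# binary predictions, and a binary sensitive attribute.
import numpy as np


def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between the two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)


def equalized_odds_difference(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rates across groups."""
    gaps = []
    for label in (0, 1):  # label == 0 compares FPRs, label == 1 compares TPRs
        mask = y_true == label
        rate_a = y_pred[mask & (group == 0)].mean()
        rate_b = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate_a - rate_b))
    return max(gaps)


# Toy usage with random data (hypothetical, for illustration only).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
print(demographic_parity_difference(y_pred, group))
print(equalized_odds_difference(y_true, y_pred, group))
```

Both metrics are zero for a perfectly group-balanced predictor; bias mitigation techniques typically aim to drive such gaps toward zero while preserving accuracy.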

Papers