Fairness-Aware Training

Fairness-aware training aims to mitigate biases in machine learning models, ensuring equitable outcomes across demographic groups. Current research focuses on algorithms and frameworks that integrate fairness considerations directly into model training (in-processing), through techniques such as re-weighting samples, adding fairness penalties to the loss function, and co-optimizing data, algorithms, and neural network architectures. This work is crucial for addressing societal biases embedded in training data and for promoting the responsible development and deployment of AI systems, particularly in sensitive domains such as healthcare and criminal justice. The ultimate goal is to improve model accuracy and fairness simultaneously, a challenge pursued through both theoretical analysis and practical implementations.
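As a concrete illustration of the loss-penalty style of in-processing mentioned above, the sketch below trains a logistic regression whose loss adds a demographic-parity penalty, `lam * |mean(p | s=1) - mean(p | s=0)|`, to the usual cross-entropy. This is a minimal, illustrative formulation, not the method of any particular paper; the function names, synthetic data, and penalty weight `lam` are all assumptions made for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, s, lam=0.0, lr=0.1, epochs=800):
    """Full-batch logistic regression with an optional demographic-parity
    penalty: loss = BCE + lam * |mean(p | s=1) - mean(p | s=0)|.
    Illustrative sketch only; s is a binary sensitive attribute."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    g0, g1 = (s == 0), (s == 1)
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        grad_z = (p - y) / n                  # gradient of mean BCE w.r.t. logits
        if lam > 0:
            gap = p[g1].mean() - p[g0].mean()  # signed demographic-parity gap
            pen = np.zeros(n)
            # d|gap|/dz_i = sign(gap) * (+1 or -1) * p_i(1-p_i) / group size
            pen[g1] = np.sign(gap) * p[g1] * (1 - p[g1]) / g1.sum()
            pen[g0] = -np.sign(gap) * p[g0] * (1 - p[g0]) / g0.sum()
            grad_z += lam * pen               # gradient of the penalty term
        w -= lr * (X.T @ grad_z)
        b -= lr * grad_z.sum()
    return w, b

# Synthetic data in which one feature is correlated with the
# sensitive attribute s, so an unconstrained model inherits the bias.
rng = np.random.default_rng(42)
n = 2000
s = rng.integers(0, 2, n)
x1 = rng.normal(1.5 * s, 1.0)
x2 = rng.normal(0.0, 1.0, n)
X = np.column_stack([x1, x2])
y = (rng.random(n) < sigmoid(1.2 * x1 + 0.5 * x2 - 0.8)).astype(float)

def dp_gap(w, b):
    """Absolute difference in mean predicted probability between groups."""
    p = sigmoid(X @ w + b)
    return abs(p[s == 1].mean() - p[s == 0].mean())

gap_base = dp_gap(*train_logreg(X, y, s, lam=0.0))  # unconstrained baseline
gap_fair = dp_gap(*train_logreg(X, y, s, lam=5.0))  # penalized model
```

On this synthetic data the penalized model trades a little accuracy for a smaller gap between groups, which is exactly the accuracy-fairness tension the paragraph above describes; re-weighting approaches instead scale each sample's contribution to `grad_z` by a group-dependent weight.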

Papers