Discriminatory Decision

Discriminatory decision-making in machine learning (ML) systems is a critical research area focused on identifying and mitigating bias that leads to unfair outcomes for certain demographic groups. Current research emphasizes measuring and reducing bias in datasets with multiple protected attributes, applying approaches such as the FairDo framework and knowledge distillation in graph neural networks, and incorporating human oversight through explanation-guided interventions. This work is crucial for ensuring fairness and accountability in high-stakes ML applications across sectors such as finance, healthcare, and criminal justice, and for informing the development of effective regulations and policies.
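As a minimal illustration of measuring bias in a dataset with multiple protected attributes, the sketch below computes demographic parity gaps (differences in positive-outcome rates) per attribute and over their intersection. The column names and data are hypothetical, and this is only one of many fairness metrics used in the literature, not the method of any particular paper.

```python
# Minimal sketch: demographic parity gaps for multiple protected attributes.
# Data and column names ("gender", "age_group", "approved") are hypothetical.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, protected: str, outcome: str) -> float:
    """Largest difference in positive-outcome rates between groups of one attribute."""
    rates = df.groupby(protected)[outcome].mean()
    return float(rates.max() - rates.min())

# Hypothetical decision data with two protected attributes and a binary outcome.
df = pd.DataFrame({
    "gender":    ["f", "m", "f", "m", "f", "m", "f", "m"],
    "age_group": ["young", "young", "old", "old", "young", "old", "old", "young"],
    "approved":  [1, 1, 0, 1, 0, 1, 0, 1],
})

# Gap for each protected attribute taken separately.
for attr in ["gender", "age_group"]:
    print(attr, demographic_parity_gap(df, attr, "approved"))

# Intersectional gap over the joint groups of both attributes.
df["joint"] = df["gender"] + "_" + df["age_group"]
print("intersection", demographic_parity_gap(df, "joint", "approved"))
```

A gap of 0 means every group receives positive outcomes at the same rate; larger gaps indicate stronger disparity, and the intersectional check can reveal bias that per-attribute measurements miss.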

Papers