ML Fairness

Machine learning (ML) fairness research aims to mitigate biases in algorithms and datasets that lead to discriminatory outcomes for certain demographic groups. Current work focuses on detecting and mitigating bias across data modalities (text, images, time series) with techniques such as data augmentation and label-noise correction, and on using zero-knowledge proofs to verify the fairness of deployed models. The field is crucial for ensuring that ML is applied equitably across diverse populations and for building trust in AI systems; it shapes both the ethical development of AI and its practical deployment in sensitive domains such as healthcare and criminal justice.
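As a concrete illustration of bias detection (a generic sketch, not a method from any specific paper in this collection), one of the simplest fairness checks is demographic parity: comparing the rate of positive predictions a model gives to each group. The function name and toy data below are hypothetical.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    0 means both groups receive positive predictions at the same rate;
    larger values indicate greater disparity.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Toy example: group 0 is approved 3/4 of the time, group 1 only 1/4.
preds = [1, 1, 1, 0, 0, 0, 0, 1]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Metrics like this are the starting point for the mitigation techniques mentioned above: one first measures the disparity, then applies interventions (e.g., data augmentation or label-noise correction) and re-measures.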

Papers