Fairness Methods

Fairness methods in machine learning aim to mitigate algorithmic biases that lead to discriminatory outcomes across demographic groups. Current research focuses on techniques that achieve fairness while preserving accuracy. The main families of approaches are transforming the data before training (pre-processing), modifying the training objective or procedure (in-processing), and adjusting a trained model's predictions (post-processing); these are often studied within specific model architectures such as diffusion models, or through optimization frameworks like optimal transport. Such advances are crucial for the ethical and equitable use of AI in high-stakes applications like healthcare, loan approval, and criminal justice, shaping both the scientific understanding of algorithmic bias and the development of fairer AI systems.
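To make the post-processing family concrete, here is a minimal sketch of one common variant: choosing a separate decision threshold per demographic group so that positive-prediction rates are equalized (a demographic-parity-style adjustment). The function name, the synthetic data, and the target rate are illustrative assumptions, not from any specific paper above.

```python
import numpy as np

def demographic_parity_thresholds(scores, groups, target_rate):
    """Pick, per group, the score threshold whose positive-prediction rate
    is closest to a shared target rate (illustrative post-processing step)."""
    thresholds = {}
    for g in np.unique(groups):
        s = np.sort(scores[groups == g])
        # Candidate thresholds: midpoints between consecutive sorted scores,
        # plus the extremes 0 and 1.
        candidates = np.concatenate(([0.0], (s[:-1] + s[1:]) / 2, [1.0]))
        best_t, best_gap = 0.5, float("inf")
        for t in candidates:
            rate = np.mean(scores[groups == g] >= t)
            gap = abs(rate - target_rate)
            if gap < best_gap:
                best_t, best_gap = t, gap
        thresholds[g] = best_t
    return thresholds

# Synthetic scores where group "b" systematically receives lower scores,
# mimicking a biased scoring model.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.uniform(0.3, 1.0, 500),   # group "a"
                         rng.uniform(0.0, 0.7, 500)])  # group "b"
groups = np.array(["a"] * 500 + ["b"] * 500)

thr = demographic_parity_thresholds(scores, groups, target_rate=0.5)
rate_a = np.mean(scores[groups == "a"] >= thr["a"])
rate_b = np.mean(scores[groups == "b"] >= thr["b"])
```

A single shared threshold of 0.5 would accept most of group "a" and few of group "b"; the group-specific thresholds equalize the acceptance rates at the cost of treating the raw score differently across groups, which is exactly the accuracy-fairness trade-off the paragraph above describes.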

Papers