Fairness Estimation

Fairness estimation in AI focuses on developing methods to quantify and mitigate bias in machine learning models, particularly with respect to sensitive attributes such as race or gender. Current research emphasizes integrating privacy-preserving techniques, such as differential privacy, into fairness measurement and model training, often by employing generative models or adapting existing algorithms like decision trees. This work is crucial for responsible AI development, especially in high-stakes domains such as healthcare and finance, where fairness and data privacy are both paramount.
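As a minimal sketch of what "quantifying bias with respect to a sensitive attribute" can mean in practice, the snippet below computes the demographic parity difference: the gap in positive-prediction rates between two groups. The function name, the binary-group encoding, and the toy data are illustrative assumptions, not taken from any specific paper above.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between the two groups
    encoded in `sensitive` (0/1). Illustrative sketch; assumes binary
    predictions and a binary sensitive attribute."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()  # positive rate for group 0
    rate_b = y_pred[sensitive == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Toy example: group 0 is predicted positive 75% of the time, group 1 only 25%.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # → 0.5
```

A value of 0 indicates the model assigns positive outcomes at equal rates across groups; larger values indicate greater disparity under this particular fairness criterion. Other criteria (e.g., equalized odds) condition on the true label and can disagree with demographic parity on the same model.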

Papers