Fair Predictor
Fair predictor research aims to develop machine learning models that make accurate predictions while avoiding discriminatory outcomes across demographic groups. Current efforts focus on integrating uncertainty quantification into fairness metrics, incorporating causal reasoning into model training, and developing novel algorithms (e.g., those based on multivariate adaptive regression splines, ensemble post-processing, and convex optimization) that enforce fairness constraints while maintaining predictive accuracy. This work is crucial for mitigating bias in high-stakes decision-making across domains such as loan applications, education, and healthcare, and for promoting equitable and trustworthy AI systems.
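To make the notion of a fairness metric concrete, here is a minimal sketch of one widely used group-fairness measure, the demographic parity difference: the gap in positive-prediction rates between demographic groups. The function name and example data are illustrative, not from any paper discussed here; real evaluations would also account for uncertainty in these rate estimates.

```python
import numpy as np

def demographic_parity_difference(y_pred, groups):
    """Gap between the highest and lowest positive-prediction
    rates across groups. 0 means all groups receive positive
    predictions at the same rate; larger values mean disparity.
    """
    y_pred = np.asarray(y_pred, dtype=float)
    groups = np.asarray(groups)
    # Positive-prediction rate within each demographic group.
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Toy example: group "a" gets positives at rate 0.75, group "b" at 0.25.
preds = [1, 1, 1, 0, 0, 1, 0, 0]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, group))  # → 0.5
```

Post-processing approaches mentioned above typically adjust decision thresholds per group to drive a metric like this toward zero, trading off some accuracy in exchange.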