Biased Model
Biased models are a significant concern in machine learning: they arise when a model learns spurious correlations from biased training data, leading to unfair or inaccurate predictions on unseen data. Current research focuses on mitigating this bias through techniques such as sample re-weighting, ensemble methods that pair the main model with a secondary "bias-only" model, and improved uncertainty calibration within these ensembles. These efforts aim to improve model robustness and fairness, reducing discriminatory outcomes and enhancing the reliability of AI systems across diverse application domains.
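To make the bias-only ensemble idea concrete, below is a minimal sketch (not taken from any specific paper) of product-of-experts-style debiasing: a frozen bias-only model's log-probabilities are added to the main model's logits during training, so the main model is pushed to account for whatever the biased predictor cannot already explain. The model architectures, hyperparameters, and toy data are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_features, num_classes = 32, 3

main_model = nn.Linear(num_features, num_classes)
# In practice the bias-only model would be pretrained on known bias
# features (e.g., lexical overlap, surface cues); here it is a stand-in.
bias_model = nn.Linear(num_features, num_classes)
for p in bias_model.parameters():
    p.requires_grad = False  # the bias-only model stays frozen

optimizer = torch.optim.Adam(main_model.parameters(), lr=1e-3)

x = torch.randn(64, num_features)          # toy input batch
y = torch.randint(0, num_classes, (64,))   # toy labels

for _ in range(100):
    main_logits = main_model(x)
    with torch.no_grad():
        bias_log_probs = F.log_softmax(bias_model(x), dim=-1)
    # Product of experts: combine the two models in log space, then
    # train only the main model on the combined prediction.
    combined_logits = main_logits + bias_log_probs
    loss = F.cross_entropy(combined_logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At inference time only main_model(x) is used; the bias term is dropped,
# which is what discourages the main model from relying on the shortcut.
```

The design choice to combine the models only during training is what distinguishes this family of methods from standard ensembling: the bias-only model absorbs the spurious signal so the main model need not.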