Data Bias
Data bias, the presence of systematic errors in datasets that skew model outputs, is a critical concern across machine learning applications. Current research focuses on identifying and mitigating such bias through techniques including counterfactual examples that improve data quality, Wasserstein barycenters that support fairer risk assessment, and self-supervised adversarial training that strengthens model generalization. Addressing data bias is essential for fairness, accuracy, and trustworthiness in machine learning models, with impact on fields ranging from healthcare and finance to criminal justice and online security.
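As a concrete illustration of the first technique, the sketch below shows a minimal counterfactual data augmentation pass over a labeled text dataset. The word-pair mapping, example sentences, and function names are illustrative assumptions for this sketch, not the method of any particular paper.

```python
# Minimal sketch of counterfactual data augmentation for mitigating
# gender bias in text data. The word pairs below are a deliberately
# simplified, illustrative mapping (real systems handle grammar,
# casing, and many more attribute terms).

COUNTERFACTUAL_PAIRS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "man": "woman", "woman": "man",
}


def counterfactual(sentence: str) -> str:
    """Swap gendered tokens to produce a counterfactual copy of a sentence."""
    tokens = sentence.lower().split()
    swapped = [COUNTERFACTUAL_PAIRS.get(tok, tok) for tok in tokens]
    return " ".join(swapped)


def augment(dataset: list[tuple[str, int]]) -> list[tuple[str, int]]:
    """Add counterfactual copies with the original labels, so a model
    trained on the augmented data cannot rely on gendered terms alone."""
    augmented = list(dataset)
    for text, label in dataset:
        cf = counterfactual(text)
        if cf != text.lower():
            augmented.append((cf, label))
    return augmented


if __name__ == "__main__":
    # Hypothetical toy dataset: (sentence, label) pairs.
    data = [("She is a great engineer", 1), ("He missed the deadline", 0)]
    for text, label in augment(data):
        print(label, text)
```

The idea is simply to balance the dataset: every sentence mentioning one demographic group gains a counterpart mentioning the other, with the label unchanged, which weakens any spurious correlation between the protected attribute and the target.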