Distributional Robustness
Distributional robustness in machine learning aims to develop models that perform reliably across diverse data distributions, mitigating the impact of distribution shifts between training and deployment. Current research focuses on algorithms and optimization frameworks, such as those based on Wasserstein distances and minimax formulations, that enhance model robustness against adversarial attacks and subpopulation shifts, often incorporating techniques from Bayesian methods and causal inference. This field is crucial for building trustworthy and reliable AI systems, particularly in high-stakes applications where model performance must remain consistent across demographics and environmental conditions.
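To make the minimax idea concrete, here is a minimal sketch of distributionally robust training for logistic regression. It is an illustrative approximation only, not a method from any specific paper above: the inner maximization perturbs each input within an L2 ball of radius `eps` (a common first-order surrogate for a Wasserstein ambiguity set), and the outer minimization updates the weights on the perturbed batch. All names (`robust_logistic_step`, `eps`, `lr`) are hypothetical choices for this sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def robust_logistic_step(w, X, y, eps=0.1, lr=0.5):
    """One minimax step: inner max shifts each input within an L2 ball
    of radius eps (a first-order surrogate for Wasserstein-ball
    ambiguity sets); outer min takes a gradient step on the shifted batch."""
    # Inner maximization: move each x_i in the direction that most
    # increases its logistic loss (one-step adversarial perturbation).
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]            # d loss_i / d x_i
    norms = np.linalg.norm(grad_x, axis=1, keepdims=True) + 1e-12
    X_adv = X + eps * grad_x / norms                  # worst-case shift

    # Outer minimization: gradient step on the robust (perturbed) loss.
    p_adv = sigmoid(X_adv @ w)
    grad_w = X_adv.T @ (p_adv - y) / len(y)
    return w - lr * grad_w

# Toy linearly separable data, for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

w = np.zeros(2)
for _ in range(100):
    w = robust_logistic_step(w, X, y)

acc = np.mean((sigmoid(X @ w) > 0.5) == y)
```

Because the model is trained against the worst-case perturbation of each point, it is encouraged to keep a margin around the decision boundary, which is the intuition behind the equivalence between Wasserstein DRO and norm-based regularization in the linear case.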