Distributional Robustness
Distributional robustness in machine learning aims to develop models that perform reliably across diverse data distributions, mitigating the impact of distribution shifts between training and deployment. Current research focuses on algorithms and optimization frameworks, such as those based on Wasserstein distances and minimax formulations, that enhance robustness against adversarial attacks and subpopulation shifts, often incorporating techniques from Bayesian methods and causal inference. This work is crucial for building trustworthy and reliable AI systems, particularly in high-stakes applications where model performance must be consistent across demographics and environmental conditions.
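The minimax idea behind subpopulation-shift robustness can be sketched in a few lines: instead of minimizing the average training loss, minimize the worst loss over predefined groups (as in group distributionally robust optimization). The data, groups, and hyperparameters below are illustrative assumptions, not from any particular paper; the update takes a gradient step on whichever group currently has the highest loss, a subgradient of the max.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two illustrative subpopulations sharing one true linear model but
# differing in input scale (a simple subpopulation shift).
X0 = rng.normal(0.0, 1.0, size=(100, 2))
X1 = rng.normal(0.0, 3.0, size=(100, 2))
true_w = np.array([1.0, -2.0])
y0 = X0 @ true_w + rng.normal(0.0, 0.1, 100)
y1 = X1 @ true_w + rng.normal(0.0, 0.1, 100)
groups = [(X0, y0), (X1, y1)]

def group_losses(w):
    """Mean squared error of w on each group."""
    return np.array([np.mean((X @ w - y) ** 2) for X, y in groups])

# Minimax training: repeatedly step on the current worst group,
# which is a subgradient of the worst-group loss.
w = np.zeros(2)
lr = 0.01
for _ in range(500):
    losses = group_losses(w)
    X, y = groups[int(np.argmax(losses))]
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    w -= lr * grad

print(group_losses(w))  # both group losses driven down together
```

Plain empirical risk minimization would weight the larger-variance group only by its share of the data; the minimax rule instead allocates updates to whichever subpopulation is currently worst off, which is the property the overview paragraph refers to.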