Distribution Robustness
Distribution robustness in machine learning focuses on developing models that maintain high performance when encountering data differing from their training distribution. Current research emphasizes improving out-of-distribution (OOD) robustness through techniques like neural architecture search (NAS) to optimize for flatness in loss landscapes, parameter-efficient transfer learning methods for adapting large language models (LLMs), and generative models for data augmentation. This research is crucial for deploying reliable AI systems in real-world scenarios where data variability is inevitable, impacting fields ranging from medical diagnosis to natural language processing.
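One of the techniques mentioned above, optimizing for flatness in the loss landscape, is commonly realized via Sharpness-Aware Minimization (SAM): each update first ascends to an adversarially perturbed point within a small ball around the current weights, then descends using the gradient taken there. The sketch below is a minimal, self-contained illustration on a toy quadratic loss; the `loss`, `grad`, and `sam_step` names and the hyperparameters are illustrative choices, not any particular paper's implementation.

```python
import numpy as np

def loss(w):
    # Toy quadratic loss; a stand-in for a model's training loss.
    return 0.5 * np.sum(w ** 2)

def grad(w):
    # Gradient of the toy loss above.
    return w

def sam_step(w, lr=0.1, rho=0.05):
    """One Sharpness-Aware Minimization (SAM) step:
    ascend to the (approximate) worst-case point within an L2 ball
    of radius rho, then descend using the gradient taken there."""
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # first-order worst-case perturbation
    g_sharp = grad(w + eps)                      # gradient at the perturbed weights
    return w - lr * g_sharp                      # descend with the "sharpness-aware" gradient

w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w)
```

Because the descent direction is evaluated at the perturbed point, minima whose loss rises steeply nearby are penalized, biasing training toward flat regions that tend to generalize better under distribution shift.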