Distribution Generalization
Distribution generalization (often called out-of-distribution, or OOD, generalization) in machine learning focuses on developing models that maintain high performance when encountering data that differs significantly from their training data. Current research emphasizes techniques such as invariant learning, multicalibration, and ensembling, often applied within transformers, graph neural networks, and other architectures, to improve robustness against various distribution shifts (covariate, label, and concept shift). Addressing this challenge is crucial for deploying reliable machine learning systems in fields such as autonomous driving, medical diagnosis, and scientific discovery, where real-world data distributions are inherently complex and dynamic.
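
As one concrete instance of the invariant-learning techniques mentioned above, the sketch below shows an IRMv1-style training objective (in the spirit of Arjovsky et al., 2019): the usual empirical risk per environment plus a penalty that measures how far a shared classifier is from being simultaneously optimal across environments. This is a minimal illustration, not the method of any particular paper listed here; the names `irm_penalty`, `irm_objective`, the `environments` iterable, and the binary-classification setup are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    """IRMv1 penalty: squared gradient norm of the per-environment risk
    with respect to a fixed dummy classifier scale w = 1.0.

    labels: float tensor in {0, 1} with the same shape as logits.
    """
    scale = torch.ones(1, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, labels)
    # create_graph=True so the penalty itself is differentiable
    grad = torch.autograd.grad(loss, scale, create_graph=True)[0]
    return (grad ** 2).sum()

def irm_objective(model, environments, lam=1.0):
    """Average ERM risk plus a lambda-weighted IRM penalty over environments.

    environments: iterable of (x, y) batches, one per training environment.
    model: assumed to return shape (N, 1) logits for binary classification.
    """
    risks, penalties = [], []
    for x, y in environments:
        logits = model(x).squeeze(-1)
        risks.append(F.binary_cross_entropy_with_logits(logits, y))
        penalties.append(irm_penalty(logits, y))
    return torch.stack(risks).mean() + lam * torch.stack(penalties).mean()
```

In practice, the penalty weight `lam` is often kept small early in training and ramped up later, so the representation first fits the data and is then pushed toward cross-environment invariance.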