Distribution Generalization
Distribution generalization in machine learning focuses on developing models that maintain high performance when encountering data significantly different from their training data. Current research emphasizes techniques such as invariant learning, multicalibration, and ensemble methods, often applied within transformer, graph neural network, and other architectures, to improve robustness against distribution shifts: covariate shift, where the input distribution p(x) changes; label shift, where the label distribution p(y) changes; and concept shift, where the relationship p(y|x) itself changes. Successfully addressing this challenge is crucial for deploying reliable machine learning systems in real-world domains such as autonomous driving, medical diagnosis, and scientific discovery, where data distributions are inherently complex and dynamic.
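To make the failure mode concrete, the sketch below is a minimal NumPy toy (hypothetical data and setup, not drawn from any of the papers collected here). It constructs a spurious feature that tracks the label almost perfectly during training but becomes uninformative at test time, which is the kind of shift invariant learning is designed to survive: a model that leans on the spurious feature degrades under the shift, while one restricted to the invariant feature does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

def make_env(spurious_corr):
    """Toy environment: x0 is invariantly (but noisily) tied to y;
    x1 agrees with y with probability `spurious_corr` and has little noise."""
    y = rng.integers(0, 2, n).astype(float)
    x0 = y + rng.normal(0.0, 1.0, n)                            # invariant, noisy
    agree = rng.random(n) < spurious_corr
    x1 = np.where(agree, y, 1.0 - y) + rng.normal(0.0, 0.1, n)  # spurious, clean
    return np.column_stack([x0, x1]), y

x_tr, y_tr = make_env(0.95)   # training: spurious feature aligned with label
x_te, y_te = make_env(0.50)   # shifted test: spurious correlation removed

def fit(cols):
    """Least-squares linear classifier (with intercept) on selected columns."""
    X = np.column_stack([x_tr[:, cols], np.ones(n)])
    w, *_ = np.linalg.lstsq(X, y_tr, rcond=None)
    return lambda x: np.column_stack([x[:, cols], np.ones(len(x))]) @ w > 0.5

for name, cols in [("both features ", [0, 1]), ("invariant only", [0])]:
    predict = fit(cols)
    print(name,
          f"train acc {(predict(x_tr) == y_tr).mean():.3f}",
          f"shifted acc {(predict(x_te) == y_te).mean():.3f}")
```

On this toy setup, the model using both features scores near-perfectly in training but drops sharply once the spurious correlation is broken, while the invariant-feature model keeps roughly the same accuracy in both environments, illustrating why methods that identify invariant predictors are central to this line of work.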