Distribution Generalization
Distribution generalization in machine learning is the problem of building models that maintain high performance on data that differs significantly from their training distribution. Current research emphasizes techniques such as invariant learning, multicalibration, and ensembling, often applied within transformers, graph neural networks, and other architectures, to improve robustness against various distribution shifts (covariate, label, and concept shift). Addressing this challenge is crucial for deploying reliable machine learning systems in real-world settings, where data distributions are inherently complex and dynamic, with impact on fields such as autonomous driving, medical diagnosis, and scientific discovery.
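As an illustration of the invariant-learning idea, below is a minimal sketch in PyTorch of the IRMv1 penalty (Arjovsky et al., 2019): several training environments with different spurious correlations share one classifier, and a gradient penalty pushes the model toward the feature that is predictive in every environment. The toy environments, hyperparameters, and penalty weight are illustrative assumptions, not drawn from any specific paper.

import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    # IRMv1: squared gradient of the loss w.r.t. a dummy classifier scale of 1.0.
    # A nonzero gradient means rescaling the classifier would help in this
    # environment, i.e., the representation is not yet invariant.
    scale = torch.tensor(1.0, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return grad.pow(2)

def make_env(n, spurious_flip_prob):
    # Hypothetical toy environment: a noisy "causal" feature that is stable
    # everywhere, and a "spurious" feature whose agreement with the label
    # varies across environments.
    y = torch.randint(0, 2, (n, 1)).float()
    causal = y + 0.5 * torch.randn(n, 1)
    flip = (torch.rand(n, 1) < spurious_flip_prob).float()
    spurious = torch.abs(y - flip) + 0.1 * torch.randn(n, 1)
    return torch.cat([causal, spurious], dim=1), y

torch.manual_seed(0)
envs = [make_env(1000, 0.1), make_env(1000, 0.2)]  # two training environments
model = torch.nn.Linear(2, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(500):
    total_loss, total_penalty = 0.0, 0.0
    for x, y in envs:
        logits = model(x)
        total_loss = total_loss + F.binary_cross_entropy_with_logits(logits, y)
        total_penalty = total_penalty + irm_penalty(logits, y)
    opt.zero_grad()
    # The penalty weight (10.0 here) trades off in-distribution fit against
    # invariance across environments; it is an illustrative choice.
    (total_loss + 10.0 * total_penalty).backward()
    opt.step()

With a large penalty weight, the learned weight on the spurious feature shrinks relative to plain empirical risk minimization, which is the behavior invariant learning targets under distribution shift.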