Distribution Generalization
Distribution generalization in machine learning focuses on developing models that maintain high performance when encountering data that differs significantly from the training distribution. Current research emphasizes techniques such as invariant learning, multicalibration, and ensemble methods, often applied within transformers, graph neural networks, and other architectures, to improve robustness against various distribution shifts (covariate, label, and concept shift). Addressing this challenge is crucial for deploying reliable machine learning systems in real-world applications such as autonomous driving, medical diagnosis, and scientific discovery, where data distributions are inherently complex and dynamic.
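As a minimal sketch of the core problem (not taken from any of the listed papers), the example below illustrates covariate shift: the input distribution moves between training and test while the labeling rule stays fixed, and a misspecified linear model that fits the training region well degrades on the shifted region. All distributions, parameters, and the helper function here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(center, n=4000):
    """Draw inputs around `center`; labels follow the same nonlinear rule everywhere."""
    X = rng.normal(loc=[center, 0.0], scale=1.0, size=(n, 2))
    y = (np.sin(X[:, 0]) + 0.1 * X[:, 1] > 0).astype(int)
    return X, y

# Train near x1 ~ 0, where the true decision boundary is locally close to linear.
X_train, y_train = sample(center=0.0)
# Test under covariate shift: same labeling rule, but inputs centered at x1 ~ 4.
X_shift, y_shift = sample(center=4.0)

model = LogisticRegression().fit(X_train, y_train)
print("in-distribution accuracy:", model.score(*sample(center=0.0)))
print("shifted (OOD) accuracy:  ", model.score(X_shift, y_shift))
```

The accuracy gap between the two evaluations is the kind of degradation that the robustness techniques mentioned above aim to reduce.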
Papers
Benchmarking Out-of-Distribution Generalization Capabilities of DNN-based Encoding Models for the Ventral Visual Cortex
Spandan Madan, Will Xiao, Mingran Cao, Hanspeter Pfister, Margaret Livingstone, Gabriel Kreiman
First-Order Manifold Data Augmentation for Regression Learning
Ilya Kaufman, Omri Azencot
How Far Can Transformers Reason? The Globality Barrier and Inductive Scratchpad
Emmanuel Abbe, Samy Bengio, Aryo Lotfi, Colin Sandon, Omid Saremi
Improving Generalization of Neural Vehicle Routing Problem Solvers Through the Lens of Model Architecture
Yubin Xiao, Di Wang, Xuan Wu, Yuesong Wu, Boyang Li, Wei Du, Liupu Wang, You Zhou