Distribution Generalization
Distribution generalization in machine learning focuses on developing models that maintain high performance when encountering data significantly different from their training data. Current research emphasizes techniques such as invariant learning, multicalibration, and ensemble methods, often applied within transformers, graph neural networks, and other architectures, to improve robustness against various distribution shifts (covariate, label, and concept shift). Successfully addressing this challenge is crucial for deploying reliable machine learning systems in real-world applications, where data distributions are inherently complex and dynamic; it directly impacts fields such as autonomous driving, medical diagnosis, and scientific discovery.
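To make "invariant learning" concrete, the sketch below shows an IRMv1-style invariance penalty (in the spirit of Arjovsky et al.'s Invariant Risk Minimization): a classifier is trained across multiple environments, with a gradient penalty that pushes it toward being simultaneously optimal in all of them. This is a minimal illustration under assumed conditions, not the method of any paper listed below; the synthetic `make_env` environments, the two-feature toy data, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    # IRMv1 penalty: gradient of the risk w.r.t. a fixed dummy scale.
    # Its squared norm measures how far the classifier is from being
    # simultaneously optimal across environments.
    scale = torch.ones(1, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, y)
    (grad,) = torch.autograd.grad(loss, scale, create_graph=True)
    return (grad ** 2).sum()

def make_env(n, shift):
    # Toy environment (assumed for illustration): the label depends on
    # x[:, 0], while x[:, 1] is a spurious feature whose correlation
    # with the label varies across environments via `shift`.
    x = torch.randn(n, 2)
    y = (x[:, 0] > 0).float()
    x[:, 1] = y * shift + torch.randn(n) * 0.1
    return x, y.unsqueeze(1)

model = torch.nn.Linear(2, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
envs = [make_env(512, 1.0), make_env(512, -1.0)]  # spurious correlation flips sign

for step in range(500):
    erm_loss, penalty = 0.0, 0.0
    for x, y in envs:
        logits = model(x)
        erm_loss = erm_loss + F.binary_cross_entropy_with_logits(logits, y)
        penalty = penalty + irm_penalty(logits, y)
    lam = 10.0 if step > 100 else 0.0  # warm up before enforcing invariance
    opt.zero_grad()
    (erm_loss + lam * penalty).backward()
    opt.step()
```

Because the spurious feature's correlation reverses between the two environments, a model that relies on it cannot be optimal in both; the penalty therefore steers the classifier toward the stable feature, which is the core intuition behind invariance-based approaches to distribution shift.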
Papers
Domain penalisation for improved Out-of-Distribution Generalisation
Shuvam Jena, Sushmetha Sumathi Rajendran, Karthik Seemakurthy, Sasithradevi A, Vijayalakshmi M, Prakash Poornachari
Invariant Graph Learning Meets Information Bottleneck for Out-of-Distribution Generalization
Wenyu Mao, Jiancan Wu, Haoyang Liu, Yongduo Sui, Xiang Wang
Benchmarking Out-of-Distribution Generalization Capabilities of DNN-based Encoding Models for the Ventral Visual Cortex
Spandan Madan, Will Xiao, Mingran Cao, Hanspeter Pfister, Margaret Livingstone, Gabriel Kreiman
First-Order Manifold Data Augmentation for Regression Learning
Ilya Kaufman, Omri Azencot