Invariant Learning
Invariant learning aims to build machine learning models that are robust to distributional shifts in data, maintaining consistent performance across different environments or datasets. Current research focuses on techniques such as invariant risk minimization (IRM) and information bottleneck methods, often applied within machine learning models (including graph neural networks and decision trees) to extract features that remain stable under environmental changes. The field is crucial for improving the reliability and generalizability of machine learning models in real-world applications where data distributions inherently vary, with impact on areas such as time-series forecasting, image classification, and graph analysis.
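To make the IRM idea concrete, below is a minimal PyTorch sketch of the IRMv1 gradient penalty (the squared gradient of the per-environment risk with respect to a fixed dummy classifier scale), combined with the averaged empirical risk across environments. The `model`, `environments`, and `penalty_weight` names are illustrative assumptions, not taken from any of the papers listed here.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """IRMv1 penalty: squared norm of the gradient of the environment risk
    with respect to a fixed dummy classifier scale w = 1.0."""
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    loss = F.cross_entropy(logits * scale, labels)
    # create_graph=True so the penalty itself is differentiable end to end
    (grad,) = torch.autograd.grad(loss, scale, create_graph=True)
    return grad.pow(2).sum()

def irm_loss(model, environments, penalty_weight: float = 1.0) -> torch.Tensor:
    """Average ERM risk over environments plus the weighted IRM penalty.
    `environments` is a list of (inputs, labels) batches, one per environment."""
    risk = torch.tensor(0.0)
    penalty = torch.tensor(0.0)
    for x, y in environments:
        logits = model(x)
        risk = risk + F.cross_entropy(logits, y)
        penalty = penalty + irm_penalty(logits, y)
    n = len(environments)
    return risk / n + penalty_weight * (penalty / n)
```

A large `penalty_weight` pushes the learned representation toward features whose optimal classifier is the same in every environment, which is the invariance property the methods above target in various ways.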
Papers
Winning Prize Comes from Losing Tickets: Improve Invariant Learning by Exploring Variant Parameters for Out-of-Distribution Generalization
Zhuo Huang, Muyang Li, Li Shen, Jun Yu, Chen Gong, Bo Han, Tongliang Liu
Bayesian Domain Invariant Learning via Posterior Generalization of Parameter Distributions
Shiyu Shen, Bin Pan, Tianyang Shi, Tao Li, Zhenwei Shi