Invariant Learning
Invariant learning aims to build machine learning models that remain robust under distributional shift, maintaining consistent performance across different environments or datasets. Current research focuses on techniques such as invariant risk minimization (IRM) and information bottleneck methods, applied within neural network architectures (including graph neural networks) and other model classes such as decision trees, to extract features that stay invariant across environments. This work is crucial for improving the reliability and generalizability of machine learning models in real-world applications where data distributions inherently vary, with impact on areas such as time-series forecasting, image classification, and graph analysis.
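As a concrete illustration of the IRM idea, below is a minimal PyTorch sketch of the IRMv1 penalty (Arjovsky et al., 2019): the squared gradient norm of each environment's risk with respect to a fixed scalar "dummy" classifier, added to the ordinary risk. The two-environment toy data, the linear model, and the penalty weight `lam` are illustrative assumptions, not taken from any particular paper listed here.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    """IRMv1 penalty: squared gradient norm of the per-environment risk
    with respect to a fixed scalar classifier weight w = 1.0."""
    scale = torch.ones(1, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, y)
    grad, = torch.autograd.grad(loss, scale, create_graph=True)
    return (grad ** 2).sum()

# Toy setup: two environments with the same feature space (illustrative).
torch.manual_seed(0)
envs = [
    (torch.randn(64, 10), torch.randint(0, 2, (64,)).float()),
    (torch.randn(64, 10), torch.randint(0, 2, (64,)).float()),
]
model = torch.nn.Linear(10, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1.0  # penalty weight; in practice often annealed upward

for step in range(100):
    erm_loss, penalty = 0.0, 0.0
    for x, y in envs:
        logits = model(x).squeeze(-1)
        erm_loss = erm_loss + F.binary_cross_entropy_with_logits(logits, y)
        penalty = penalty + irm_penalty(logits, y)
    loss = erm_loss + lam * penalty
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Intuitively, the penalty measures how much each environment would prefer to rescale the classifier on top of the learned representation; driving it to zero pushes the model toward features whose optimal classifier is shared across all environments.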