Importance Weighting
Importance weighting is a statistical technique that corrects for discrepancies between training and test data distributions by reweighting training examples, improving the generalization of machine learning models under distribution shift. Current research applies importance weighting across a range of machine learning tasks, including large language model self-improvement, variational inference in Gaussian processes, and handling distribution shift in both supervised and reinforcement learning settings. The technique is central to mitigating bias, improving fairness, and achieving robust performance in real-world applications where data distributions are non-stationary or differ from those seen during training, and the design of efficient, effective importance weighting methods remains an active area of research with broad implications across machine learning.
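As a minimal sketch (not drawn from the listed papers, and with purely illustrative data and model choices), the example below estimates importance weights w(x) ≈ p_test(x) / p_train(x) with a domain classifier under covariate shift and then passes them as per-example weights when fitting a model.

```python
# Minimal sketch of importance weighting under covariate shift, assuming a
# domain-classifier estimate of w(x) = p_test(x) / p_train(x). All names and
# data are illustrative, not taken from any paper listed below.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy covariate shift: training and test inputs come from shifted Gaussians.
X_train = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))
X_test = rng.normal(loc=0.5, scale=1.0, size=(1000, 2))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(float)

# Domain classifier: label training examples 0 and test examples 1, so that
# P(test | x) / P(train | x) approximates p_test(x) / p_train(x)
# (up to the ratio of sample sizes, which is 1 here).
X_domain = np.vstack([X_train, X_test])
d_domain = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
domain_clf = LogisticRegression().fit(X_domain, d_domain)
p_test_given_x = domain_clf.predict_proba(X_train)[:, 1]
weights = p_test_given_x / np.clip(1.0 - p_test_given_x, 1e-6, None)
weights /= weights.mean()  # normalize so the average weight is 1

# Importance-weighted training: many estimators accept per-example weights.
model = LogisticRegression().fit(X_train, y_train, sample_weight=weights)
```

In practice the weights are often clipped or tempered before training, since a few very large weights can dominate the objective and increase variance.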
Papers
UMIX: Improving Importance Weighting for Subpopulation Shift via Uncertainty-Aware Mixup
Zongbo Han, Zhipeng Liang, Fan Yang, Liu Liu, Lanqing Li, Yatao Bian, Peilin Zhao, Bingzhe Wu, Changqing Zhang, Jianhua Yao
Importance Tempering: Group Robustness for Overparameterized Models
Yiping Lu, Wenlong Ji, Zachary Izzo, Lexing Ying