Generalization Performance
Generalization performance in machine learning refers to a model's ability to predict accurately on unseen data, a crucial property for real-world applications. Current research investigates it through several lenses: mitigating overfitting in self-supervised and federated learning, improving robustness to out-of-distribution data (e.g., via dropout or orthogonal regularization), and making fine-tuning of large pre-trained models more efficient (e.g., via low-rank adaptation). Understanding and improving generalization is vital for building reliable, adaptable AI systems, with impact on fields ranging from image recognition and natural language processing to the control of biological neural networks.
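To make the low-rank adaptation idea concrete, here is a minimal sketch in NumPy. It is illustrative only and not drawn from any of the papers listed below: the symbols W, A, B, r, and alpha follow common LoRA conventions, where a frozen weight matrix W is augmented with a trainable low-rank update (alpha / r) * B @ A.

```python
import numpy as np

# Minimal sketch of low-rank adaptation (LoRA) for parameter-efficient
# fine-tuning. All names here (W, A, B, r, alpha) are illustrative
# assumptions following common LoRA conventions.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 16, 2, 4

W = rng.normal(size=(d_out, d_in))      # frozen pre-trained weights
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                # zero init so the update starts at 0

def adapted_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; only A and B are trained,
    # so the number of trainable parameters is r * (d_in + d_out), far fewer
    # than the d_out * d_in parameters of W itself.
    return (W + (alpha / r) * B @ A) @ x

x = rng.normal(size=(d_in,))
y = adapted_forward(x)
# At initialization B = 0, so the adapted model matches the frozen model.
assert np.allclose(y, W @ x)
```

Because B starts at zero, fine-tuning begins exactly at the pre-trained model and only gradually departs from it, which is one intuition for why such adapters can help preserve generalization.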
Papers
In Search of the Successful Interpolation: On the Role of Sharpness in CLIP Generalization
Alireza Abdollahpoorrostam
Towards Combating Frequency Simplicity-biased Learning for Domain Generalization
Xilin He, Jingyu Hu, Qinliang Lin, Cheng Luo, Weicheng Xie, Siyang Song, Muhammad Haris Khan, Linlin Shen
Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning?
Peizhong Ju, Haibo Yang, Jia Liu, Yingbin Liang, Ness Shroff
On the Limited Generalization Capability of the Implicit Reward Model Induced by Direct Preference Optimization
Yong Lin, Skyler Seto, Maartje ter Hoeve, Katherine Metcalf, Barry-John Theobald, Xuan Wang, Yizhe Zhang, Chen Huang, Tong Zhang