Strong Generalization
Strong generalization, the ability of machine learning models to perform well on unseen data, is a central objective of current research. Active areas of investigation include improving the robustness of self-supervised learning; understanding the optimization dynamics of transformers and other architectures, including CNNs and RNNs; and enhancing generalization through data augmentation, regularization techniques (e.g., logical regularization, consistency regularization), and improved training strategies (e.g., few-shot learning and meta-learning). These advances are crucial for building reliable, adaptable AI systems across diverse applications, from image classification and natural language processing to healthcare and robotics. A minimal sketch of one such technique appears below.
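To make one of the techniques named above concrete, here is a minimal, hypothetical sketch of consistency regularization in PyTorch: the model is penalized for disagreeing between its predictions on an input and a perturbed view of it. The tiny architecture, Gaussian-noise "augmentation", synthetic data, and loss weight are all illustrative assumptions, not taken from any paper listed below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder classifier and synthetic data (illustrative assumptions).
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 32)          # synthetic inputs
y = torch.randint(0, 10, (128,))  # synthetic labels

for step in range(100):
    # Perturbed view of each input; a real pipeline would use a
    # task-appropriate augmentation instead of additive noise.
    x_aug = x + 0.1 * torch.randn_like(x)

    logits = model(x)
    logits_aug = model(x_aug)

    # Supervised loss on clean inputs, plus a consistency term that
    # pulls the prediction on the perturbed view toward the (detached)
    # prediction on the clean view.
    sup_loss = F.cross_entropy(logits, y)
    cons_loss = F.kl_div(
        F.log_softmax(logits_aug, dim=-1),
        F.softmax(logits, dim=-1).detach(),  # stop-gradient on clean view
        reduction="batchmean",
    )
    loss = sup_loss + 1.0 * cons_loss  # consistency weight is tunable

    opt.zero_grad()
    loss.backward()
    opt.step()
```

The stop-gradient on the clean-view predictions is a common design choice: it treats the clean prediction as a fixed target so the consistency term only regularizes behavior on the perturbed view.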
Papers
Improvement and generalization of ABCD method with Bayesian inference
Ezequiel Alvarez, Leandro Da Rold, Manuel Szewc, Alejandro Szynkman, Santiago A. Tanco, Tatiana Tarutina
Generalizing across Temporal Domains with Koopman Operators
Qiuhao Zeng, Wei Wang, Fan Zhou, Gezheng Xu, Ruizhi Pu, Changjian Shui, Christian Gagné, Shichun Yang, Boyu Wang, Charles X. Ling
Assessing Generalization for Subpopulation Representative Modeling via In-Context Learning
Gabriel Simmons, Vladislav Savinov
On the Out-Of-Distribution Generalization of Multimodal Large Language Models
Xingxuan Zhang, Jiansheng Li, Wenjing Chu, Junjia Hai, Renzhe Xu, Yuqing Yang, Shikai Guan, Jiazheng Xu, Peng Cui
How Uniform Random Weights Induce Non-uniform Bias: Typical Interpolating Neural Networks Generalize with Narrow Teachers
Gon Buzaglo, Itamar Harel, Mor Shpigel Nacson, Alon Brutzkus, Nathan Srebro, Daniel Soudry
Neural networks for abstraction and reasoning: Towards broad generalization in machines
Mikel Bober-Irizar, Soumya Banerjee
Learning from Teaching Regularization: Generalizable Correlations Should be Easy to Imitate
Can Jin, Tong Che, Hongwu Peng, Yiyuan Li, Dimitris N. Metaxas, Marco Pavone
Image-Caption Encoding for Improving Zero-Shot Generalization
Eric Yang Yu, Christopher Liao, Sathvik Ravi, Theodoros Tsiligkaridis, Brian Kulis
Adversarial Training on Purification (AToP): Advancing Both Robustness and Generalization
Guang Lin, Chao Li, Jianhai Zhang, Toshihisa Tanaka, Qibin Zhao
On the generalization of learned constraints for ASP solving in temporal domains
Javier Romero, Torsten Schaub, Klaus Strauch
Few and Fewer: Learning Better from Few Examples Using Fewer Base Classes
Raphael Lafargue, Yassir Bendou, Bastien Pasdeloup, Jean-Philippe Diguet, Ian Reid, Vincent Gripon, Jack Valmadre