Strong Generalization
Strong generalization, the ability of machine learning models to perform well on unseen data, is a central objective in current research. Active areas of investigation include improving the robustness of self-supervised learning; understanding the optimization dynamics of transformers and other architectures, including CNNs and RNNs; and enhancing generalization through data augmentation, regularization techniques (e.g., logical regularization, consistency regularization), and improved training strategies such as few-shot learning and meta-learning. These advances are crucial for building reliable and adaptable AI systems across diverse applications, from image classification and natural language processing to healthcare and robotics.
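As a concrete illustration of the data-augmentation techniques mentioned above (and of the mixup family explored by the Region Mixup paper below), here is a minimal, hypothetical sketch of classic mixup: training on convex combinations of example pairs and their label vectors. The function name and list-based representation are illustrative choices, not taken from any of the listed papers.

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Blend two training examples (feature lists) and their label
    vectors with a weight drawn from a Beta(alpha, alpha) distribution.
    Returns the mixed features, mixed labels, and the mixing weight."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

# Example: mix two one-hot-labeled points; the mixed label stays a
# valid probability distribution (entries sum to 1).
x, y, lam = mixup([0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0])
```

Region-based variants replace this global blend with a spatially localized one (mixing patches rather than whole inputs), but the label-mixing principle is the same.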
Papers
Rethinking Conventional Wisdom in Machine Learning: From Generalization to Scaling
Lechao Xiao
Region Mixup
Saptarshi Saha, Utpal Garain
Revisiting Video Quality Assessment from the Perspective of Generalization
Xinli Yue, Jianhui Sun, Liangchao Yao, Fan Xia, Yuetang Deng, Tianyi Wang, Lei Li, Fengyun Rao, Jing Lv, Qian Wang, Lingchen Zhao
STAND: Data-Efficient and Self-Aware Precondition Induction for Interactive Task Learning
Daniel Weitekamp, Kenneth Koedinger
Fast Medical Shape Reconstruction via Meta-learned Implicit Neural Representations
Gaia Romana De Paolis, Dimitrios Lenis, Johannes Novotny, Maria Wimmer, Astrid Berg, Theresa Neubauer, Philip Matthias Winter, David Major, Ariharasudhan Muthusami, Gerald Schröcker, Martin Mienkina, Katja Bühler
A Practical Theory of Generalization in Selectivity Learning
Peizhi Wu, Haoshu Xu, Ryan Marcus, Zachary G. Ives