Strong Generalization
Strong generalization, the ability of machine learning models to perform well on unseen data, is a central objective in current research. Active areas of investigation include improving the robustness of self-supervised learning; understanding the optimization dynamics of transformers and other architectures, including CNNs and RNNs; and enhancing generalization through data augmentation, regularization techniques (e.g., logical regularization, consistency regularization), and improved training strategies (e.g., few-shot learning, meta-learning). These advances are crucial for building reliable, adaptable AI systems across diverse applications, from image classification and natural language processing to healthcare and robotics.
Papers - Page 34
Foundations for Transfer in Reinforcement Learning: A Taxonomy of Knowledge Modalities
Markus Wulfmeier, Arunkumar Byravan, Sarah Bechtle, Karol Hausman, Nicolas Heess
Generalization by Adaptation: Diffusion-Based Domain Extension for Domain-Generalized Semantic Segmentation
Joshua Niemeijer, Manuel Schwonberg, Jan-Aike Termöhlen, Nico M. Schmidt, Tim Fingscheidt
Adaptive Operator Selection Utilising Generalised Experience
Mehmet Emin Aydin, Rafet Durgut, Abdur Rakib
Risk Bounds of Accelerated SGD for Overparameterized Linear Regression
Xuheng Li, Yihe Deng, Jingfeng Wu, Dongruo Zhou, Quanquan Gu
Robustness-Reinforced Knowledge Distillation with Correlation Distance and Network Pruning
Seonghak Kim, Gyeongdo Ham, Yucheol Cho, Daeshik Kim
Algorithmic Fairness Generalization under Covariate and Dependence Shifts Simultaneously
Chen Zhao, Kai Jiang, Xintao Wu, Haoliang Wang, Latifur Khan, Christan Grant, Feng Chen
Do Smaller Language Models Answer Contextualised Questions Through Memorisation Or Generalisation?
Tim Hartill, Joshua Bensemann, Michael Witbrock, Patricia J. Riddle
Enhancing Visual Grounding and Generalization: A Multi-Task Cycle Training Approach for Vision-Language Models
Xiaoyu Yang, Lijian Xu, Hao Sun, Hongsheng Li, Shaoting Zhang
Large Learning Rates Improve Generalization: But How Large Are We Talking About?
Ekaterina Lobacheva, Eduard Pockonechnyy, Maxim Kodryan, Dmitry Vetrov
Generalization and Hallucination of Large Vision-Language Models through a Camouflaged Lens
Lv Tang, Peng-Tao Jiang, Zhihao Shen, Hao Zhang, Jinwei Chen, Bo Li
Generalizable Imitation Learning Through Pre-Trained Representations
Wei-Di Chang, Francois Hogan, Scott Fujimoto, David Meger, Gregory Dudek
Improving Generalization of Drowsiness State Classification by Domain-Specific Normalization
Dong-Young Kim, Dong-Kyun Han, Seo-Hyeon Park, Geun-Deok Jang, Seong-Whan Lee