Strong Generalization
Strong generalization, the ability of machine learning models to perform well on unseen data, is a central objective of current research. Active areas of investigation include improving the robustness of self-supervised learning; understanding the optimization dynamics of transformers and other architectures, including CNNs and RNNs; and enhancing generalization through data augmentation, regularization techniques (e.g., logical regularization, consistency regularization), and improved training strategies (e.g., few-shot learning, meta-learning). These advances are crucial for building reliable, adaptable AI systems across diverse applications, from image classification and natural language processing to healthcare and robotics.
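To make the consistency-regularization idea above concrete, here is a minimal, self-contained sketch: the "model," the Gaussian-noise augmentation, and the squared-difference penalty are all illustrative assumptions for this example, not the method of any paper listed below. The penalty encourages the model to give similar class probabilities for an input and a perturbed copy of it.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(w, x):
    """Toy linear 'model': softmax over logits x @ w."""
    logits = x @ w
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def augment(x, noise_scale=0.1):
    """Hypothetical augmentation: small Gaussian perturbation of the input."""
    return x + noise_scale * rng.normal(size=x.shape)

def consistency_loss(w, x):
    """Mean squared difference between predictions on clean and
    augmented inputs; added to the task loss during training."""
    p_clean = predict(w, x)
    p_aug = predict(w, augment(x))
    return float(np.mean((p_clean - p_aug) ** 2))

w = rng.normal(size=(4, 3))   # 4 input features, 3 classes
x = rng.normal(size=(8, 4))   # batch of 8 examples
loss = consistency_loss(w, x)
print(loss)
```

In practice this term is weighted and summed with a supervised loss; because it needs no labels, it can also be computed on unlabeled data, which is how consistency regularization is typically used in semi-supervised settings.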
Papers
Items or Relations -- what do Artificial Neural Networks learn?
Renate Krause, Stefan Reimann
RanLayNet: A Dataset for Document Layout Detection used for Domain Adaptation and Generalization
Avinash Anand, Raj Jaiswal, Mohit Gupta, Siddhesh S Bangar, Pijush Bhuyan, Naman Lal, Rajeev Singh, Ritika Jha, Rajiv Ratn Shah, Shin'ichi Satoh
Domain Generalization through Meta-Learning: A Survey
Arsham Gholamzadeh Khoee, Yinan Yu, Robert Feldt
Adaptive Affinity-Based Generalization For MRI Imaging Segmentation Across Resource-Limited Settings
Eddardaa B. Loussaief, Mohammed Ayad, Domenec Puig, Hatem A. Rashwan
Adaptive Sampling Policies Imply Biased Beliefs: A Generalization of the Hot Stove Effect
Jerker Denrell
Can Biases in ImageNet Models Explain Generalization?
Paul Gavrikov, Janis Keuper
UniArk: Improving Generalisation and Consistency for Factual Knowledge Extraction through Debiasing
Yijun Yang, Jie He, Pinzhen Chen, Víctor Gutiérrez-Basulto, Jeff Z. Pan
Diverse Perspectives, Divergent Models: Cross-Cultural Evaluation of Depression Detection on Twitter
Nuredin Ali, Charles Chuankai Zhang, Ned Mayo, Stevie Chancellor