Model Generalizability
Model generalizability refers to the ability of a machine learning model to perform well on unseen data or tasks, and improving it is a central goal of research on the robustness and reliability of AI systems. Current efforts focus on enhancing generalizability through techniques such as meta-learning, data augmentation strategies (including adversarial methods), and architectural innovations such as multi-task learning and the incorporation of prior knowledge or numerical priors into model design. A simple illustrative sketch of one of these techniques follows this paragraph. Improved generalizability is crucial for deploying AI models in real-world applications under diverse and unpredictable conditions, with impact in fields ranging from healthcare and robotics to materials science and network security.
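As a minimal, hedged sketch (not drawn from any of the papers listed below), the PyTorch snippet below illustrates one technique named above, adversarial data augmentation: a batch is perturbed in the direction of the loss gradient (FGSM-style) and the model is trained on both the clean and perturbed inputs. The model architecture, epsilon value, and data here are placeholders chosen only for illustration.

# Sketch of adversarial data augmentation for generalizability.
# All names and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

def fgsm_augment(model, x, y, loss_fn, epsilon=0.03):
    """Return a copy of x perturbed in the sign of the input gradient (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step by epsilon along the gradient sign, then detach so the perturbed
    # batch can be used as ordinary training data.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy usage: a small classifier trained on clean plus perturbed batches.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 16)          # stand-in feature batch
y = torch.randint(0, 3, (8,))   # stand-in labels

x_adv = fgsm_augment(model, x, y, loss_fn)
optimizer.zero_grad()           # clears gradients accumulated during fgsm_augment
loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)  # clean + adversarial terms
loss.backward()
optimizer.step()

Training on the adversarially perturbed copies alongside the clean data is one common way to encourage robustness to small input shifts the model has not seen; the papers below explore a broader range of approaches.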
Papers
Efficient and generalizable prediction of molecular alterations in multiple cancer cohorts using H&E whole slide images
Kshitij Ingale, Sun Hae Hong, Qiyuan Hu, Renyu Zhang, Bo Osinski, Mina Khoshdeli, Josh Och, Kunal Nagpal, Martin C. Stumpe, Rohan P. Joshi
Learning to Manipulate Anywhere: A Visual Generalizable Framework For Reinforcement Learning
Zhecheng Yuan, Tianming Wei, Shuiqi Cheng, Gu Zhang, Yuanpei Chen, Huazhe Xu
Gradient-flow adaptive importance sampling for Bayesian leave one out cross-validation with application to sigmoidal classification models
Joshua C Chang, Xiangting Li, Shixin Xu, Hao-Ren Yao, Julia Porcino, Carson Chow
Epistemic Exploration for Generalizable Planning and Learning in Non-Stationary Settings
Rushang Karia, Pulkit Verma, Alberto Speranzon, Siddharth Srivastava