Stronger Generalizability
Stronger generalizability in machine learning is a crucial research area: it aims to improve the ability of models trained on one dataset to perform well on unseen data or tasks. Current efforts focus on developing robust methodologies for model evaluation, exploring architectures such as Graph Neural Networks and transformers, and investigating techniques such as prompt engineering, data augmentation, and ensemble methods to enhance performance across diverse scenarios. This pursuit is vital for building reliable and trustworthy AI systems in domains ranging from healthcare and drug discovery to robotics and environmental monitoring, ultimately increasing the practical impact of machine learning.
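One of the techniques named above, ensembling, can be illustrated with a minimal sketch: bagging fits several copies of a simple model on bootstrap resamples of the training data and averages their predictions, which tends to reduce variance on held-out data. The toy regression task, the through-origin slope estimator, and all variable names here are illustrative assumptions for this sketch, not drawn from any of the papers listed below.

```python
import random
import statistics

random.seed(0)

# Toy regression task (illustrative, not from any listed paper):
# y = 2x + Gaussian noise. Each ensemble member is a least-squares
# slope fit on its own bootstrap resample of the training set.
n = 200
xs = [random.uniform(-1, 1) for _ in range(n)]
ys = [2.0 * x + random.gauss(0, 0.3) for x in xs]

def fit_slope(sx, sy):
    """Least-squares slope of a line through the origin."""
    return sum(a * b for a, b in zip(sx, sy)) / sum(a * a for a in sx)

# Bagging: fit 25 slopes, each on an independent bootstrap resample.
slopes = []
for _ in range(25):
    idx = [random.randrange(n) for _ in range(n)]
    slopes.append(fit_slope([xs[i] for i in idx], [ys[i] for i in idx]))

# The ensemble prediction averages the individual models.
ensemble_slope = statistics.mean(slopes)

# On held-out inputs the averaged slope should sit near the true value 2.0,
# illustrating reduced variance relative to any single noisy fit.
x_test = [random.uniform(-1, 1) for _ in range(100)]
mse = statistics.mean((ensemble_slope * x - 2.0 * x) ** 2 for x in x_test)
print(f"ensemble slope: {ensemble_slope:.3f}  held-out MSE: {mse:.4f}")
```

The same averaging idea carries over to classifiers (majority vote) and to deep networks (checkpoint or seed ensembles), and is one common baseline when evaluating generalization across datasets.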
Papers
RAG-Driver: Generalisable Driving Explanations with Retrieval-Augmented In-Context Learning in Multi-Modal Large Language Model
Jianhao Yuan, Shuyang Sun, Daniel Omeiza, Bo Zhao, Paul Newman, Lars Kunze, Matthew Gadd
Generalizability of Mixture of Domain-Specific Adapters from the Lens of Signed Weight Directions and its Application to Effective Model Pruning
Tuc Nguyen, Thai Le
RIDGE: Reproducibility, Integrity, Dependability, Generalizability, and Efficiency Assessment of Medical Image Segmentation Models
Farhad Maleki, Linda Moy, Reza Forghani, Tapotosh Ghosh, Katie Ovens, Steve Langer, Pouria Rouzrokh, Bardia Khosravi, Ali Ganjizadeh, Daniel Warren, Roxana Daneshjou, Mana Moassefi, Atlas Haddadi Avval, Susan Sotardi, Neil Tenenholtz, Felipe Kitamura, Timothy Kline
Segment Anything Model Can Not Segment Anything: Assessing AI Foundation Model's Generalizability in Permafrost Mapping
Wenwen Li, Chia-Yu Hsu, Sizhe Wang, Yezhou Yang, Hyunho Lee, Anna Liljedahl, Chandi Witharana, Yili Yang, Brendan M. Rogers, Samantha T. Arundel, Matthew B. Jones, Kenton McHenry, Patricia Solis