Zero-Shot Learning
Zero-shot learning (ZSL) enables machine learning models to classify data from categories with no training examples by transferring knowledge from seen categories. Current research focuses on improving ZSL performance across modalities (image, text, audio, and graph data) using large language models (LLMs), vision-language models (VLMs), and graph neural networks (GNNs), often in combination with prompt engineering and contrastive learning. This capability addresses data scarcity in fields such as medical image analysis, natural language processing, and robotics, and developing more efficient and robust ZSL methods remains an active area of research.
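To make the idea concrete, the sketch below shows prompt-based zero-shot image classification with a CLIP-style vision-language model: class names the model was never fine-tuned on are supplied only as text prompts, and the image-text similarity scores serve as classification logits. It is an illustrative minimal example, not the method of any paper listed below; it assumes the Hugging Face transformers and Pillow packages, the public "openai/clip-vit-base-patch32" checkpoint, and a local file example.jpg, and the candidate labels are hypothetical.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a pretrained vision-language model and its preprocessor
# (assumes network access to download "openai/clip-vit-base-patch32").
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Unseen candidate classes expressed as natural-language prompts
# (simple prompt engineering; labels here are placeholders).
candidate_labels = ["axolotl", "okapi", "pangolin"]
prompts = [f"a photo of a {label}" for label in candidate_labels]

image = Image.open("example.jpg")  # any local image file

# The model embeds the image and each prompt in a shared space;
# per-prompt similarity scores act as zero-shot classification logits.
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)

for label, p in zip(candidate_labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

The same pattern extends to other modalities by swapping the encoder pair, e.g. a text encoder plus label-description prompts for zero-shot text classification.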
Papers
Zero-Shot AutoML with Pretrained Models
Ekrem Öztürk, Fabio Ferreira, Hadi S. Jomaa, Lars Schmidt-Thieme, Josif Grabocka, Frank Hutter
Self-Generated In-Context Learning: Leveraging Auto-regressive Language Models as a Demonstration Generator
Hyuhng Joon Kim, Hyunsoo Cho, Junyeob Kim, Taeuk Kim, Kang Min Yoo, Sang-goo Lee
Tight Lower Bounds on Worst-Case Guarantees for Zero-Shot Learning with Attributes
Alessio Mazzetto, Cristina Menghini, Andrew Yuan, Eli Upfal, Stephen H. Bach
Self-Guided Noise-Free Data Generation for Efficient Zero-Shot Learning
Jiahui Gao, Renjie Pi, Yong Lin, Hang Xu, Jiacheng Ye, Zhiyong Wu, Weizhong Zhang, Xiaodan Liang, Zhenguo Li, Lingpeng Kong