Zero Shot Learning
Zero-shot learning (ZSL) aims to enable machine learning models to classify data from unseen categories without any training examples for those categories, by transferring knowledge from seen categories. Current research focuses on improving ZSL performance across modalities (image, text, audio, graph data) using large language models (LLMs), vision-language models (VLMs), and graph neural networks (GNNs), often incorporating techniques such as prompt engineering and contrastive learning. This capability is significant for addressing data scarcity in fields including medical image analysis, natural language processing, and robotics, enabling more efficient and adaptable AI systems; developing more efficient and robust ZSL methods remains an active area of research.
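The knowledge transfer described above is often realized by describing every class, seen or unseen, with a shared semantic representation (e.g. attribute vectors or text embeddings) and assigning a test sample to the class whose representation it matches best. The sketch below illustrates this idea with invented attribute vectors and a hypothetical `zero_shot_classify` helper; it is a minimal conceptual example, not any specific method from the papers listed.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors in the shared semantic space.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def zero_shot_classify(embedding, class_attributes):
    """Return the label whose attribute vector best matches the embedding."""
    scores = {label: cosine(embedding, attrs)
              for label, attrs in class_attributes.items()}
    return max(scores, key=scores.get)

# Invented attribute vectors: [has_stripes, has_mane, is_domestic].
# "zebra" has no training images; it is described only by its attributes.
class_attributes = {
    "zebra": np.array([1.0, 1.0, 0.0]),   # unseen category
    "horse": np.array([0.0, 1.0, 1.0]),   # seen category
    "tiger": np.array([1.0, 0.0, 0.0]),   # seen category
}

# Suppose a model trained only on seen classes predicts these attributes
# for a zebra photo; the nearest class prototype is still "zebra".
predicted_attributes = np.array([0.9, 0.8, 0.1])
print(zero_shot_classify(predicted_attributes, class_attributes))  # zebra
```

VLM-based approaches such as CLIP follow the same pattern, with text embeddings of class names or prompts playing the role of the attribute vectors.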
Papers
OverPrompt: Enhancing ChatGPT through Efficient In-Context Learning
Jiazheng Li, Runcong Zhao, Yongxin Yang, Yulan He, Lin Gui
Universal Self-Adaptive Prompting
Xingchen Wan, Ruoxi Sun, Hootan Nakhost, Hanjun Dai, Julian Martin Eisenschlos, Sercan O. Arik, Tomas Pfister
EXnet: Efficient In-context Learning for Data-less Text classification
Debaditya Shome, Kuldeep Yadav