Zero-Shot Learning
Zero-shot learning (ZSL) aims to enable machine learning models to classify data from unseen categories without requiring any training examples for those categories, by leveraging knowledge transferred from seen categories. Current research focuses on improving ZSL performance across modalities (image, text, audio, graph data) using large language models (LLMs), vision-language models (VLMs), and graph neural networks (GNNs), often in combination with techniques such as prompt engineering and contrastive learning. This capability is significant for addressing data scarcity in fields such as medical image analysis, natural language processing, and robotics, and the development of more efficient and robust ZSL methods remains a key area of ongoing research.
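To make the prompt-based approach concrete, below is a minimal sketch of zero-shot image classification with a vision-language model, using the Hugging Face CLIP implementation; the checkpoint, prompt template, candidate labels, and image path are illustrative assumptions rather than details taken from the papers listed below.

```python
# Minimal sketch: prompt-based zero-shot image classification with a CLIP-style VLM.
# Checkpoint, prompt template, label set, and image path are illustrative choices.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate classes are described only by text prompts: no training examples
# for these categories are required at classification time.
candidate_labels = ["zebra", "okapi", "horse"]
prompts = [f"a photo of a {label}" for label in candidate_labels]

image = Image.open("animal.jpg")  # hypothetical input image
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity scores, normalized into a distribution over the labels.
probs = outputs.logits_per_image.softmax(dim=-1)[0]
for label, p in zip(candidate_labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```

The transfer comes from the shared image-text embedding space learned on seen data; swapping in a different label set or prompt template requires no retraining, which is why prompt engineering features prominently in current ZSL work.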
Papers
Compositional Zero-shot Learning via Progressive Language-based Observations
Lin Li, Guikun Chen, Jun Xiao, Long Chen
HOMOE: A Memory-Based and Composition-Aware Framework for Zero-Shot Learning with Hopfield Network and Soft Mixture of Experts
Do Huu Dat, Po Yuan Mao, Tien Hoang Nguyen, Wray Buntine, Mohammed Bennamoun
Survival of the Most Influential Prompts: Efficient Black-Box Prompt Search via Clustering and Pruning
Han Zhou, Xingchen Wan, Ivan Vulić, Anna Korhonen
MedAI Dialog Corpus (MEDIC): Zero-Shot Classification of Doctor and AI Responses in Health Consultations
Olumide E. Ojo, Olaronke O. Adebanji, Alexander Gelbukh, Hiram Calvo, Anna Feldman