Zero-Shot Learning
Zero-shot learning (ZSL) aims to enable machine learning models to classify data from unseen categories without any training examples for those categories, transferring knowledge from seen categories, typically through a shared semantic space such as attribute vectors or text embeddings. Current research focuses on improving ZSL performance across modalities (image, text, audio, graph data) using large language models (LLMs), vision-language models (VLMs), and graph neural networks (GNNs), often incorporating techniques like prompt engineering and contrastive learning. This capability is significant for fields where labelled data is scarce, including medical image analysis, natural language processing, and robotics, enabling more efficient and adaptable AI systems. Making ZSL methods more efficient and robust remains a key area of ongoing research.
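A common way to realise this seen-to-unseen transfer can be sketched as follows: learn a map from input features to a class-attribute space using seen classes only, then classify any input (including one from an unseen class) by its nearest class description. The snippet below is a minimal illustration with toy, synthetic data; the class names, attribute vectors, and helper functions are all hypothetical and not drawn from any of the listed papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy attribute vectors describing each class, e.g. [four_legs, stripes, mane].
# "zebra" is the unseen class: it has a description but no training examples.
attributes = {
    "horse": np.array([1.0, 0.0, 1.0]),
    "tiger": np.array([1.0, 1.0, 0.0]),
    "zebra": np.array([1.0, 1.0, 1.0]),  # unseen at training time
}
seen = ["horse", "tiger"]

# Synthetic "image features": a fixed linear embedding of the class
# attributes into an 8-d feature space, plus noise.
W, _ = np.linalg.qr(rng.normal(size=(8, 3)))  # orthonormal columns

def features(cls, n):
    return attributes[cls] @ W.T + 0.05 * rng.normal(size=(n, 8))

X = np.vstack([features(c, 50) for c in seen])                  # seen features
S = np.vstack([np.tile(attributes[c], (50, 1)) for c in seen])  # their attributes

# Learn a linear map from feature space to attribute space
# by least squares on the SEEN classes only.
M, *_ = np.linalg.lstsq(X, S, rcond=None)

def predict(x):
    """Classify by cosine similarity to every class's attribute vector."""
    s_hat = x @ M
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(attributes, key=lambda c: cos(s_hat, attributes[c]))

# The model has never seen a zebra, yet its attribute description
# lets the learned map place it correctly.
print(predict(features("zebra", 1)[0]))
```

The same idea underlies VLM-based zero-shot classification: there, the attribute vectors are replaced by text embeddings of class prompts and the feature map by a pretrained image encoder, but inference is still nearest-neighbour search in a shared semantic space.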
Papers
Zero-Shot Learning and Key Points Are All You Need for Automated Fact-Checking
Mohammad Ghiasvand Mohammadkhani, Ali Ghiasvand Mohammadkhani, Hamid Beigy
Navigating Data Scarcity using Foundation Models: A Benchmark of Few-Shot and Zero-Shot Learning Approaches in Medical Imaging
Stefano Woerner, Christian F. Baumgartner