Zero-Shot
Zero-shot learning aims to enable models to perform tasks on unseen data without any task-specific training, leveraging pre-trained knowledge to generalize to new situations. Current research focuses on improving zero-shot capabilities across diverse modalities (vision, language, audio) using large language models (LLMs), vision-language models (VLMs), and diffusion models, often incorporating techniques like chain-of-thought prompting, knowledge retrieval, and prompt engineering to enhance performance and interpretability. This field is significant because it promises more efficient and adaptable AI systems, impacting various applications from image editing and medical diagnosis to robotics and natural language processing.
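To make the prompting side concrete, here is a minimal sketch of what zero-shot prompting looks like in practice: the model is given only a task description and candidate labels, with no labeled examples. The `build_zero_shot_prompt` helper, the label set, and the sample input are illustrative assumptions, not drawn from any paper listed below.

```python
def build_zero_shot_prompt(task: str, labels: list[str], text: str) -> str:
    """Compose a zero-shot classification prompt.

    Unlike few-shot prompting, no labeled demonstrations are included:
    the model must rely entirely on its pre-trained knowledge.
    """
    label_list = ", ".join(labels)
    return (
        f"Task: {task}\n"
        f"Possible labels: {label_list}\n"
        f"Input: {text}\n"
        "Answer with exactly one label."
    )


prompt = build_zero_shot_prompt(
    task="Classify the sentiment of the input.",
    labels=["positive", "negative", "neutral"],
    text="The battery life on this laptop is fantastic.",
)
print(prompt)
```

The resulting string would be sent to an LLM; adding worked examples to the same template is what turns this into few-shot prompting, and prepending "Let's think step by step" is the chain-of-thought variant mentioned above.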
Papers
Large Language Model-Based Evolutionary Optimizer: Reasoning with elitism
Shuvayan Brahmachary, Subodh M. Joshi, Aniruddha Panda, Kaushik Koneripalli, Arun Kumar Sagotra, Harshil Patel, Ankush Sharma, Ameya D. Jagtap, Kaushic Kalyanaraman
Zero-shot Generalizable Incremental Learning for Vision-Language Object Detection
Jieren Deng, Haojian Zhang, Kun Ding, Jianhua Hu, Xingxuan Zhang, Yunkuan Wang
Zero-shot cross-lingual transfer in instruction tuning of large language models
Nadezhda Chirkova, Vassilina Nikoulina
Zero-Shot Pediatric Tuberculosis Detection in Chest X-Rays using Self-Supervised Learning
Daniel Capellán-Martín, Abhijeet Parida, Juan J. Gómez-Valverde, Ramon Sanchez-Jacob, Pooneh Roshanitabrizi, Marius G. Linguraru, María J. Ledesma-Carbayo, Syed M. Anwar
Leveraging Large Language Models for Concept Graph Recovery and Question Answering in NLP Education
Rui Yang, Boming Yang, Sixun Ouyang, Tianwei She, Aosong Feng, Yuang Jiang, Freddy Lecue, Jinghui Lu, Irene Li
Zero-shot generalization across architectures for visual classification
Evan Gerritz, Luciano Dyballa, Steven W. Zucker
Zero-BEV: Zero-shot Projection of Any First-Person Modality to BEV Maps
Gianluca Monaci, Leonid Antsfeld, Boris Chidlovskii, Christian Wolf
The Lay Person's Guide to Biomedicine: Orchestrating Large Language Models
Zheheng Luo, Qianqian Xie, Sophia Ananiadou
Contrastive Prompts Improve Disentanglement in Text-to-Image Diffusion Models
Chen Wu, Fernando De la Torre