Zero Shot
Zero-shot learning aims to let models perform new tasks or recognize unseen categories without any task-specific training, relying on knowledge acquired during pre-training to generalize to novel situations. Current research focuses on improving zero-shot capabilities across modalities (vision, language, audio) with large language models (LLMs), vision-language models (VLMs), and diffusion models, often using techniques such as chain-of-thought prompting, knowledge retrieval, and prompt engineering to improve performance and interpretability. The field matters because it promises more efficient and adaptable AI systems, with applications ranging from image editing and medical diagnosis to robotics and natural language processing.
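As a concrete illustration of zero-shot inference via prompt engineering, the minimal sketch below uses a pre-trained CLIP vision-language model through the Hugging Face transformers library to classify an image against natural-language prompts, with no task-specific training. The model checkpoint, candidate prompts, and image path are illustrative assumptions and are not taken from any of the papers listed below.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a pre-trained vision-language model (no task-specific fine-tuning).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

# The "classifier" is defined purely by natural-language prompts.
candidate_labels = ["a photo of a cat", "a photo of a dog", "a photo of a bird"]
image = Image.open("example.jpg")  # hypothetical image path

inputs = processor(text=candidate_labels, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity scores, normalized into a distribution over the prompts.
probs = outputs.logits_per_image.softmax(dim=-1)[0]
for label, p in zip(candidate_labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```

Changing the candidate prompts redefines the task at inference time, which is what makes this setup "zero-shot": the set of classes is supplied as text rather than learned from labeled examples.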
Papers
AutoAD-Zero: A Training-Free Framework for Zero-Shot Audio Description
Junyu Xie, Tengda Han, Max Bain, Arsha Nagrani, Gül Varol, Weidi Xie, Andrew Zisserman
Test-Time Low Rank Adaptation via Confidence Maximization for Zero-Shot Generalization of Vision-Language Models
Raza Imam, Hanan Gani, Muhammad Huzaifa, Karthik Nandakumar
Stretching Each Dollar: Diffusion Training from Scratch on a Micro-Budget
Vikash Sehwag, Xianghao Kong, Jingtao Li, Michael Spranger, Lingjuan Lyu
Robust Calibration of Large Vision-Language Adapters
Balamurali Murugesan, Julio Silva-Rodriguez, Ismail Ben Ayed, Jose Dolz
Open-World Visual Reasoning by a Neuro-Symbolic Program of Zero-Shot Symbols
Gertjan Burghouts, Fieke Hillerström, Erwin Walraven, Michael van Bekkum, Frank Ruis, Joris Sijs, Jelle van Mil, Judith Dijk
CoAPT: Context Attribute words for Prompt Tuning
Gun Lee, Subin An, Sungyong Baik, Soochahn Lee
MEDIC: Zero-shot Music Editing with Disentangled Inversion Control
Huadai Liu, Jialei Wang, Xiangtai Li, Rongjie Huang, Yang Liu, Jiayang Xu, Zhou Zhao
Zero-shot Cross-Lingual Transfer for Synthetic Data Generation in Grammatical Error Detection
Gaetan Lopez Latouche, Marc-André Carbonneau, Ben Swanson
Mask-guided cross-image attention for zero-shot in-silico histopathologic image generation with a diffusion model
Dominik Winter, Nicolas Triltsch, Marco Rosati, Anatoliy Shumilov, Ziya Kokaragac, Yuri Popov, Thomas Padel, Laura Sebastian Monasor, Ross Hill, Markus Schick, Nicolas Brieu
Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems
Yaşar Utku Alçalar, Mehmet Akçakaya