Zero-Shot
Zero-shot learning aims to enable models to perform tasks they were never explicitly trained on, leveraging knowledge acquired during pre-training to generalize to unseen categories and situations. Current research focuses on improving zero-shot capabilities across modalities (vision, language, audio) using large language models (LLMs), vision-language models (VLMs), and diffusion models, often incorporating techniques such as chain-of-thought prompting, knowledge retrieval, and prompt engineering to improve both performance and interpretability. The field matters because it promises more efficient and adaptable AI systems, with applications ranging from image editing and medical diagnosis to robotics and natural language processing.
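A common instantiation of this idea is zero-shot image classification with a CLIP-style vision-language model, where candidate classes are written as text prompts instead of trained classifier heads. The minimal sketch below uses the Hugging Face transformers library; the checkpoint name, image path, and label prompts are illustrative assumptions, not taken from any of the papers listed here.

    # Minimal sketch: zero-shot image classification with a CLIP-style VLM.
    # The checkpoint, the local image "example.jpg", and the candidate label
    # prompts are all illustrative assumptions.
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Candidate classes are plain natural-language prompts; no task-specific
    # training is performed on them.
    labels = ["a photo of a cat", "a photo of a dog", "a photo of a bird"]
    image = Image.open("example.jpg")

    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # image-text similarity scores

    probs = logits.softmax(dim=-1).squeeze(0)
    for label, p in zip(labels, probs.tolist()):
        print(f"{label}: {p:.3f}")

Because the label set is just a list of strings, new classes can be added at inference time by editing the prompts, which is the sense in which the model classifies "zero-shot".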
Papers
GraphGPT: Graph Instruction Tuning for Large Language Models
Jiabin Tang, Yuhao Yang, Wei Wei, Lei Shi, Lixin Su, Suqi Cheng, Dawei Yin, Chao Huang
MedAI Dialog Corpus (MEDIC): Zero-Shot Classification of Doctor and AI Responses in Health Consultations
Olumide E. Ojo, Olaronke O. Adebanji, Alexander Gelbukh, Hiram Calvo, Anna Feldman
Prompting Scientific Names for Zero-Shot Species Recognition
Shubham Parashar, Zhiqiu Lin, Yanan Li, Shu Kong
Estimating Uncertainty in Multimodal Foundation Models using Public Internet Data
Shiladitya Dutta, Hongbo Wei, Lars van der Laan, Ahmed M. Alaa
Zero-Shot Object Goal Visual Navigation With Class-Independent Relationship Network
Xinting Li, Shiguang Zhang, Yue Lu, Kerry Dang, Lingyan Ran
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
PaintHuman: Towards High-fidelity Text-to-3D Human Texturing via Denoised Score Distillation
Jianhui Yu, Hao Zhu, Liming Jiang, Chen Change Loy, Weidong Cai, Wayne Wu