Training-Free
Training-free methods are a burgeoning area of research that leverages pre-trained large language models (LLMs) and other foundation models for new tasks without any further training. Current work adapts these models to diverse applications, including content moderation, spelling correction, question answering, image captioning, and even video generation, often through techniques such as attention re-weighting, prompt engineering, and model merging. Because no gradient updates are required, this approach substantially reduces computational cost and speeds deployment, enabling efficient and adaptable AI solutions across diverse hardware and resource constraints.
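Of the techniques mentioned above, model merging is the simplest to illustrate in code. Below is a minimal sketch of training-free merging by uniform weight averaging (a simple "model soup"), assuming two checkpoints of the same architecture; the toy model and function names are illustrative and not taken from any of the papers listed here.

```python
# Minimal sketch: training-free model merging via uniform weight averaging.
# Assumes all models share the same architecture and state-dict keys.
import torch
import torch.nn as nn

def merge_state_dicts(state_dicts):
    """Average the parameters of several same-architecture models."""
    merged = {}
    for key in state_dicts[0]:
        merged[key] = torch.stack(
            [sd[key].float() for sd in state_dicts]
        ).mean(dim=0)
    return merged

# Two toy models standing in for independently fine-tuned checkpoints.
def make_model():
    return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

model_a, model_b = make_model(), make_model()

merged_model = make_model()
merged_model.load_state_dict(
    merge_state_dicts([model_a.state_dict(), model_b.state_dict()])
)

# The merged model is usable immediately -- no gradient updates required.
x = torch.randn(4, 8)
print(merged_model(x).shape)  # torch.Size([4, 2])
```

In practice, weighted rather than uniform averages are common, and the same no-training principle carries over to the other techniques named above, such as re-weighting attention scores at inference time.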
Papers
Token Prepending: A Training-Free Approach for Eliciting Better Sentence Embeddings from LLMs
Yuchen Fu, Zifeng Cheng, Zhiwei Jiang, Zhonghui Wang, Yafeng Yin, Zhengliang Li, Qing Gu
Text and Image Are Mutually Beneficial: Enhancing Training-Free Few-Shot Classification with CLIP
Yayuan Li, Jintao Guo, Lei Qi, Wenbin Li, Yinghuan Shi
TANGO: Training-free Embodied AI Agents for Open-world Tasks
Filippo Ziliotto, Tommaso Campari, Luciano Serafini, Lamberto Ballan
RMD: A Simple Baseline for More General Human Motion Generation via Training-free Retrieval-Augmented Motion Diffuse
Zhouyingcheng Liao, Mingyuan Zhang, Wenjia Wang, Lei Yang, Taku Komura
Rethinking Token Reduction in MLLMs: Towards a Unified Paradigm for Training-Free Acceleration
Yuhang Han, Xuyang Liu, Pengxiang Ding, Donglin Wang, Honggang Chen, Qingsen Yan, Siteng Huang
LampMark: Proactive Deepfake Detection via Training-Free Landmark Perceptual Watermarks
Tianyi Wang, Mengxiao Huang, Harry Cheng, Xiao Zhang, Zhiqi Shen