Training-Free
Training-free methods are a burgeoning area of research that leverages pre-trained large language models (LLMs) and other foundation models for new tasks without any further training. Current work adapts these models to diverse applications, including content moderation, spelling correction, question answering, image captioning, and even video generation, often through techniques such as attention re-weighting, prompt engineering, and model merging. Because no additional training is required, this approach reduces computational cost and speeds up deployment, enabling efficient and adaptable AI solutions across diverse hardware and resource constraints.
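Of the techniques named above, model merging is perhaps the simplest to illustrate: two fine-tuned checkpoints of the same architecture can be combined by interpolating their weights, with no gradient updates at all. Below is a minimal PyTorch sketch of that idea; the function name `merge_state_dicts` and the `alpha` parameter are illustrative choices, not drawn from any of the papers listed here.

```python
import torch

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Linearly interpolate two compatible model state dicts.

    A minimal, hypothetical illustration of training-free model
    merging: no gradients are computed, only parameter averaging.
    Assumes both models share an identical architecture, so their
    state dicts have the same keys and tensor shapes.
    """
    merged = {}
    for name, param_a in sd_a.items():
        param_b = sd_b[name]
        # Convex combination of the two parameter tensors.
        merged[name] = alpha * param_a + (1 - alpha) * param_b
    return merged

# Hypothetical usage with two fine-tuned variants of the same base model:
# merged_sd = merge_state_dicts(model_a.state_dict(), model_b.state_dict())
# base_model.load_state_dict(merged_sd)
```

The entire "adaptation" here is a single weighted average, which is what makes such methods cheap to deploy: the cost is one pass over the parameters rather than any optimization loop.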
Papers
Select2Plan: Training-Free ICL-Based Planning through VQA and Memory Retrieval
Davide Buoso, Luke Robinson, Giuseppe Averta, Philip Torr, Tim Franzmeyer, Daniele De Martini
The N-Grammys: Accelerating Autoregressive Inference with Learning-Free Batched Speculation
Lawrence Stewart (SIERRA), Matthew Trager, Sujan Kumar Gonugondla, Stefano Soatto (UCLA-CS)
Story-Adapter: A Training-free Iterative Framework for Long Story Visualization
Jiawei Mao, Xiaoke Huang, Yunfei Xie, Yuanqi Chang, Mude Hui, Bingjie Xu, Yuyin Zhou
BroadWay: Boost Your Text-to-Video Generation Model in a Training-free Way
Jiazi Bu, Pengyang Ling, Pan Zhang, Tong Wu, Xiaoyi Dong, Yuhang Zang, Yuhang Cao, Dahua Lin, Jiaqi Wang
PortLLM: Personalizing Evolving Large Language Models with Training-Free and Portable Model Patches
Rana Muhammad Shahroz Khan, Pingzhi Li, Sukwon Yun, Zhenyu Wang, Shahriar Nirjon, Chau-Wai Wong, Tianlong Chen
QAEncoder: Towards Aligned Representation Learning in Question Answering Systems
Zhengren Wang, Qinhan Yu, Shida Wei, Zhiyu Li, Feiyu Xiong, Xiaoxing Wang, Simin Niu, Hao Liang, Wentao Zhang
TROPE: TRaining-Free Object-Part Enhancement for Seamlessly Improving Fine-Grained Zero-Shot Image Captioning
Joshua Feinglass, Yezhou Yang