Training-Free
Training-free methods are a burgeoning area of research that leverages pre-trained large language models (LLMs) and other foundation models for new tasks without any further training or fine-tuning. Current work adapts these models to applications such as content moderation, spelling correction, question answering, image captioning, and even video generation, often through techniques like attention re-weighting, prompt engineering, and model merging. Because no gradient updates are required, this approach reduces computational cost and speeds up deployment, making efficient, adaptable AI feasible under a wide range of hardware and resource constraints.
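To make one of the named techniques concrete, below is a minimal sketch of training-free model merging via uniform weight averaging of two fine-tuned checkpoints that share an architecture. The model identifiers, the helper function, and the choice of uniform weights are illustrative assumptions, not taken from the papers listed on this page.

```python
# Minimal sketch: training-free model merging by averaging parameters
# of two identically-shaped checkpoints (no gradient updates involved).
import torch
from transformers import AutoModelForCausalLM

def average_state_dicts(state_dicts, weights=None):
    """Average parameter tensors across identically-shaped state dicts."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return merged

# Two fine-tuned variants of the same base architecture (hypothetical IDs).
model_a = AutoModelForCausalLM.from_pretrained("org/model-finetune-a")
model_b = AutoModelForCausalLM.from_pretrained("org/model-finetune-b")

merged_weights = average_state_dicts([model_a.state_dict(), model_b.state_dict()])
model_a.load_state_dict(merged_weights)   # reuse model_a to hold the merged weights
model_a.save_pretrained("merged-model")   # deployable without any training step
```

Uniform averaging is only one merging strategy; weighted or task-vector variants follow the same pattern of combining existing checkpoints without touching a training loop.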
Papers
Video Diffusion Models are Training-free Motion Interpreter and Controller
Zeqi Xiao, Yifan Zhou, Shuai Yang, Xingang Pan
Unchosen Experts Can Contribute Too: Unleashing MoE Models' Power by Self-Contrast
Chufan Shi, Cheng Yang, Xinyu Zhu, Jiahao Wang, Taiqiang Wu, Siheng Li, Deng Cai, Yujiu Yang, Yu Meng
DiM: Diffusion Mamba for Efficient High-Resolution Image Synthesis
Yao Teng, Yue Wu, Han Shi, Xuefei Ning, Guohao Dai, Yu Wang, Zhenguo Li, Xihui Liu