Training-Free
Training-free methods are a burgeoning line of research that leverages pre-trained large language models (LLMs) and other foundation models for new tasks without any additional training. Current work adapts these models to diverse applications, including content moderation, spelling correction, question answering, image captioning, and even video generation, typically through techniques such as attention re-weighting, prompt engineering, and model merging. Because no gradient updates are required, this approach substantially reduces computational cost and speeds deployment, enabling efficient, adaptable AI systems under diverse hardware and resource constraints.
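As a concrete illustration of one such technique, the sketch below shows training-free model merging via uniform parameter averaging (a "model soup"). It assumes two fine-tuned checkpoints that share the same architecture; the checkpoint paths and the `merge_state_dicts` helper are hypothetical placeholders, not an API from any of the papers listed here.

```python
# Minimal sketch of training-free model merging: uniformly average the
# parameters of two compatible checkpoints, with no gradient updates.
import torch


def merge_state_dicts(state_a, state_b, alpha=0.5):
    """Linearly interpolate two state dicts with matching keys/shapes."""
    merged = {}
    for name, param_a in state_a.items():
        param_b = state_b[name]
        if param_a.dtype.is_floating_point:
            # Weighted average of floating-point weights and biases.
            merged[name] = alpha * param_a + (1.0 - alpha) * param_b
        else:
            # Non-float buffers (e.g., integer position ids) can't be
            # interpolated; copy them from one of the models.
            merged[name] = param_a.clone()
    return merged


if __name__ == "__main__":
    # Hypothetical checkpoint paths; replace with real fine-tuned weights
    # of two models that share an identical architecture.
    state_a = torch.load("model_a.pt", map_location="cpu")
    state_b = torch.load("model_b.pt", map_location="cpu")
    torch.save(merge_state_dicts(state_a, state_b, alpha=0.5), "merged.pt")
```

The appeal of this kind of merging is exactly what the paragraph above describes: the merged model is produced by a single pass over the weights, with no training data, optimizer, or GPU time required.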
Papers
DDSB: An Unsupervised and Training-free Method for Phase Detection in Echocardiography
Zhenyu Bu, Yang Liu, Jiayu Huo, Jingjing Peng, Kaini Wang, Guangquan Zhou, Rachel Sparks, Prokar Dasgupta, Alejandro Granados, Sebastien Ourselin
Understanding and Improving Training-free Loss-based Diffusion Guidance
Yifei Shen, Xinyang Jiang, Yezhen Wang, Yifan Yang, Dongqi Han, Dongsheng Li