Training-Free
Training-free methods are a burgeoning area of research that aims to leverage pre-trained large language models (LLMs) and other foundation models for new tasks without any further training. Current research focuses on adapting these models to diverse applications, including content moderation, spelling correction, question answering, image captioning, and even video generation, often through techniques such as attention re-weighting, prompt engineering, and model merging. Because no fine-tuning is required, this approach offers significant advantages in computational cost and deployment speed, enabling efficient and adaptable AI solutions under a wide range of hardware and resource constraints.
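As a concrete illustration of the model-merging technique mentioned above, here is a minimal sketch that combines two fine-tuned checkpoints of the same architecture by uniform weight averaging, with no gradient updates involved. The checkpoint paths and the `merge_state_dicts` helper are hypothetical placeholders for illustration, not drawn from any of the papers listed below.

```python
# Minimal sketch: training-free model merging by weight averaging
# ("model soup" style). The merged model is built purely from
# existing checkpoints; no training or fine-tuning takes place.
import torch

def merge_state_dicts(state_dicts, weights=None):
    """Average several compatible state dicts into one, optionally weighted."""
    if weights is None:
        # Default to a uniform average over all checkpoints.
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        # Weighted sum of the corresponding tensor from each checkpoint.
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return merged

# Hypothetical checkpoints of the same architecture, fine-tuned on
# different tasks; the file paths are placeholders.
sd_a = torch.load("checkpoint_task_a.pt")
sd_b = torch.load("checkpoint_task_b.pt")

torch.save(merge_state_dicts([sd_a, sd_b]), "merged_model.pt")
```

Uniform averaging is the simplest merging rule; non-uniform or per-layer weightings are common refinements, but the defining property is the same: the resulting model requires no additional training.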
Papers
Context-Aware Replanning with Pre-explored Semantic Map for Object Navigation
Hung-Ting Su, Ching-Yuan Chen, Po-Chen Ko, Jia-Fong Yeh, Min Sun, Winston H. Hsu
Training-Free Point Cloud Recognition Based on Geometric and Semantic Information Fusion
Yan Chen, Di Huang, Zhichao Liao, Xi Cheng, Xinghui Li, Lone Zeng
Unleashing the Temporal-Spatial Reasoning Capacity of GPT for Training-Free Audio and Language Referenced Video Object Segmentation
Shaofei Huang, Rui Ling, Hongyu Li, Tianrui Hui, Zongheng Tang, Xiaoming Wei, Jizhong Han, Si Liu
Divide, Conquer and Combine: A Training-Free Framework for High-Resolution Image Perception in Multimodal Large Language Models
Wenbin Wang, Liang Ding, Minyan Zeng, Xiabin Zhou, Li Shen, Yong Luo, Dacheng Tao
Grasping by Hanging: a Learning-Free Grasping Detection Method for Previously Unseen Objects
Wanze Li, Wan Su, Gregory S. Chirikjian
Hybrid SD: Edge-Cloud Collaborative Inference for Stable Diffusion Models
Chenqian Yan, Songwei Liu, Hongjian Liu, Xurui Peng, Xiaojian Wang, Fangmin Chen, Lean Fu, Xing Mei