Visual Adaptation
Visual adaptation focuses on efficiently adapting large pre-trained vision models to new downstream tasks while minimizing computational cost and preserving performance. Current research emphasizes parameter-efficient fine-tuning (PEFT), which updates only a small subset of model parameters using techniques such as adapters, prompt tuning, and low-rank attention mechanisms. These methods are crucial for deploying large models on resource-constrained devices and for efficient transfer learning across diverse visual tasks; the resulting gains in accuracy and efficiency carry over to applications ranging from medical image analysis to autonomous driving and robotics.
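To make the low-rank idea concrete, below is a minimal PyTorch sketch of a LoRA-style adapter: the pre-trained weights are frozen and only a small low-rank correction is trained. The class name `LoRALinear` and the hyperparameters (`r`, `alpha`) are illustrative choices, not taken from either paper listed here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update.

    The effective weight is W + (alpha / r) * B @ A, where only the
    low-rank factors A and B receive gradients. Illustrative sketch.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights

        # A projects down to rank r; B projects back up.
        # B starts at zero so the adapted layer initially matches the base.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the learned low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


# Usage: wrap one projection of a pre-trained model. Only the low-rank
# factors (roughly 2% of this layer's parameters) are trainable.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # trainable: 12288 / 602880
```

In practice such adapters are applied to selected projections (e.g., attention query/value matrices), so the trainable fraction of the full model is far smaller still; this is what lets PEFT methods fit on resource-constrained devices.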
Papers
Human Observation-Inspired Trajectory Prediction for Autonomous Driving in Mixed-Autonomy Traffic Environments
Haicheng Liao, Shangqian Liu, Yongkang Li, Zhenning Li, Chengyue Wang, Yunjian Li, Shengbo Eben Li, Chengzhong Xu
Low-rank Attention Side-Tuning for Parameter-Efficient Fine-Tuning
Ningyuan Tang, Minghao Fu, Ke Zhu, Jianxin Wu