Pre-Trained Models
Pre-trained models are large-scale foundation models trained on massive datasets and subsequently adapted to specific downstream tasks through full fine-tuning or parameter-efficient fine-tuning (PEFT). Current research emphasizes making these adaptation methods more efficient and effective, exploring architectures such as Vision Transformers and diffusion models, and developing algorithms like LoRA and its nonlinear extensions that minimize resource consumption while preserving performance. This line of work is crucial for applications ranging from medical image analysis and environmental sound classification to autonomous driving and natural language processing, because it enables high-performing models to be built with limited data and compute.
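Since the summary centers on LoRA-style parameter-efficient fine-tuning, a minimal sketch may help make the idea concrete: the pre-trained weights stay frozen and only a low-rank correction is trained. The code below assumes PyTorch, uses hypothetical layer sizes, and is not drawn from any of the listed papers.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pre-trained linear layer with a trainable low-rank update.

    The effective weight becomes W + (alpha / r) * B @ A, where only A and B
    are trained, so the number of trainable parameters scales with the rank r
    rather than with the full weight matrix.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pre-trained weights frozen

        in_f, out_f = base.in_features, base.out_features
        self.lora_A = nn.Parameter(torch.randn(r, in_f) * 0.01)  # small random init
        self.lora_B = nn.Parameter(torch.zeros(out_f, r))        # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pre-trained path plus the scaled low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


# Example: adapt one projection layer of a (hypothetical) pre-trained model.
layer = nn.Linear(768, 768)      # stands in for a pre-trained weight matrix
adapted = LoRALinear(layer, r=8)
x = torch.randn(4, 768)
print(adapted(x).shape)          # torch.Size([4, 768])
```

Because `lora_B` starts at zero, the adapted layer initially reproduces the pre-trained behavior exactly; training then moves only the roughly `2 * r * 768` adapter parameters instead of the full weight matrix.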
Papers
Adaptive Rank, Reduced Forgetting: Knowledge Retention in Continual Learning Vision-Language Models with Dynamic Rank-Selective LoRA
Haodong Lu, Chongyang Zhao, Jason Xue, Lina Yao, Kristen Moore, Dong Gong
Learning on Less: Constraining Pre-trained Model Learning for Generalizable Diffusion-Generated Image Detection
Yingjian Chen, Lei Zhang, Yakun Niu, Lei Tan, Pei Chen
Pre-training for Action Recognition with Automatically Generated Fractal Datasets
Davyd Svyezhentsev, George Retsinas, Petros Maragos
New Test-Time Scenario for Biosignal: Concept and Its Approach
Yong-Yeon Jo, Byeong Tak Lee, Beom Joon Kim, Jeong-Ho Hong, Hak Seung Lee, Joon-myoung Kwon
Not All Adapters Matter: Selective Adapter Freezing for Memory-Efficient Fine-Tuning of Language Models
Hyegang Son, Yonglak Son, Changhoon Kim, Young Geun Kim
Integrating Dual Prototypes for Task-Wise Adaption in Pre-Trained Model-Based Class-Incremental Learning
Zhiming Xu, Suorong Yang, Baile Xu, Jian Zhao, Furao Shen