Pre-Trained Models
Pre-trained models are foundational large-scale models trained on massive datasets and subsequently adapted to specific downstream tasks through full fine-tuning or parameter-efficient fine-tuning (PEFT). Current research focuses on making these adaptation methods more efficient and effective: exploring architectures such as Vision Transformers and diffusion models, and developing algorithms like LoRA and its nonlinear extensions that reduce compute and memory costs while preserving performance. This field underpins applications ranging from medical image analysis and environmental sound classification to autonomous driving and natural language processing, enabling high-performing models even with limited data and computational resources.
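To make the PEFT idea concrete, below is a minimal sketch of the LoRA mechanism referenced above: the pre-trained weights are frozen and only a low-rank update W + (alpha/r) * B A is trained. This assumes a PyTorch linear layer; the class name LoRALinear and the hyperparameters r and alpha are illustrative choices, not tied to any specific paper listed here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update (LoRA sketch)."""

    def __init__(self, base_linear: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pre-trained weights frozen
        in_f, out_f = base_linear.in_features, base_linear.out_features
        self.A = nn.Parameter(torch.randn(r, in_f) * 0.01)  # low-rank factor A
        self.B = nn.Parameter(torch.zeros(out_f, r))         # B starts at zero, so the update is initially a no-op
        self.scale = alpha / r

    def forward(self, x):
        # Output of the frozen layer plus the scaled low-rank correction x A^T B^T
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

# Usage: wrap a linear layer from a pre-trained model and train only A and B downstream.
layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))
print(out.shape)  # torch.Size([2, 768])
```

Because only A and B receive gradients, the number of trainable parameters drops from in_f * out_f to r * (in_f + out_f), which is the main source of LoRA's efficiency.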
Papers
Cross-video Identity Correlating for Person Re-identification Pre-training
Jialong Zuo, Ying Nie, Hanyu Zhou, Huaxin Zhang, Haoyu Wang, Tianyu Guo, Nong Sang, Changxin Gao
How Effective is Pre-training of Large Masked Autoencoders for Downstream Earth Observation Tasks?
Jose Sosa, Mohamed Aloulou, Danila Rukhovich, Rim Sleimi, Boonyarit Changaival, Anis Kacem, Djamila Aouada
Lessons Learned from a Unifying Empirical Study of Parameter-Efficient Transfer Learning (PETL) in Visual Recognition
Zheda Mai, Ping Zhang, Cheng-Hao Tu, Hong-You Chen, Li Zhang, Wei-Lun Chao
Fine-Tuning is Fine, if Calibrated
Zheda Mai, Arpita Chowdhury, Ping Zhang, Cheng-Hao Tu, Hong-You Chen, Vardaan Pahuja, Tanya Berger-Wolf, Song Gao, Charles Stewart, Yu Su, Wei-Lun Chao
Transfer Learning for Passive Sonar Classification using Pre-trained Audio and ImageNet Models
Amirmohammad Mohammadi, Tejashri Kelhe, Davelle Carreiro, Alexandra Van Dine, Joshua Peeples
Deep Learning and Machine Learning, Advancing Big Data Analytics and Management: Tensorflow Pretrained Models
Keyu Chen, Ziqian Bi, Qian Niu, Junyu Liu, Benji Peng, Sen Zhang, Ming Liu, Ming Li, Xuanhe Pan, Jiawei Xu, Jinlang Wang, Pohsun Feng
Data Diet: Can Trimming PET/CT Datasets Enhance Lesion Segmentation?
Alexander Jaus, Simon Reiß, Jens Kleesiek, Rainer Stiefelhagen
Enhancing Canine Musculoskeletal Diagnoses: Leveraging Synthetic Image Data for Pre-Training AI-Models on Visual Documentations
Martin Thißen, Thi Ngoc Diep Tran, Ben Joel Schönbein, Ute Trapp, Barbara Esteve Ratsch, Beate Egner, Romana Piat, Elke Hergenröther
Reimagining Linear Probing: Kolmogorov-Arnold Networks in Transfer Learning
Sheng Shen, Rabih Younes
Transfer Learning Applied to Computer Vision Problems: Survey on Current Progress, Limitations, and Opportunities
Aaryan Panda, Damodar Panigrahi, Shaswata Mitra, Sudip Mittal, Shahram Rahimi