Transfer Learning
Transfer learning leverages knowledge gained from training a model on one task (the source) to improve its performance on a related but different task (the target), addressing data scarcity and reducing computational costs. Current research focuses on optimizing source data selection, employing various deep learning architectures like CNNs, LSTMs, and Transformers, and exploring techniques like data augmentation and hyperparameter optimization to enhance transferability across diverse domains. This approach significantly impacts various fields, from improving the accuracy and efficiency of medical image analysis and natural language processing to enabling more robust and adaptable AI systems in resource-constrained environments.
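To make the idea concrete, below is a minimal sketch of the most common transfer-learning recipe referenced above: reuse a CNN pretrained on a large source dataset (ImageNet) and fine-tune only a new classification head for a smaller target task. This is an illustrative example, not the method of any listed paper; it assumes PyTorch with torchvision >= 0.13, and the ResNet-18 backbone, 4-class target, and dummy batch are hypothetical choices.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a CNN pretrained on ImageNet (the "source" task).
# Requires torchvision >= 0.13 for the `weights` argument.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained feature extractor so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer to match the target task
# (e.g. a hypothetical 4-class leaf-disease dataset).
num_target_classes = 4
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# Optimize only the parameters of the new head.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a placeholder batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_target_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

Freezing the backbone keeps the computational cost low and works well when target data are scarce; with more target data, one would typically unfreeze some or all backbone layers and fine-tune them at a smaller learning rate.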
Papers
The PV-ALE Dataset: Enhancing Apple Leaf Disease Classification Through Transfer Learning with Convolutional Neural Networks
Joseph Damilola Akinyemi, Kolawole John Adebayo
Yoga Pose Classification Using Transfer Learning
M. M. Akash, Rahul Deb Mohalder, Md. Al Mamun Khan, Laboni Paul, Ferdous Bin Ali
Advancing Efficient Brain Tumor Multi-Class Classification -- New Insights from the Vision Mamba Model in Transfer Learning
Yinyi Lai, Anbo Cao, Yuan Gao, Jiaqi Shang, Zongyu Li, Jia Guo