Transfer Learning
Transfer learning leverages knowledge gained from training a model on one task (the source) to improve performance on a related but different task (the target), addressing data scarcity and reducing computational cost. Current research focuses on optimizing source-data selection, employing deep learning architectures such as CNNs, LSTMs, and Transformers, and exploring techniques like data augmentation and hyperparameter optimization to enhance transferability across diverse domains. The approach has broad practical impact, from improving the accuracy and efficiency of medical image analysis and natural language processing to enabling more robust, adaptable AI systems in resource-constrained environments. A minimal fine-tuning sketch illustrating the basic pattern follows below.
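
The sketch below shows the most common transfer-learning recipe referenced above: reuse a model pretrained on a source task, freeze its backbone, and train a new head on the target task. It is an illustrative example only, assuming PyTorch and torchvision are available; the ResNet-18 backbone, the 5-class target task, and all hyperparameters are placeholders, not taken from any of the listed papers.

```python
# Minimal transfer-learning sketch (assumes PyTorch + torchvision).
# Source task: ImageNet classification (pretrained weights).
# Target task: a hypothetical 5-class problem; only the new head is trained.
import torch
import torch.nn as nn
from torchvision import models

NUM_TARGET_CLASSES = 5  # placeholder for the target task's label count

# Load a backbone pretrained on the source task.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone so its weights are not updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the target task.
model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch (replace with a real target-domain loader).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_TARGET_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

In practice, whether to freeze the whole backbone, fine-tune only its later layers, or fine-tune everything at a reduced learning rate depends on how similar the source and target domains are and how much target data is available.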
Papers
Deep Transfer $Q$-Learning for Offline Non-Stationary Reinforcement Learning
Jinhang Chai, Elynn Chen, Jianqing Fan
Rapid Automated Mapping of Clouds on Titan With Instance Segmentation
Zachary Yahn, Douglas M Trent, Ethan Duncan, Benoît Seignovert, John Santerre, Conor Nixon
Comparison of Neural Models for X-ray Image Classification in COVID-19 Detection
Jimi Togni, Romis Attux
Improving Dialectal Slot and Intent Detection with Auxiliary Tasks: A Multi-Dialectal Bavarian Case Study
Xaver Maria Krückl, Verena Blaschke, Barbara Plank
SelectiveFinetuning: Enhancing Transfer Learning in Sleep Staging through Selective Domain Alignment
Siyuan Zhao, Chenyu Liu, Yi Ding, Xinliang Zhou
Transfer Learning for Deep-Unfolded Combinatorial Optimization Solver with Quantum Annealer
Ryo Hagiwara, Shunta Arai, Satoshi Takabe