Knowledge Transfer
Knowledge transfer in machine learning aims to leverage knowledge learned by one task or model (the "teacher") to improve performance on another task or model (the "student"). Current research emphasizes knowledge distillation, often via multi-mentor or student-oriented approaches, and explores methods for aligning and transferring knowledge across modalities (e.g., image and text) or heterogeneous devices. The field is central to improving model efficiency, reducing training costs, and enabling adaptation to new domains and data-scarce settings, with applications ranging from medical image analysis to robotics and natural language processing.
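The teacher-student setup described above is most commonly instantiated as knowledge distillation, where the student is trained to match the teacher's softened output distribution alongside the ground-truth labels. The sketch below illustrates that classic objective; it assumes PyTorch, and the hyperparameter values (`temperature`, `alpha`) are illustrative defaults rather than settings taken from any of the papers listed here.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend a soft distillation term with the usual hard-label loss.

    `temperature` and `alpha` are illustrative hyperparameters, not
    values drawn from the papers listed below.
    """
    # Soften both distributions; KL divergence transfers the teacher's
    # relative class probabilities ("dark knowledge") to the student.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_loss = F.kl_div(log_student, soft_targets,
                         reduction="batchmean") * temperature ** 2
    # Standard cross-entropy against the ground-truth class labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```

The temperature-squared factor keeps the gradient magnitude of the soft term comparable to the hard term as the temperature changes; `alpha` trades off how much the student imitates the teacher versus fitting the labels directly.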
Papers
On Explaining Knowledge Distillation: Measuring and Visualising the Knowledge Transfer Process
Gereziher Adhane, Mohammad Mahdi Dehshibi, Dennis Vetter, David Masip, Gemma Roig
Understanding and Analyzing Model Robustness and Knowledge-Transfer in Multilingual Neural Machine Translation using TX-Ray
Vageesh Saxena, Sharid Loáiciga, Nils Rethmeier