Knowledge Transfer
Knowledge transfer in machine learning focuses on efficiently leveraging knowledge learned by one task or model (the "teacher") to improve performance on a different task or model (the "student"). Current research emphasizes knowledge distillation, often through multi-teacher or student-oriented schemes, and explores methods for aligning and transferring knowledge across modalities (e.g., image and text) or across heterogeneous devices. The field is crucial for improving model efficiency, reducing training costs, and enabling adaptation to new domains under data scarcity, with applications ranging from medical image analysis to robotics and natural language processing.
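To make the teacher-student idea concrete, below is a minimal sketch of classic logit-matching knowledge distillation: a frozen teacher's temperature-softened output distribution is used as a soft target for a smaller student, blended with the usual hard-label loss. The toy models, temperature, and loss weighting here are illustrative assumptions and are not drawn from any of the papers listed below.

```python
# Minimal knowledge-distillation sketch (illustrative; models and hyperparameters are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.5):
    """Blend a soft-target KL term (teacher -> student) with the standard hard-label loss."""
    soft_targets = F.log_softmax(teacher_logits / temperature, dim=-1)
    soft_preds = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence on softened distributions, scaled by T^2 to keep gradient magnitudes comparable.
    kd = F.kl_div(soft_preds, soft_targets, log_target=True, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Toy teacher/student MLP classifiers; the teacher is assumed pre-trained and kept frozen.
teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10)).eval()
student = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(16, 32)            # dummy batch of features
y = torch.randint(0, 10, (16,))    # dummy labels
with torch.no_grad():
    t_logits = teacher(x)          # teacher provides soft targets, no gradient needed
loss = distillation_loss(student(x), t_logits, y)
loss.backward()
optimizer.step()
```

The multi-teacher and student-oriented variants referenced above modify how the soft targets are produced (e.g., aggregating several teachers or refining the teacher toward the student's capacity) rather than the basic loss structure sketched here.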
Papers
LLM-KT: A Versatile Framework for Knowledge Transfer from Large Language Models to Collaborative Filtering
Nikita Severin, Aleksei Ziablitsev, Yulia Savelyeva, Valeriy Tashchilin, Ivan Bulychev, Mikhail Yushkov, Artem Kushneruk, Amaliya Zaryvnykh, Dmitrii Kiselev, Andrey Savchenko, Ilya Makarov
MoD: A Distribution-Based Approach for Merging Large Language Models
Quy-Anh Dang, Chris Ngo
Student-Oriented Teacher Knowledge Refinement for Knowledge Distillation
Chaomin Shen, Yaomin Huang, Haokun Zhu, Jinsong Fan, Guixu Zhang
Harmonizing Knowledge Transfer in Neural Network with Unified Distillation
Yaomin Huang, Zaomin Yan, Chaomin Shen, Faming Fang, Guixu Zhang
Towards Diverse Device Heterogeneous Federated Learning via Task Arithmetic Knowledge Integration
Mahdi Morafah, Vyacheslav Kungurtsev, Hojin Chang, Chen Chen, Bill Lin