Knowledge Distillation
Knowledge distillation is a machine learning technique that transfers knowledge from a large, complex "teacher" model to a smaller, more efficient "student" model, so that the student retains much of the teacher's performance at a fraction of the computational cost. Current research focuses on improving distillation methods across model architectures, including convolutional neural networks, transformers, and large language models, often incorporating techniques such as parameter-efficient fine-tuning, multi-task learning, and data augmentation to enhance knowledge transfer. The approach matters because it enables the deployment of high-performing models on resource-constrained devices and addresses challenges related to model size, training time, and privacy in applications ranging from image captioning and speech processing to medical diagnosis.
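For readers new to the technique, the sketch below illustrates the classic soft-target distillation objective (Hinton et al., 2015) in PyTorch: the student is trained on a blend of the usual cross-entropy loss and a KL-divergence term that matches its temperature-softened outputs to the teacher's. This is a minimal illustration of the general recipe, not the method of any paper listed below; the function name distillation_loss and the hyperparameters T and alpha are illustrative choices.

```python
# Minimal soft-target knowledge-distillation loss (Hinton-style sketch).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend a soft-target KL term (temperature T) with standard cross-entropy.

    alpha weights the distillation term; T > 1 softens both distributions so the
    student can learn the teacher's relative preferences over non-target classes.
    (T and alpha here are illustrative defaults, not values from the listed papers.)
    """
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    # Scale the KL term by T^2 to keep gradient magnitudes comparable across temperatures.
    kd_term = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

# Typical training step: the teacher runs in eval mode with gradients disabled.
# teacher.eval()
# with torch.no_grad():
#     teacher_logits = teacher(inputs)
# loss = distillation_loss(student(inputs), teacher_logits, labels)
# loss.backward()
```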
Papers
Self-Supervised Keypoint Detection with Distilled Depth Keypoint Representation
Aman Anand, Elyas Rashno, Amir Eskandari, Farhana Zulkernine
Enhance Reasoning by Learning from Mistakes: Peer-Review Knowledge Distillation from Multiple Large Language Models
Zhuochun Li, Yuelyu Ji, Rui Meng, Daqing He
DocKD: Knowledge Distillation from LLMs for Open-World Document Understanding Models
Sungnyun Kim, Haofu Liao, Srikar Appalaraju, Peng Tang, Zhuowen Tu, Ravi Kumar Satzoda, R. Manmatha, Vijay Mahadevan, Stefano Soatto
Enhancing Romanian Offensive Language Detection through Knowledge Distillation, Multi-Task Learning, and Data Augmentation
Vlad-Cristian Matei, Iulian-Marius Tăiatu, Răzvan-Alexandru Smădu, Dumitru-Clementin Cercel
Linear Projections of Teacher Embeddings for Few-Class Distillation
Noel Loo, Fotis Iliopoulos, Wei Hu, Erik Vee
HYDRA-FL: Hybrid Knowledge Distillation for Robust and Accurate Federated Learning
Momin Ahmad Khan, Yasra Chandio, Fatima Muhammad Anwar
Student-Oriented Teacher Knowledge Refinement for Knowledge Distillation
Chaomin Shen, Yaomin Huang, Haokun Zhu, Jinsong Fan, Guixu Zhang
Harmonizing knowledge Transfer in Neural Network with Unified Distillation
Yaomin Huang, Zaomin Yan, Chaomin Shen, Faming Fang, Guixu Zhang
Towards Diverse Device Heterogeneous Federated Learning via Task Arithmetic Knowledge Integration
Mahdi Morafah, Vyacheslav Kungurtsev, Hojin Chang, Chen Chen, Bill Lin
DSG-KD: Knowledge Distillation from Domain-Specific to General Language Models
Sangyeon Cho, Jangyeong Jeon, Dongjoon Lee, Changhee Lee, Junyeong Kim
Pre-trained Language Model and Knowledge Distillation for Lightweight Sequential Recommendation
Li Li, Mingyue Cheng, Zhiding Liu, Hao Zhang, Qi Liu, Enhong Chen