Knowledge Distillation
Knowledge distillation is a machine learning technique that transfers knowledge from a large, complex "teacher" model to a smaller, more efficient "student" model, with the goal of retaining much of the teacher's accuracy while substantially reducing computational cost. Current research focuses on improving distillation methods across model architectures, including convolutional neural networks, transformers, and large language models, often combining them with parameter-efficient fine-tuning, multi-task learning, and data augmentation to strengthen knowledge transfer. The approach is significant because it enables high-performing models to run on resource-constrained devices and addresses challenges related to model size, training time, and privacy in applications such as image captioning, speech processing, and medical diagnosis.
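To make the teacher-student transfer concrete, the sketch below shows the classic distillation objective (Hinton et al., 2015): the student is trained on a blend of a soft-target loss against the teacher's temperature-scaled outputs and the usual hard-label cross-entropy. This is a minimal illustration assuming PyTorch and a generic classification setup; the function name, `temperature`, and `alpha` are illustrative choices, not taken from any of the papers listed below.

```python
# Minimal distillation-loss sketch (assumes PyTorch; names are illustrative).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend soft-target KL loss with standard hard-label cross-entropy."""
    # Soften both output distributions with the same temperature.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)

    # KL divergence between teacher and student soft targets,
    # scaled by T^2 so gradient magnitudes stay comparable across temperatures.
    kd_loss = F.kl_div(soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2

    # Standard supervised loss on the ground-truth labels.
    ce_loss = F.cross_entropy(student_logits, labels)

    # alpha trades off imitation of the teacher against fitting the labels.
    return alpha * kd_loss + (1.0 - alpha) * ce_loss
```

In practice the teacher is run in inference mode (no gradients) to produce `teacher_logits`, and only the student's parameters are updated; many of the papers below replace or augment this logit-matching term with feature-, distribution-, or task-specific variants.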
Papers
Enhancing Romanian Offensive Language Detection through Knowledge Distillation, Multi-Task Learning, and Data Augmentation
Vlad-Cristian Matei, Iulian-Marius Tăiatu, Răzvan-Alexandru Smădu, Dumitru-Clementin Cercel
Linear Projections of Teacher Embeddings for Few-Class Distillation
Noel Loo, Fotis Iliopoulos, Wei Hu, Erik Vee
HYDRA-FL: Hybrid Knowledge Distillation for Robust and Accurate Federated Learning
Momin Ahmad Khan, Yasra Chandio, Fatima Muhammad Anwar
Student-Oriented Teacher Knowledge Refinement for Knowledge Distillation
Chaomin Shen, Yaomin Huang, Haokun Zhu, Jinsong Fan, Guixu Zhang
Harmonizing knowledge Transfer in Neural Network with Unified Distillation
Yaomin Huang, Zaomin Yan, Chaomin Shen, Faming Fang, Guixu Zhang
Towards Diverse Device Heterogeneous Federated Learning via Task Arithmetic Knowledge Integration
Mahdi Morafah, Vyacheslav Kungurtsev, Hojin Chang, Chen Chen, Bill Lin
DSG-KD: Knowledge Distillation from Domain-Specific to General Language Models
Sangyeon Cho, Jangyeong Jeon, Dongjoon Lee, Changhee Lee, Junyeong Kim
Pre-trained Language Model and Knowledge Distillation for Lightweight Sequential Recommendation
Li Li, Mingyue Cheng, Zhiding Liu, Hao Zhang, Qi Liu, Enhong Chen
Fast Streaming Transducer ASR Prototyping via Knowledge Distillation with Whisper
Iuliia Thorbecke, Juan Zuluaga-Gomez, Esaú Villatoro-Tello, Shashi Kumar, Pradeep Rangappa, Sergio Burdisso, Petr Motlicek, Karthik Pandia, Aravind Ganapathiraju
Neural-Symbolic Collaborative Distillation: Advancing Small Language Models for Complex Reasoning Tasks
Huanxuan Liao, Shizhu He, Yao Xu, Yuanzhe Zhang, Kang Liu, Jun Zhao
Efficient Knowledge Distillation: Empowering Small Language Models with Teacher Model Insights
Mohamad Ballout, Ulf Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger
Enhancing Knowledge Distillation of Large Language Models through Efficient Multi-Modal Distribution Alignment
Tianyu Peng, Jiajun Zhang
Exploring and Enhancing the Transfer of Distribution in Knowledge Distillation for Autoregressive Language Models
Jun Rao, Xuebo Liu, Zepeng Lin, Liang Ding, Jing Li, Dacheng Tao
LLMR: Knowledge Distillation with a Large Language Model-Induced Reward
Dongheng Li, Yongchang Hao, Lili Mou
Bayesian-Optimized One-Step Diffusion Model with Knowledge Distillation for Real-Time 3D Human Motion Prediction
Sibo Tian, Minghui Zheng, Xiao Liang