Knowledge Distillation
Knowledge distillation is a machine learning technique that transfers knowledge from a large, complex "teacher" model to a smaller, more efficient "student" model, so that the student approaches the teacher's performance at a fraction of the computational cost. Current research focuses on improving distillation methods across model architectures, including convolutional neural networks, transformers, and large language models, often incorporating techniques such as parameter-efficient fine-tuning, multi-task learning, and data augmentation to strengthen knowledge transfer. The approach matters because it enables high-performing models to be deployed on resource-constrained devices and addresses challenges of model size, training time, and privacy in applications ranging from image captioning and speech processing to medical diagnosis.
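To make the core idea concrete, the classic response-based formulation trains the student to match the teacher's temperature-softened output distribution while also fitting the ground-truth labels. The sketch below is a minimal illustration assuming PyTorch; the `teacher` and `student` modules and the hyperparameters `T` and `alpha` are illustrative placeholders, not drawn from any of the papers listed here.

```python
# Minimal sketch of the standard soft-target distillation loss, assuming PyTorch.
# `teacher` and `student` are hypothetical nn.Module classifiers returning raw logits.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend a soft-target KL term (knowledge transfer) with hard-label cross-entropy."""
    # Soften both distributions with temperature T; the KL term measures how far
    # the student's distribution is from the teacher's. The T**2 factor keeps
    # gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Standard supervised loss on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Usage inside a training step (teacher frozen, student being optimized):
# with torch.no_grad():
#     teacher_logits = teacher(x)
# loss = distillation_loss(student(x), teacher_logits, y)
# loss.backward()
```

Many of the papers below vary exactly these ingredients, e.g. what signal is transferred (logits, features, attention maps), across which modalities or tokenizers, and under what supervision regime.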
Papers
Efficient Fine-Tuning and Concept Suppression for Pruned Diffusion Models
Reza Shirkavand, Peiran Yu, Shangqian Gao, Gowthami Somepalli, Tom Goldstein, Heng Huang
Self-Evolution Knowledge Distillation for LLM-based Machine Translation
Yuncheng Song, Liang Ding, Changtong Zan, Shujian Huang
SCKD: Semi-Supervised Cross-Modality Knowledge Distillation for 4D Radar Object Detection
Ruoyu Xu, Zhiyu Xiang, Chenwei Zhang, Hanzhi Zhong, Xijun Zhao, Ruina Dang, Peng Xu, Tianyu Pu, Eryun Liu
Multi-Level Optimal Transport for Universal Cross-Tokenizer Knowledge Distillation on Language Models
Xiao Cui, Mo Zhu, Yulei Qin, Liang Xie, Wengang Zhou, Houqiang Li
Knowledge Distillation in RNN-Attention Models for Early Prediction of Student Performance
Sukrit Leelaluk, Cheng Tang, Valdemar Švábenský, Atsushi Shimada
Enhancing Knowledge Distillation for LLMs with Response-Priming Prompting
Vijay Goyal, Mustafa Khan, Aprameya Tirupati, Harveer Saini, Michael Lam, Kevin Zhu
On Explaining Knowledge Distillation: Measuring and Visualising the Knowledge Transfer Process
Gereziher Adhane, Mohammad Mahdi Dehshibi, Dennis Vetter, David Masip, Gemma Roig
Learnable Prompting SAM-induced Knowledge Distillation for Semi-supervised Medical Image Segmentation
Kaiwen Huang, Tao Zhou, Huazhu Fu, Yizhe Zhang, Yi Zhou, Chen Gong, Dong Liang
In-Context Learning Distillation for Efficient Few-Shot Fine-Tuning
Yifei Duan, Liu Li, Zirui Zhai, Jinxia Yao
Efficient Speech Command Recognition Leveraging Spiking Neural Network and Curriculum Learning-based Knowledge Distillation
Jiaqi Wang, Liutao Yu, Liwei Huang, Chenlin Zhou, Han Zhang, Zhenxi Song, Min Zhang, Zhengyu Ma, Zhiguo Zhang