Knowledge Distillation
Knowledge distillation is a machine learning technique that transfers knowledge from a large, complex "teacher" model to a smaller, more efficient "student" model, with the goal of retaining most of the teacher's performance at a fraction of the computational cost. Current research focuses on improving distillation methods across model architectures, including convolutional neural networks, transformers, and large language models, often combining them with techniques such as parameter-efficient fine-tuning, multi-task learning, and data augmentation to strengthen knowledge transfer. The approach is significant because it enables the deployment of high-performing models on resource-constrained devices and addresses challenges of model size, training time, and privacy in applications such as image captioning, speech processing, and medical diagnosis.
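To make the core idea concrete, below is a minimal sketch of a classic distillation objective (in the style of Hinton et al.'s soft-target distillation), assuming PyTorch; it is not taken from any of the papers listed here, and the function name and the hyperparameters `T` (temperature) and `alpha` (loss weighting) are illustrative choices.

```python
# Minimal knowledge-distillation loss sketch (PyTorch assumed; names are illustrative).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend cross-entropy on the hard labels with a KL term that pushes the
    student's softened output distribution toward the teacher's."""
    # Soft targets: temperature-scaled softmax of the teacher's logits.
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    # KL divergence between softened distributions; the T^2 factor keeps the
    # soft-target gradients comparable in scale to the hard-label gradients.
    soft_loss = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    # Standard supervised loss on the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```

In practice, the temperature controls how much of the teacher's "dark knowledge" (relative probabilities of incorrect classes) is exposed to the student, and the weighting between the soft and hard terms is tuned per task; many of the papers below replace or augment this basic loss, for example with reverse KL divergence or multi-task objectives.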
Papers
DL-KDD: Dual-Light Knowledge Distillation for Action Recognition in the Dark
Chi-Jui Chang, Oscar Tai-Yuan Chen, Vincent S. Tseng
RKLD: Reverse KL-Divergence-based Knowledge Distillation for Unlearning Personal Information in Large Language Models
Bichen Wang, Yuzhe Zi, Yixin Sun, Yanyan Zhao, Bing Qin
Scalable Detection of Salient Entities in News Articles
Eliyar Asgarieh, Kapil Thadani, Neil O'Hare
GKT: A Novel Guidance-Based Knowledge Transfer Framework For Efficient Cloud-edge Collaboration LLM Deployment
Yao Yao, Zuchao Li, Hai Zhao
Relation Modeling and Distillation for Learning with Noisy Labels
Xiaming Che, Junlin Zhang, Zhuang Qi, Xin Qi
Why Not Transform Chat Large Language Models to Non-English?
Xiang Geng, Ming Zhu, Jiahuan Li, Zhejian Lai, Wei Zou, Shuaijie She, Jiaxin Guo, Xiaofeng Zhao, Yinglu Li, Yuang Li, Chang Su, Yanqing Zhao, Xinglin Lyu, Min Zhang, Jiajun Chen, Hao Yang, Shujian Huang
Joint Optimization of Streaming and Non-Streaming Automatic Speech Recognition with Multi-Decoder and Knowledge Distillation
Muhammad Shakeel, Yui Sudo, Yifan Peng, Shinji Watanabe
Low-Resolution Chest X-ray Classification via Knowledge Distillation and Multi-task Learning
Yasmeena Akhter, Rishabh Ranjan, Richa Singh, Mayank Vatsa
Stereo-Knowledge Distillation from dpMV to Dual Pixels for Light Field Video Reconstruction
Aryan Garg, Raghav Mallampali, Akshat Joshi, Shrisudhan Govindarajan, Kaushik Mitra
Distill-then-prune: An Efficient Compression Framework for Real-time Stereo Matching Network on Edge Devices
Baiyu Pan, Jichao Jiao, Jianxing Pang, Jun Cheng