Teacher Model
Teacher models are large, pre-trained models used in knowledge distillation to train smaller, more efficient student models while largely preserving their performance. Current research focuses on improving the accuracy and efficiency of this knowledge transfer, exploring techniques like data augmentation, loss-function design (e.g., MSE loss), and training frameworks such as multi-teacher and online distillation. This work is significant because it reduces the computational cost and resource demands of deploying large language and vision models, broadening accessibility and enabling applications in fields such as object detection, natural language processing, and ecological monitoring.
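To make the loss-function point above concrete, the sketch below shows a standard response-based distillation step in PyTorch: the student is trained to match the teacher's temperature-softened output distribution (a KL term, as in Hinton et al.) plus a cross-entropy term on the ground-truth labels; a commented line notes the MSE-on-logits alternative mentioned above. The model sizes, temperature, and weighting are illustrative assumptions, not taken from any of the papers listed below.

```python
# Minimal knowledge-distillation sketch (hypothetical models and hyperparameters).
import torch
import torch.nn as nn
import torch.nn.functional as F

temperature = 4.0  # softens teacher and student logits
alpha = 0.5        # weight between distillation loss and hard-label loss

# Stand-in teacher (larger) and student (smaller) classifiers over 10 classes.
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_step(x, y):
    """One training step: match softened teacher outputs and true labels."""
    with torch.no_grad():          # teacher is frozen
        teacher_logits = teacher(x)
    student_logits = student(x)

    # KL divergence between softened distributions, scaled by T^2.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # MSE alternative used in some works:
    # soft_loss = F.mse_loss(student_logits, teacher_logits)

    # Standard cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, y)

    loss = alpha * soft_loss + (1 - alpha) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random data.
x = torch.randn(16, 32)
y = torch.randint(0, 10, (16,))
print(distillation_step(x, y))
```

Multi-teacher and online distillation variants change where the soft targets come from (an ensemble of teachers, or peer students trained jointly), but the basic step has this same shape.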
Papers
Efficient Knowledge Distillation: Empowering Small Language Models with Teacher Model Insights
Mohamad Ballout, Ulf Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger
Exploring and Enhancing the Transfer of Distribution in Knowledge Distillation for Autoregressive Language Models
Jun Rao, Xuebo Liu, Zepeng Lin, Liang Ding, Jing Li, Dacheng Tao
VLM-KD: Knowledge Distillation from VLM for Long-Tail Visual Recognition
Zaiwei Zhang, Gregory P. Meyer, Zhichao Lu, Ashish Shrivastava, Avinash Ravichandran, Eric M. Wolff
MST-KD: Multiple Specialized Teachers Knowledge Distillation for Fair Face Recognition
Eduarda Caldeira, Jaime S. Cardoso, Ana F. Sequeira, Pedro C. Neto