Privileged Knowledge Distillation
Privileged knowledge distillation (PKD) leverages information that is available during training but absent at inference, such as an extra modality or ground-truth signal, to improve a "student" model that must make predictions without it. Current research focuses on refining PKD techniques across diverse applications, including image matting, semantic segmentation, and robotic control, often employing multi-teacher architectures or confidence-aware weighting of the distillation signal to enhance student accuracy and generalization. The approach offers significant potential for improving model efficiency and performance in scenarios where the richer signal is limited or expensive to collect, impacting fields ranging from computer vision and robotics to medical image analysis.
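To make the setup concrete, the sketch below shows one common PKD pattern in PyTorch: a teacher that receives both the standard input and a privileged feature during training, a student that sees only the standard input, and a distillation loss whose per-example weight is the teacher's softmax confidence. All module names, dimensions, and the specific confidence heuristic are illustrative assumptions, not the method of any particular paper.

```python
# Minimal PKD sketch (assumed setup, hypothetical names and shapes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Teacher(nn.Module):
    """Teacher sees the standard input plus a privileged feature (training only)."""
    def __init__(self, in_dim=32, priv_dim=8, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim + priv_dim, 64), nn.ReLU(),
                                 nn.Linear(64, num_classes))

    def forward(self, x, priv):
        return self.net(torch.cat([x, priv], dim=-1))

class Student(nn.Module):
    """Student sees only the standard input, as it will at inference time."""
    def __init__(self, in_dim=32, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, num_classes))

    def forward(self, x):
        return self.net(x)

def pkd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Task loss plus soft-label distillation, with the KD term weighted
    per example by the teacher's confidence (one possible weighting scheme)."""
    task = F.cross_entropy(student_logits, labels)
    # Per-example teacher confidence: maximum softmax probability.
    conf = teacher_logits.softmax(dim=-1).max(dim=-1).values.detach()
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits.detach() / T, dim=-1),
                  reduction="none").sum(dim=-1) * (T * T)
    kd = (conf * kd).mean()  # confidence-aware weighting of the distillation term
    return (1 - alpha) * task + alpha * kd

# Toy training step on random data (shapes are illustrative only).
teacher, student = Teacher(), Student()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
x = torch.randn(16, 32)        # standard input: available at train and test time
priv = torch.randn(16, 8)      # privileged input: available only during training
labels = torch.randint(0, 10, (16,))

with torch.no_grad():
    t_logits = teacher(x, priv)  # teacher assumed pre-trained with privileged access
s_logits = student(x)            # student never sees `priv`
loss = pkd_loss(s_logits, t_logits, labels)
opt.zero_grad(); loss.backward(); opt.step()
```

A multi-teacher variant would average (or confidence-weight) the soft targets of several teachers, each trained with a different privileged signal, before computing the same KD term.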