Privileged Knowledge Distillation

Privileged knowledge distillation (PKD) leverages information that is available during training but absent at inference, typically by letting a "teacher" model consume the privileged signals and training a "student" model, which sees only the standard inputs, to mimic the teacher's outputs. Current research focuses on refining PKD techniques across diverse applications, including image matting, semantic segmentation, and robotic control, often employing multi-teacher architectures or confidence-aware weighting to improve the student's accuracy and generalization. The approach offers significant potential for improving model efficiency and performance in scenarios where data is limited or expensive to acquire at test time, with impact ranging from computer vision and robotics to medical image analysis.

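To make the training/inference asymmetry concrete, below is a minimal sketch of one common PKD setup, assuming a PyTorch image-classification task in which a depth map serves as the privileged signal. The module names, architectures, and hyperparameters are illustrative placeholders rather than the method of any particular paper: a frozen teacher sees RGB plus the privileged channel, and the student, which sees only RGB, is trained with a task loss blended with a soft-label distillation term.

```python
# Hypothetical privileged knowledge distillation sketch (not from a specific paper):
# the teacher consumes image + privileged channel during training only;
# the student consumes the image alone and is the model that gets deployed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Teacher(nn.Module):
    """Sees RGB plus one privileged channel; used only at training time."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # 3 RGB + 1 privileged channel
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, image, privileged):
        return self.net(torch.cat([image, privileged], dim=1))

class Student(nn.Module):
    """Sees only the standard input; this is the deployed model."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, image):
        return self.net(image)

def pkd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Hard-label task loss blended with temperature-scaled distillation."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1 - alpha) * soft

# Toy training step on random data.
teacher, student = Teacher(), Student()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

image = torch.randn(8, 3, 32, 32)        # standard input (available at test time)
privileged = torch.randn(8, 1, 32, 32)   # privileged input (training only)
labels = torch.randint(0, 10, (8,))

with torch.no_grad():                    # teacher assumed pre-trained and frozen
    teacher_logits = teacher(image, privileged)

loss = pkd_loss(student(image), teacher_logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Multi-teacher or confidence-aware variants mentioned above differ mainly in how the distillation term is formed, e.g. averaging several teachers' soft labels or down-weighting the soft term for samples where the teacher is uncertain; the student-side training loop stays the same.
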
Papers