Cross-Task Knowledge Distillation

Cross-task knowledge distillation transfers knowledge learned by a model on one task to improve a model on a different but related task, even when the two tasks have different data distributions or label spaces. Current research explores prototype-based approaches that leverage shared feature representations, inverted projections that filter out task-specific noise, and frameworks for collaborative learning across multiple models. The approach is significant because it lets practitioners reuse pre-trained models for new tasks, reducing training-data requirements and computational cost while improving generalization across applications such as object detection, speech enhancement, and recommendation systems.
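
To make the general idea concrete, below is a minimal sketch of cross-task feature distillation in PyTorch. It assumes a frozen teacher pre-trained on a source task and a student learning a different target task; a learned projection maps student features into the teacher's feature space so that only shared structure is aligned. All names here (Projector, distillation_step, alpha, and the assumption that the student returns both features and logits) are illustrative, not taken from any specific paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Projector(nn.Module):
    """Maps student features into the teacher's feature space (hypothetical helper)."""

    def __init__(self, student_dim: int, teacher_dim: int):
        super().__init__()
        self.proj = nn.Linear(student_dim, teacher_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)


def distillation_step(student, teacher, projector, x, target, alpha=0.5):
    """One training step: target-task loss plus cross-task feature alignment.

    Assumes `teacher(x)` returns source-task features and `student(x)` returns
    a (features, logits) pair for the target task.
    """
    with torch.no_grad():
        t_feat = teacher(x)                      # frozen source-task features
    s_feat, logits = student(x)                  # target-task features and predictions

    task_loss = F.cross_entropy(logits, target)  # supervised loss on the new task
    align_loss = F.mse_loss(projector(s_feat), t_feat)  # transfer shared structure only
    return task_loss + alpha * align_loss
```

The weight `alpha` trades off fitting the new task against staying close to the teacher's representation; prototype-based or inverted-projection variants would replace the simple linear `Projector` with their own alignment mechanism.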

Papers