Mutual Distillation
Mutual distillation is a machine learning technique in which two or more models are trained jointly and teach each other, exchanging knowledge in both directions rather than through the one-way teacher-student setup of standard knowledge distillation, with the aim of improving model efficiency, generalization, and robustness. Current research applies it in diverse areas, including model compression (e.g., pruning and distilling large language and vision models), federated learning (addressing data heterogeneity and bias), and dataset condensation (creating smaller, representative datasets). The approach offers significant potential for reducing computational costs, enhancing generalization, and improving the privacy and security of machine learning systems across various applications.
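The following is a minimal sketch of the core idea in PyTorch, in the spirit of deep mutual learning rather than any specific paper listed below: two peer classifiers are trained on the same batches, and each one's loss combines cross-entropy on the labels with a KL term pulling it toward the other peer's softened, detached predictions. The function name, temperature, and loss weight are illustrative assumptions.

```python
# Minimal mutual-distillation training step (sketch, not a specific paper's method).
import torch
import torch.nn.functional as F

def mutual_distillation_step(model_a, model_b, opt_a, opt_b, x, y,
                             temperature=2.0, kd_weight=0.5):
    """One joint update: each peer learns from the labels and from the other peer."""
    logits_a = model_a(x)
    logits_b = model_b(x)

    # Softened log-probabilities serve as the mutual "teacher" signals.
    soft_a = F.log_softmax(logits_a / temperature, dim=1)
    soft_b = F.log_softmax(logits_b / temperature, dim=1)

    # Each peer: supervised loss + KL divergence toward the other's detached prediction.
    # The temperature**2 factor is the usual distillation gradient rescaling.
    loss_a = F.cross_entropy(logits_a, y) + kd_weight * (temperature ** 2) * \
        F.kl_div(soft_a, soft_b.detach(), reduction="batchmean", log_target=True)
    loss_b = F.cross_entropy(logits_b, y) + kd_weight * (temperature ** 2) * \
        F.kl_div(soft_b, soft_a.detach(), reduction="batchmean", log_target=True)

    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()
    return loss_a.item(), loss_b.item()
```

Because each peer's KL target is detached, gradients flow only into the model being updated, and the two updates can be applied independently within the same step; in compression settings the peers are often of different capacities, with the smaller one gaining most from the exchange.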
Papers
LiRCDepth: Lightweight Radar-Camera Depth Estimation via Knowledge Distillation and Uncertainty Guidance
Huawei Sun, Nastassia Vysotskaya, Tobias Sukianto, Hao Feng, Julius Ott, Xiangyuan Peng, Lorenzo Servadei, Robert Wille
DOLLAR: Few-Step Video Generation via Distillation and Latent Reward Optimization
Zihan Ding, Chi Jin, Difan Liu, Haitian Zheng, Krishna Kumar Singh, Qiang Zhang, Yan Kang, Zhe Lin, Yuchen Liu
Wearable Accelerometer Foundation Models for Health via Knowledge Distillation
Salar Abbaspourazad, Anshuman Mishra, Joseph Futoma, Andrew C. Miller, Ian Shapiro
ProFe: Communication-Efficient Decentralized Federated Learning via Distillation and Prototypes
Pedro Miguel Sánchez Sánchez, Enrique Tomás Martínez Beltrán, Miguel Fernández Llamas, Gérôme Bovet, Gregorio Martínez Pérez, Alberto Huertas Celdrán