Mutual Distillation
Mutual distillation is a machine learning technique for improving model efficiency and robustness by transferring knowledge bidirectionally between two or more models trained in tandem, in contrast to classical knowledge distillation's one-way transfer from a fixed teacher to a student. Current research emphasizes its application in diverse areas, including model compression (e.g., pruning and knowledge distillation of large language and vision models), federated learning (addressing data heterogeneity and bias), and dataset condensation (creating smaller, representative datasets). The approach offers significant potential for reducing computational costs, improving generalization, and strengthening the privacy and security of machine learning systems across a range of applications.
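As a concrete illustration, the sketch below shows one way mutual distillation can look in PyTorch, loosely following the deep mutual learning formulation: two peer classifiers train on the same batch, each minimizing its own cross-entropy loss plus a KL term that pulls it toward the other's softened predictions. The model architectures, temperature, and learning rate here are hypothetical placeholders for illustration, not taken from any particular paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Two peer models (hypothetical small MLPs, stand-ins for real networks).
model_a = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
model_b = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt_a = torch.optim.SGD(model_a.parameters(), lr=0.1)
opt_b = torch.optim.SGD(model_b.parameters(), lr=0.1)

T = 2.0  # softening temperature (assumed hyperparameter)

def mutual_step(x, y):
    logits_a, logits_b = model_a(x), model_b(x)
    # Each model minimizes supervised loss plus KL toward its peer's
    # (detached) softened predictions -- the "mutual" part of the technique.
    kl_a = F.kl_div(F.log_softmax(logits_a / T, dim=1),
                    F.softmax(logits_b.detach() / T, dim=1),
                    reduction="batchmean") * T * T
    kl_b = F.kl_div(F.log_softmax(logits_b / T, dim=1),
                    F.softmax(logits_a.detach() / T, dim=1),
                    reduction="batchmean") * T * T
    loss_a = F.cross_entropy(logits_a, y) + kl_a
    loss_b = F.cross_entropy(logits_b, y) + kl_b
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()

# One toy update on random data.
x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
mutual_step(x, y)
```

Because each peer's KL target is detached, the two losses back-propagate through independent graphs, so the models can be updated in the same step without interfering with one another.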