Mutual Distillation
Mutual distillation is a machine learning technique for improving model efficiency and robustness by transferring knowledge bidirectionally between multiple models trained in tandem, rather than one way from a fixed teacher to a student. Current research emphasizes its application in diverse areas, including model compression (e.g., pruning and knowledge distillation of large language and vision models), federated learning (addressing data heterogeneity and bias), and dataset condensation (creating smaller, representative datasets). The approach offers significant potential for reducing computational costs, improving model generalization, and strengthening the privacy and security of machine learning systems across a range of applications.
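To make the core mechanism concrete, below is a minimal sketch of one training step of two-model mutual distillation in the spirit of deep mutual learning, written in PyTorch. Each model minimizes its own supervised loss plus a KL-divergence term pulling its softened predictions toward its peer's. The function name, temperature, and equal loss weighting are illustrative assumptions, not details taken from any specific paper summarized here.

```python
import torch
import torch.nn.functional as F

def mutual_distillation_step(model_a, model_b, opt_a, opt_b, x, y, T=2.0):
    """One illustrative training step of two-model mutual distillation.

    Each model is trained on the ground-truth labels while also matching
    the other model's temperature-softened predictions, so knowledge
    flows in both directions instead of teacher -> student only.
    """
    logits_a = model_a(x)
    logits_b = model_b(x)

    # Softened log-probabilities for each model's own output.
    log_p_a = F.log_softmax(logits_a / T, dim=1)
    log_p_b = F.log_softmax(logits_b / T, dim=1)

    # Peer targets are detached: each model treats the other's
    # predictions as fixed targets for this step.
    q_a = F.softmax(logits_a.detach() / T, dim=1)
    q_b = F.softmax(logits_b.detach() / T, dim=1)

    # Supervised cross-entropy plus KL toward the peer; the T^2 factor
    # is the standard gradient rescaling used with softened targets.
    loss_a = F.cross_entropy(logits_a, y) \
        + (T * T) * F.kl_div(log_p_a, q_b, reduction="batchmean")
    loss_b = F.cross_entropy(logits_b, y) \
        + (T * T) * F.kl_div(log_p_b, q_a, reduction="batchmean")

    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()
    return loss_a.item(), loss_b.item()
```

In practice the two models can differ in architecture or size (e.g., a compact student paired with a larger network), which is what makes the technique useful for model compression: the smaller model benefits from the peer's richer predictive distribution while both continue to learn from the labels.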