Mutual Distillation
Mutual distillation is a machine learning technique in which two or more models are trained jointly and transfer knowledge to one another, typically by matching each other's softened output distributions, with the aim of improving efficiency and robustness. Current research applies it in diverse areas, including model compression (e.g., pruning and knowledge distillation of large language and vision models), federated learning (mitigating data heterogeneity and client bias), and dataset condensation (building smaller yet representative training sets). The approach can substantially reduce computational cost, improve generalization, and strengthen the privacy and security of machine learning systems across a range of applications.
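To make the mechanism concrete, here is a minimal sketch of one mutual-distillation training step in the style of deep mutual learning, assuming two classifiers trained on the same labeled batches. The function name, temperature T, and mixing weight alpha are illustrative assumptions, not drawn from any particular paper listed on this page.

```python
# Minimal sketch of a mutual-distillation (deep mutual learning) step.
# Each model learns from the ground-truth labels (cross-entropy) and from
# the other model's softened predictions (KL divergence). T and alpha are
# illustrative hyperparameters.
import torch
import torch.nn.functional as F

def mutual_distillation_step(model_a, model_b, opt_a, opt_b, x, y,
                             T=2.0, alpha=0.5):
    logits_a, logits_b = model_a(x), model_b(x)

    # Softened distributions at temperature T for the distillation terms.
    log_p_a = F.log_softmax(logits_a / T, dim=1)
    log_p_b = F.log_softmax(logits_b / T, dim=1)
    # Detach the peer's distribution: each model treats the other as a
    # fixed teacher for this step, so gradients do not cross models.
    p_a = log_p_a.exp().detach()
    p_b = log_p_b.exp().detach()

    # Each loss mixes supervised cross-entropy with a KL term toward the
    # peer; the T*T factor keeps gradient magnitudes comparable across T.
    loss_a = (1 - alpha) * F.cross_entropy(logits_a, y) \
             + alpha * (T * T) * F.kl_div(log_p_a, p_b, reduction="batchmean")
    loss_b = (1 - alpha) * F.cross_entropy(logits_b, y) \
             + alpha * (T * T) * F.kl_div(log_p_b, p_a, reduction="batchmean")

    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()
    return loss_a.item(), loss_b.item()
```

Unlike classical teacher-student distillation, both networks are updated every step, which is why the peer's predictions are detached: each model chases a moving but momentarily fixed target.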