One-Stage Self-Distillation
One-stage self-distillation is a machine learning technique that aims to improve model efficiency and performance by training a single model to mimic its own predictions, whether taken from different depths, training stages, or data subsets, within one training run rather than relying on a separately pre-trained teacher. Current research focuses on refining distillation methods, particularly on mitigating poisoning attacks in federated learning settings and on improving the accuracy and efficiency of models for tasks such as video processing and object detection. The approach holds significant promise for accelerating training, enhancing model robustness, and enabling efficient deployment of complex models in resource-constrained environments.
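To make the idea concrete, below is a minimal sketch (not taken from any of the listed papers) of one common one-stage self-distillation setup in PyTorch: a single network attaches an auxiliary classifier to an intermediate layer and distills the softened predictions of its own final classifier into that auxiliary head during the same training pass. The names `SelfDistillNet`, `self_distillation_loss`, the temperature `T`, and the weight `alpha` are illustrative assumptions, not terms from the source.

```python
# Sketch of one-stage self-distillation: the network is its own teacher,
# so no separate pre-trained teacher model or second training stage is needed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfDistillNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.block1 = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        # Auxiliary (shallow) classifier attached to intermediate features.
        self.aux_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))
        # Final (deep) classifier acts as the in-training teacher.
        self.final_head = nn.Sequential(nn.Flatten(), nn.Linear(64, num_classes))

    def forward(self, x):
        feat1 = self.block1(x)
        feat2 = self.block2(feat1)
        return self.aux_head(feat1), self.final_head(feat2)

def self_distillation_loss(aux_logits, final_logits, targets,
                           T: float = 3.0, alpha: float = 0.5):
    """Cross-entropy on both heads plus a KL term that distills the
    final head's softened predictions into the auxiliary head."""
    ce = (F.cross_entropy(final_logits, targets)
          + F.cross_entropy(aux_logits, targets))
    # Detach the teacher logits so the distillation gradient only
    # updates the shallower (student) part of the network.
    kl = F.kl_div(F.log_softmax(aux_logits / T, dim=1),
                  F.softmax(final_logits.detach() / T, dim=1),
                  reduction="batchmean") * (T * T)
    return (1 - alpha) * ce + alpha * kl

# Toy usage with random data standing in for a real dataset.
model = SelfDistillNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
images, labels = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
aux_logits, final_logits = model(images)
loss = self_distillation_loss(aux_logits, final_logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The key design choice is that the teacher signal (`final_logits.detach()`) is produced by the same network in the same forward pass, which is what distinguishes one-stage self-distillation from classic two-stage teacher-student distillation.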