Self-Distillation
Self-distillation is a machine learning technique in which a model learns from its own predictions, improving accuracy and efficiency without requiring a separate, larger teacher model. Current research applies self-distillation across diverse tasks and architectures, including spiking neural networks, transformers, and deep models for image, point cloud, and natural-language processing. The approach is particularly valuable in resource-constrained settings: it enables model compression and improves performance when data or compute is limited, with applications in robotics, medical imaging, and natural language understanding.
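As a concrete illustration, the sketch below shows one common variant, born-again-style self-distillation, in PyTorch: after an initial supervised pass, a frozen snapshot of the model serves as its own teacher, supplying soft targets for a second round of training. The toy architecture, loss weighting `alpha`, and `temperature` are illustrative assumptions, not taken from any particular paper.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallClassifier(nn.Module):
    """Toy MLP standing in for any architecture."""
    def __init__(self, in_dim=32, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, num_classes)
        )

    def forward(self, x):
        return self.net(x)

def self_distillation_loss(student_logits, teacher_logits, labels,
                           alpha=0.5, temperature=2.0):
    """Blend hard-label cross-entropy with KL divergence to the
    model's own earlier (frozen) predictions, softened by a
    temperature. alpha and temperature are assumed hyperparameters."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2  # standard distillation temperature scaling
    return alpha * ce + (1 - alpha) * kl

model = SmallClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy data for illustration only.
x = torch.randn(128, 32)
y = torch.randint(0, 10, (128,))

# Generation 0: plain supervised training.
for step in range(100):
    loss = F.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Generation 1: a frozen snapshot of the model becomes its own teacher.
teacher = copy.deepcopy(model).eval()
for p in teacher.parameters():
    p.requires_grad_(False)

for step in range(100):
    with torch.no_grad():
        teacher_logits = teacher(x)
    loss = self_distillation_loss(model(x), teacher_logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the teacher is just a snapshot of the same network, no extra architecture or pretrained model is needed; variants differ mainly in where the soft targets come from (a previous generation, an earlier epoch, an exponential moving average, or deeper layers of the same network).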