Self-Distillation
Self-distillation is a machine learning technique in which a model learns from its own predictions, improving performance and efficiency without requiring a separate teacher model. Current research applies self-distillation across diverse tasks and architectures, including spiking neural networks, transformers, and deep learning models for image, point cloud, and natural language processing. The approach is particularly valuable in resource-constrained settings: it enables model compression and improves performance when data or compute is limited, with applications in robotics, medical imaging, and natural language understanding.
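To make the idea concrete, below is a minimal PyTorch sketch of one common self-distillation variant: a frozen snapshot of the model supplies soft targets for its own training, blended with the ordinary hard-label loss. The architecture, temperature, mixing weight, and snapshot-refresh schedule are all illustrative assumptions, not values from any particular paper.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy classifier; stands in for any architecture.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

# Self-distillation: a frozen snapshot of the model itself provides
# the soft targets -- no separately trained teacher is involved.
snapshot = copy.deepcopy(model).eval()
for p in snapshot.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
T, alpha = 2.0, 0.5  # temperature and mixing weight (assumed values)

# Synthetic batch in place of a real dataset.
x = torch.randn(128, 32)
y = torch.randint(0, 10, (128,))

for step in range(100):
    logits = model(x)
    with torch.no_grad():
        soft_targets = F.softmax(snapshot(x) / T, dim=-1)

    # Hard-label cross-entropy plus KL divergence to the model's
    # own earlier predictions (scaled by T^2, as in standard
    # knowledge distillation).
    ce = F.cross_entropy(logits, y)
    kd = F.kl_div(F.log_softmax(logits / T, dim=-1),
                  soft_targets, reduction="batchmean") * T * T
    loss = alpha * ce + (1 - alpha) * kd

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Periodically refresh the snapshot so the soft targets track the
    # improving model (one common variant; schedules differ by method).
    if (step + 1) % 20 == 0:
        snapshot.load_state_dict(model.state_dict())
```

Other variants replace the periodic snapshot with an exponential moving average of the weights, or distill between a network's own deep and shallow layers; the common thread is that teacher and student share one model.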