Low-Temperature Distillation
Low-temperature distillation is a knowledge-distillation variant for model compression that transfers knowledge from a large, computationally expensive "teacher" model to a smaller, more efficient "student" model; the "temperature" refers to the softmax scaling factor that controls how sharp or soft the teacher's output distribution is during the transfer. Current research focuses on improving distillation methods across diverse architectures (including CNNs, Transformers, and GNNs) and on challenges such as mitigating backdoors inherited from teacher models, coping with data scarcity, and achieving robust performance across datasets and tasks. These advances matter for deploying complex models on resource-constrained devices and for improving the efficiency and scalability of machine learning applications.
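To make the teacher–student transfer concrete, the sketch below shows a standard temperature-scaled distillation loss in PyTorch. The function name and the `temperature` and `alpha` parameters are illustrative assumptions, not the formulation of any particular paper surveyed here; the point is only how the temperature reshapes the teacher's soft targets before the student is trained to match them.

```python
# Minimal sketch of a temperature-scaled distillation loss (PyTorch).
# Names and the alpha/temperature parameterization are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 1.0, alpha: float = 0.5):
    """Blend hard-label cross-entropy with a soft-label KL term.

    A lower temperature sharpens the teacher's softmax distribution, so the
    student is pushed mainly toward the teacher's top predictions; higher
    temperatures expose more of the teacher's secondary class probabilities.
    """
    # Soft targets from the teacher, reshaped by the temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)

    # KL divergence between the student and teacher distributions.
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    kd_term = F.kl_div(log_student, soft_targets,
                       reduction="batchmean") * temperature ** 2

    # Standard supervised loss on the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)

    return alpha * kd_term + (1.0 - alpha) * ce_term
```

In training, the teacher is run in inference mode to produce `teacher_logits` for each batch, and the student is optimized on this combined loss; `alpha` trades off imitation of the teacher against fitting the ground-truth labels.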