Low-Temperature Distillation
Low-temperature distillation is a variant of knowledge distillation, a model compression technique that transfers knowledge from a large, computationally expensive "teacher" model to a smaller, more efficient "student" model. The "temperature" is the scaling factor applied to the logits before the softmax: dividing by a temperature below one sharpens the teacher's output distribution, whereas the classic setup uses a temperature above one to soften it. Current research focuses on improving distillation methods across diverse architectures (including CNNs, Transformers, and GNNs) and on challenges such as mitigating backdoors inherited from teacher models, coping with data scarcity, and achieving robust performance across datasets and tasks. These advances matter for deploying complex models on resource-constrained devices and for improving the efficiency and scalability of machine learning applications.
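To make the temperature's role concrete, below is a minimal PyTorch sketch of a temperature-scaled distillation loss. It is an illustrative example, not the method of any specific paper: the function name, the alpha weighting between soft and hard targets, and the default temperature of 0.5 are all assumptions chosen for the sketch.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=0.5, alpha=0.5):
    """Weighted sum of a soft-target KL term and the usual cross-entropy.

    A temperature below 1 sharpens the teacher distribution (the
    "low-temperature" regime); the classic Hinton et al. recipe instead
    uses a temperature above 1 to soften it.
    """
    # Scale both sets of logits by the same temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)

    # KL divergence between the teacher's and student's tempered outputs,
    # rescaled by T^2 so gradient magnitudes stay comparable across T.
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * (temperature ** 2)

    # Hard-label cross-entropy on the ground-truth classes.
    ce_term = F.cross_entropy(student_logits, labels)

    return alpha * kd_term + (1.0 - alpha) * ce_term
```

In a training loop, the teacher's logits would be computed under torch.no_grad() and only the student's parameters updated; the relative weight alpha and the temperature are hyperparameters tuned per task.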