Task-Agnostic Distillation
Task-agnostic distillation aims to compress large, powerful machine learning models (such as transformers and convolutional neural networks) into smaller, more efficient students that retain most of the teacher's performance across a broad range of downstream tasks rather than a single target task. Current research focuses on adapting distillation techniques to different model architectures, including exploring various knowledge transfer objectives (e.g., hidden-state and attention-map matching) and addressing challenges such as the distribution mismatch between teacher and student models. This research is crucial for deploying advanced AI models on resource-constrained devices and for improving the accessibility and scalability of machine learning applications.
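As a rough illustration of the hidden-state and attention transfer objectives mentioned above, the following is a minimal sketch in the style of TinyBERT/MiniLM-type distillation. The function name, tensor shapes, and the optional projection layer are assumptions for illustration, not any specific paper's implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_hidden, teacher_hidden,
                      student_attn, teacher_attn,
                      proj=None):
    """Hypothetical hidden-state + attention transfer loss.

    student_hidden: [batch, seq_len, d_student]
    teacher_hidden: [batch, seq_len, d_teacher]
    student_attn / teacher_attn: [batch, heads, seq_len, seq_len]
        attention probabilities from matched layers.
    proj: optional nn.Linear mapping d_student -> d_teacher when widths differ.
    """
    if proj is not None:
        student_hidden = proj(student_hidden)
    # Match intermediate representations with mean-squared error.
    hidden_loss = F.mse_loss(student_hidden, teacher_hidden)
    # Match attention distributions, e.g. with KL divergence per position.
    attn_loss = F.kl_div(torch.log(student_attn + 1e-12), teacher_attn,
                         reduction="batchmean")
    return hidden_loss + attn_loss
```

In practice, the choice of which teacher layers to match against which student layers, and whether a learned projection is needed when the hidden sizes differ, varies across methods; the loss above would typically be summed over the selected layer pairs during pre-training on unlabeled text.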