Small Model
Small models (SMs) in machine learning are a counterpoint to the trend toward ever-larger language models (LLMs), prioritizing efficiency under resource constraints. Current research emphasizes techniques such as knowledge distillation, model compression, and data selection to improve SM performance, often using LLMs as teachers or as sources of augmented training data. Interest in SMs is driven by the need to deploy AI in resource-limited environments and by the desire for more interpretable, computationally affordable models, with implications for both research and practical applications across domains.
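To make the knowledge-distillation idea mentioned above concrete, here is a minimal sketch of a distillation objective in which a small student model is trained to match both the ground-truth labels and the softened output distribution of a larger teacher. The function name, the temperature, and the mixing weight alpha are illustrative assumptions, not taken from any specific paper in this collection.

```python
# Minimal knowledge-distillation sketch (PyTorch). All names and
# hyperparameters here are illustrative, not from a specific paper.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    """Blend hard-label cross-entropy with soft-label KL divergence."""
    # Soft targets: teacher distribution at an elevated temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL term is scaled by T^2 to keep gradient magnitudes comparable
    # across different temperature settings.
    kd = F.kl_div(soft_student, soft_targets,
                  reduction="batchmean") * temperature ** 2
    # Standard supervised loss on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Example usage with random tensors standing in for real model outputs.
if __name__ == "__main__":
    student_logits = torch.randn(8, 10, requires_grad=True)
    teacher_logits = torch.randn(8, 10)
    labels = torch.randint(0, 10, (8,))
    loss = distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()
    print(f"distillation loss: {loss.item():.4f}")
```

In practice the teacher's logits would come from a frozen LLM and the student from the small model being trained; the same blended-loss pattern also appears in compression pipelines that combine pruning or quantization with a distillation term.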