Small Model
Small models (SMs) in machine learning represent a counterpoint to the trend of ever-larger language models (LLMs), prioritizing efficiency under tight resource constraints. Current research emphasizes techniques like knowledge distillation, model compression, and data selection to improve SM performance, often using LLMs as teachers or as sources of augmented training data. The focus on SMs is driven by the need to deploy AI in resource-limited environments and by the desire for more interpretable, computationally affordable models, with implications for both research and practical applications across domains.
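The knowledge distillation mentioned above is commonly implemented as a temperature-scaled KL-divergence loss between teacher and student output distributions. Below is a minimal, framework-free sketch of that loss; the function names, logit values, and temperature are illustrative assumptions, not taken from any specific paper in this collection.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher T yields a "softer" distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # so gradient magnitudes stay comparable across temperatures.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

# A student that matches the teacher exactly incurs (near-)zero loss;
# a mismatched student incurs a positive loss.
teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, teacher))          # ~0.0
print(distillation_loss(teacher, [0.0, 2.0, 0.0]))  # > 0
```

In practice this term is mixed with the ordinary cross-entropy on ground-truth labels, and the student minimizes the weighted sum of the two.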