Small Model

Small models (SMs) in machine learning offer a counterpoint to the trend toward ever-larger language models (LLMs), prioritizing efficiency under tight resource constraints. Current research emphasizes techniques such as knowledge distillation, model compression, and data selection to improve SM performance, often using LLMs as teachers or as sources of augmented training data. Interest in SMs is driven by the need to deploy AI in resource-limited environments, such as on-device inference, and by the appeal of more interpretable and computationally affordable models, with implications for both research and practical applications across domains.
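As an illustration of one of these techniques, knowledge distillation trains the small "student" model to match the temperature-softened output distribution of a larger "teacher" model. The sketch below shows the core distillation loss in pure Python; function names and the temperature value are illustrative, not taken from any specific paper's implementation.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T yields a softer distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the
    student's, scaled by T^2 (the usual gradient-scale correction)."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return (temperature ** 2) * kl
```

In practice this term is combined with the ordinary cross-entropy loss on hard labels; the student minimizes a weighted sum of the two. A student whose logits match the teacher's exactly incurs zero distillation loss, while any mismatch yields a positive penalty.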

Papers