Smaller Language Models
Smaller language models (SLMs) aim to match the performance of their larger counterparts while requiring far less compute and memory. Current research focuses on enhancing SLMs through techniques such as knowledge distillation from larger models, dataset augmentation using LLMs, and training methods like contrastive fine-tuning and instruction tuning. These advances are crucial for making strong natural language processing capabilities available to a wider range of applications and researchers, particularly in domains with constrained computational power or limited data.
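As a rough illustration of one of these techniques, the sketch below shows a standard knowledge-distillation loss in PyTorch: the student is trained against a temperature-softened copy of the teacher's output distribution, blended with the usual cross-entropy loss on ground-truth labels. The function name, `temperature`, and `alpha` values are illustrative assumptions, not taken from any particular paper discussed here.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend a soft KL-divergence loss against the teacher's distribution
    with the standard cross-entropy loss on the ground-truth labels."""
    # Soften both distributions with the temperature before comparing them.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence, scaled by T^2 as in the usual Hinton-style formulation.
    kd_loss = F.kl_div(soft_student, soft_targets,
                       reduction="batchmean") * temperature ** 2
    # Hard-label cross-entropy keeps the student anchored to the task.
    ce_loss = F.cross_entropy(student_logits, labels)
    return alpha * kd_loss + (1 - alpha) * ce_loss
```

In practice the teacher's logits are computed with gradients disabled (e.g. under `torch.no_grad()`), so only the student's parameters are updated by this loss.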