Compact Language Model
Compact language models (CLMs) are smaller, more efficient counterparts to large language models (LLMs) that aim to retain comparable performance. Current research focuses on techniques such as pruning existing LLMs, distilling knowledge from one or more larger teacher models, and training on carefully curated datasets to improve CLM capabilities on tasks such as named entity recognition and argument classification; a minimal distillation sketch follows below. This research matters because CLMs reduce computational cost and ease deployment, making advanced language processing accessible to a wider range of applications and to researchers with limited resources.
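To make the knowledge-distillation technique mentioned above concrete, the sketch below shows the standard distillation loss from Hinton et al. (2015): a weighted mix of a soft-target KL-divergence term (matching the student's output distribution to the teacher's, both softened by a temperature) and ordinary hard-label cross-entropy. This is a generic PyTorch illustration, not the method of any specific paper surveyed here; the function name and the `temperature` and `alpha` hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Illustrative knowledge-distillation loss (not from a specific
    paper on this page): alpha weights the soft-target term against
    the hard-label cross-entropy term."""
    # Soften both distributions with the temperature so the student
    # can learn from the teacher's full output distribution, not just
    # its argmax prediction.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean")
    # Rescale by T^2 so gradient magnitudes stay comparable across
    # temperature settings, as in the original formulation.
    kd = kd * temperature ** 2
    # Ordinary supervised loss on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```

In a typical setup, the teacher runs in inference mode (`torch.no_grad()`) to produce `teacher_logits`, and only the student's parameters are updated with this loss; a larger `temperature` exposes more of the teacher's inter-class similarity structure to the student.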