Small LLM
Small Language Models (SLMs) are a burgeoning area of research focused on building language models that remain effective while requiring far less compute than their larger counterparts. Current work emphasizes improving their reasoning abilities through techniques such as self-play mutual reasoning, and boosting task-specific performance via methods such as logistic regression on frozen embeddings and parameter-efficient finetuning strategies like LoRA. This line of research matters because it addresses the accessibility, cost, and explainability limitations of large LLMs, potentially democratizing access to powerful language technologies and enabling deployment in resource-constrained environments.
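To make the LoRA idea concrete, the following is a minimal NumPy sketch of a low-rank adapter on a single linear layer; the dimensions, names, and scale are illustrative assumptions, not taken from any specific paper above. The frozen weight W is left untouched while only the small factors A and B would be trained.

```python
import numpy as np

# Hypothetical layer dimensions and adapter rank (illustrative only).
d_in, d_out, rank = 256, 256, 8

rng = np.random.default_rng(0)
W = rng.normal(size=(d_in, d_out))        # frozen pretrained weight
A = rng.normal(size=(d_in, rank)) * 0.01  # trainable low-rank factor
B = np.zeros((rank, d_out))               # zero-initialized so the adapter
                                          # starts as a no-op
scale = 1.0

def lora_forward(x):
    """Forward pass: frozen weight plus the low-rank update x @ A @ B."""
    return x @ W + (x @ A) @ B * scale

x = rng.normal(size=(4, d_in))
out = lora_forward(x)

# The adapter adds far fewer trainable parameters than full finetuning.
lora_params = A.size + B.size   # 2 * 256 * 8 = 4096
full_params = W.size            # 256 * 256 = 65536
```

Because only A and B are updated during finetuning, the trainable parameter count here drops from 65,536 to 4,096, which is the efficiency argument behind LoRA-style methods for small models.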
Papers
August 12, 2024
August 6, 2024
July 19, 2024
July 16, 2024
February 20, 2024
February 8, 2024
January 14, 2024
November 1, 2023
October 30, 2023
October 2, 2023
August 15, 2023