Open-Source Models
Open-source large language models (LLMs) aim to democratize access to powerful AI by providing freely available model weights, code, and sometimes even training data. Current research focuses on improving the performance and safety of these models, including developing novel training techniques, exploring efficient model compression methods like pruning and merging, and establishing robust benchmarks for evaluating trustworthiness, bias, and safety. This open approach fosters collaboration, accelerates innovation, and addresses concerns about proprietary model limitations, particularly regarding data privacy and accessibility for researchers and developers in various fields.
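To make the compression idea concrete, below is a minimal sketch of unstructured magnitude pruning, one of the compression methods mentioned above: the smallest-magnitude fraction of a weight matrix is zeroed out. The function name `magnitude_prune` and the NumPy-array setting are illustrative assumptions, not an API from any particular library.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude `sparsity` fraction of the weights.

    Illustrative sketch of unstructured magnitude pruning; real LLM
    pruning pipelines typically operate layer-wise on model tensors.
    """
    if not 0.0 <= sparsity < 1.0:
        raise ValueError("sparsity must be in [0, 1)")
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Usage: prune half the entries of a random 4x4 weight matrix
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, 0.5)
```

After pruning, the surviving weights are unchanged while the zeroed entries can be stored or computed sparsely, which is where the memory and speed savings come from.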