Open-Source Models
Open-source large language models (LLMs) aim to democratize access to powerful AI by providing freely available model weights, code, and sometimes training data. Current research focuses on improving the performance and safety of these models: developing novel training techniques, exploring efficient model-compression methods such as pruning and merging, and establishing robust benchmarks for evaluating trustworthiness, bias, and safety. This open approach fosters collaboration, accelerates innovation, and mitigates the limitations of proprietary models, particularly around data privacy and accessibility for researchers and developers across fields.
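As a minimal illustration of the compression ideas mentioned above, the sketch below shows two common weight-level operations: magnitude pruning (zeroing the smallest-magnitude weights) and uniform weight merging (element-wise averaging of several models' tensors, as in "model soup" approaches). The function names and the use of raw NumPy arrays are illustrative assumptions, not an API from any specific library; real systems apply these per-layer across a full model.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so roughly `sparsity`
    fraction of the tensor becomes zero (illustrative sketch)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def merge_weights(models: list) -> np.ndarray:
    """Uniform merge: element-wise average of same-shaped weight tensors."""
    return np.mean(np.stack(models), axis=0)

# Example: prune a random 4x4 weight matrix to ~50% sparsity,
# then average two toy "models".
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, 0.5)
merged = merge_weights([np.ones((2, 2)), np.zeros((2, 2))])
```

In practice, pruning is usually followed by fine-tuning to recover accuracy, and merging is applied only to models fine-tuned from a shared initialization so that their weights live in a compatible loss basin.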