Open-Source Models
Open-source large language models (LLMs) aim to democratize access to powerful AI by providing freely available model weights, code, and sometimes even training data. Current research focuses on improving the performance and safety of these models: developing novel training techniques, exploring efficient compression methods such as pruning and merging, and establishing robust benchmarks for evaluating trustworthiness, bias, and safety. This open approach fosters collaboration, accelerates innovation, and mitigates the limitations of proprietary models, particularly around data privacy and accessibility for researchers and developers across fields.
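To make the compression methods mentioned above concrete, here is a minimal sketch of magnitude pruning, one common approach: the smallest-magnitude weights are zeroed out, leaving a sparse matrix. The function name and `sparsity` parameter are illustrative, not taken from any particular library.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries of a weight matrix.

    `sparsity` is the fraction of entries to remove (0.0 to 1.0).
    This is an illustrative sketch, not a production implementation.
    """
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold = magnitude of the k-th smallest |w|
    flat = np.abs(weights).ravel()
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
pruned = magnitude_prune(w, sparsity=0.5)
```

In practice, pruning is usually followed by fine-tuning to recover accuracy, and structured variants (removing whole neurons or attention heads) are preferred when hardware cannot exploit unstructured sparsity.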