Open Source Model
Open-source large language models (LLMs) aim to democratize access to powerful AI by providing freely available model weights, code, and sometimes even training data. Current research focuses on improving the performance and safety of these models, including developing novel training techniques, exploring efficient model compression methods such as pruning and merging, and establishing robust benchmarks for evaluating trustworthiness, bias, and safety. This open approach fosters collaboration, accelerates innovation, and addresses limitations of proprietary models, particularly around data privacy and accessibility for researchers and developers across fields.
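To make the compression methods mentioned above concrete, here is a minimal sketch of two of them: unstructured magnitude pruning (zeroing out the smallest-magnitude weights) and simple weight averaging as a basic form of model merging. The function names and the flat-list representation of weights are illustrative assumptions, not any particular library's API; real implementations operate on full tensors.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest magnitude.

    `weights` is a flat list of floats standing in for one layer's parameters
    (an illustrative simplification; real pruning works on tensors).
    """
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    # The k-th smallest absolute value becomes the pruning threshold.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [w if abs(w) > threshold else 0.0 for w in weights]

def average_merge(models):
    """Merge models by element-wise averaging of their weights
    (the simplest merging scheme; assumes identical architectures)."""
    return [sum(ws) / len(ws) for ws in zip(*models)]

# Usage sketch:
layer = [0.1, -2.0, 0.5, 3.0]
pruned = magnitude_prune(layer, 0.5)   # -> [0.0, -2.0, 0.0, 3.0]
merged = average_merge([[1.0, 2.0], [3.0, 4.0]])  # -> [2.0, 3.0]
```

More sophisticated merging methods weight or align parameters before combining them, but element-wise averaging is the baseline they build on.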
Papers
- November 15, 2023
- November 6, 2023
- October 30, 2023
- October 24, 2023
- October 11, 2023
- October 8, 2023
- October 7, 2023
- September 28, 2023
- August 25, 2023
- August 19, 2023
- August 18, 2023
- May 24, 2023
- May 15, 2023
- April 14, 2023
- March 10, 2022
- February 26, 2022